There’s been a lot of talk over the last week about “updating threat models” in light of the Tesla insider story. (For example.) I’m getting this question a fair bit, and so wanted to talk about insiders in particular, and how to use the news in threat modeling more generally. This is also a great opportunity to think about incentives.
So first: the story is that a Russian gang approached a Tesla employee and offered $1 million to plant malware. So: should you update your threat models?
The first question to ask is “do your threat models already include insiders?” They should. Many people don’t like to talk about insiders. They don’t want to think that Bob is going to turn against them, and that’s natural. But “insiders” can be framed as a focus on the attacker who’s used a phishing link to steal credentials, or an attachment to run code inside your soft, gooey interior. If Bob can go wild inside your systems, Yuri can use Bob’s account in the same ways. …
Mark Rasch, who created the Computer Crime Unit at the United States Department of Justice, has an essay, “Conceal and Fail to Report — The Uber CSO Indictment.”
The case is causing great consternation in the InfoSec community partly because it is the first instance in which a CSO or CISO has been personally held responsible (other than by firing) for a data breach response, and the first time that criminal sanctions of any kind have been sought against the corporate victim of a data breach for handling (mishandling) the data breach itself.
Mark spends a lot of energy explaining the law of the case and some of the subtleties, for example: “It’s also clear that Uber and Sullivan did not want the FTC to know about the 2017 breach. But I’m not sure that, as a matter of law, this constitutes “misrepresenting, concealing or falsifying” materials actually produced to the FTC.” As someone who does expert witness work now and again, I’ve learned to recognize skilled analysis, and this is skilled analysis, the kind you’d want on your side, especially if you’re one of those feeling that consternation. …
I’m happy to announce Shostack & Associates’ new, first, corporate white paper! It uses Jenga to explain why threat modeling efforts fail so often.
I’m excited for a lot of reasons. I care about learning from failure. I love games as teaching tools. But really, I’m excited because the paper has helped the people who read early copies.
It’s also exciting because as it turns out, the Jenga metaphor is way bigger than threat modeling. I’m talking about threat modeling because people tell me that’s what they want to hear about, but really, threat modeling requires culture change. …
I want to call out some impressive aspects of a report by Proofpoint: TA410: The Group Behind LookBack Attacks Against U.S. Utilities Sector Returns with New Malware.
There are many praiseworthy aspects of this report, starting with the remarkable lack of hyperbole, and the focus on facts rather than opinions. The extraordinary lack of adjectives is particularly refreshing, as is the presence of explanations for the conclusions drawn. (“This conclusion is based on the threat actor’s use of shared attachment macros, malware installation techniques, and overlapping delivery infrastructure.”)
But most important to me is the clear and detailed exposition of how the attack itself worked. Proofpoint shared both sample emails, showing the human-level hooks, and the way the attacks worked (“Microsoft Word documents with malicious macros…the FlowCloud macro used privacy enhanced mail (“.pem”) files which were subsequently renamed to the text file “pense1.txt”. This file is next saved as a portable executable file named “gup.exe” and executed using a version of the certutil.exe …
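The certutil.exe step quoted above relies on the fact that `certutil -decode` strips PEM-style armor and base64-decodes the body, which is why a benign-looking “.pem” text file can become a portable executable. A minimal, benign Python sketch of that decode step (the filenames and payload here are illustrative, not taken from the actor’s tooling):

```python
import base64
import re

def decode_pem_payload(pem_text: str) -> bytes:
    """Strip PEM armor lines and base64-decode the body,
    roughly what `certutil -decode` does on Windows."""
    body = re.sub(r"-----(BEGIN|END)[^-]*-----", "", pem_text)
    return base64.b64decode("".join(body.split()))

# Benign round-trip demonstration: armor some bytes, then decode them.
payload = b"not actually an executable"
armored = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.b64encode(payload).decode("ascii")
    + "\n-----END CERTIFICATE-----\n"
)
assert decode_pem_payload(armored) == payload
```

Defensively, this is why monitoring for certutil.exe invoked with `-decode` on non-certificate files is a common detection heuristic.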
“As security professionals, have we ever sat down and truly made an effort to empirically determine what controls are actually effective in our environment and what controls do very little to protect our environment or, worse yet, actually work to undermine our security.”
That’s from The Need for Evidence Based Security, by Chris Frenz, which is worth reading.
His focus on moving from compliance with untested standards to demonstrating effectiveness is very welcome, and I appreciate the tie to evidence based medicine for his audience.
Go have a look.
I’ve spoken for over a decade against “think like an attacker” and the trap of starting to threat model with a list of attackers. And for my threat modeling book, I cataloged every serious grouping of attackers that I was able to find. And as I was reading “12 Ingenious iOS Screen Time Hacks,” I realized what they’re all missing: kids.
I’ll be speaking at OWASP Portland (Oregon) Oct 9.
Wow. Blackhat, Defcon, I didn’t make any of the other conferences going on in Vegas. And coming back it seems like there’s a sea of things to follow up on. A little bit of organization is helping me manage better this year, and so I thought I’d share what’s in my post-conference toolbox. I’m also sharing because I don’t think my workflow is optimal, and would love to learn how you’re working through this in 2019, with its profusion of ways to stay in touch.
I’ve added a new first step relative to last year, which is to write a trip report, for myself. It captures who I talked to, impressions, followup, and value of the event.
Read the rest at:
The paper focuses on the value of the IMPACT data sharing platform at DHS, and how the availability of data shapes the research that’s done.
On its way to that valuation, a very useful contribution of the paper is its analysis of the types of research data that exist, and the purposes for which they can be used…