Communicating risk across complex teams
Using threat modeling techniques for organizational risk planning.
Threat modeling is a process for decomposing something complex, then articulating and organizing its threats. We can use these lessons to unify an organization around explicit risks.
I have recently focused on the output of threat modeling as intentionally phrased “scenarios”, which often look like intellectual prompts or rhetorical statements, nearly indistinguishable from tabletop exercises.
A powerful and portable language of risk results from threat scenarios.
With a relatively small bundle of these statements, you can address the risks of a complex organization by zooming deep into a system or zooming out towards leadership with varying scenarios.
Let’s discuss the skills from threat modeling that are useful for organizational planning.
Become strict with the language of threat scenarios.
Threat scenarios are intellectual prompts that should make you panic a bit. We will use them as building blocks that represent concerns.
A well written scenario should make you feel like you’ve failed, and you should feel directly responsible for the failure. It takes effort, though, to scope a scenario so it comfortably addresses the areas of risk you care about.
A guideline, but not a hard rule, is:
A THREAT has taken an ACTION and will cause an IMPACT.
A physical security example would be “An individual with a weapon has bypassed our reception desk.”
An application security example would be: “An IDOR is exploited on our platform that reveals customer owned data.”
These scenarios are better when they’re “after the fact” or “in progress”, but they can also be “in preparation”. Example: “An adversary is hunting for targets on LinkedIn to prepare an attack.”
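If you end up collecting many of these, it can help to keep the THREAT / ACTION / IMPACT structure explicit. A minimal sketch in Python (the class and field names are my own, not from any particular tool):

```python
from dataclasses import dataclass


@dataclass
class ThreatScenario:
    """A THREAT has taken an ACTION and will cause an IMPACT."""
    threat: str   # e.g. "An individual with a weapon"
    action: str   # e.g. "has bypassed our reception desk"
    impact: str   # e.g. "physical harm to employees"

    def __str__(self) -> str:
        # Render the scenario in the guideline's sentence form.
        return f"{self.threat} {self.action} and will cause {self.impact}."
```

Rendering `ThreatScenario("An individual with a weapon", "has bypassed our reception desk", "physical harm to employees")` as a string produces a sentence in the guideline’s form.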
Get a rough scope together for the risks you want to address.
You should settle on a scope for your threat model. Maybe you’ll focus on competitive risks to the business. Or disaster recovery. Or remote technical threats. Physical threats. Insiders. Media crisis. Nuclear war! Up to you.
You’ll need this scope to decompose the complexity that falls within it. You’ll need to develop an understanding of the system you’re concerned about, or recruit the know-it-alls who can. Your approach to decomposition can vary wildly. Here are some examples.
- An organization can be understood with mission statements, leadership interviews, and an enumeration of policies and processes.
- A complex codebase might require a review of source code or architecture review. Lots of whiteboarding.
- Competitive threats might require as much adversarial intelligence as can be made available to you.
This will get you to a place where you can start proposing and collecting threat scenarios, which will be our building blocks of risk. At an organizational level, I do this through interviewing. I do my best to nurture a brainstorming process and tease out scenarios.
- If the subject is thinking in terms of impact: “We lose the data”, then I ask them to fictionalize how it could be lost, and what recovery looks like.
- If the subject is thinking in terms of adversary: “If the hacker had access”, then I ask them to fictionalize what they could do.
- If the subject is thinking in terms of vulnerability: “The thing hasn’t been fixed yet”, then I ask them to fictionalize who would abuse it, and how, to what impact.
I strongly suggest showing scenarios from other interviews to new subjects, gathering different perspectives and expertise along the way. Subjects will disagree on damage potential, or modify scenarios to become scarier or more explicit.
It’s also useful to discuss headlines from peer incidents and really dive into what the tangible impacts would be. This is where tabletop exercises are disguised as simple conversations.
For instance, if I talk to a cryptocurrency organization, I often cite root causes in the Blockchain Graveyard. This draws a participant into a discussion about how they’d fare from similar, underlying root causes.
A technical scope will likely require a thorough decomposition of its components or a more thorough interview process.
Scope your scenarios into parents and children.
You should have a small number of high level scenarios that represent categorical risks, followed by specific scenarios that represent more focused concerns and account for more of your actual efforts. These are sometimes called “threat trees” when laid out in a hierarchy of scenarios.
Here is what some high level scenarios for executive protection could look like:
- An adversary with a weapon comes within lethal proximity of the VIP.
- An adversary has sabotaged the area a VIP is entering.
- An adversary is able to eavesdrop on the VIP’s conversations.
Underneath the “eavesdropping” scenario, you may have specific risks that are high likelihood, have happened before, or are current priorities for the team. For example:
- 3.1: A journalist sneaks into tomorrow’s customer meeting and records the VIP’s conversations.
- 3.2: A conference attendee shoulder surfs the VIP while they use their laptop.
- 3.3: A hotel has allowed the VIP’s room to be bugged before the VIP’s arrival.
These specific scenarios should represent the common problems an executive protection team may have to frequently deal with. But, they do not exclude the parent scenarios. Should a new threat occur, it should fall nicely into a parent scenario. The responsibilities driven by the parent scenario should not change, but the child scenarios may fluctuate based on current priority, new threats, or completed mitigations.
Child scenarios should change often with situational issues, but better planning should ensure there is a parent scenario that was able to predict their existence.
In an ideal world, your parent scenarios will map as closely as possible to the future incidents you’ll need to prevent or respond to.
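The parent/child structure above is easy to represent as a small tree. A sketch, using the executive protection scenarios (the numbering is generated, so child scenarios can fluctuate without renumbering by hand):

```python
from dataclasses import dataclass, field


@dataclass
class ThreatNode:
    """One scenario in a threat tree; children are more specific scenarios."""
    scenario: str
    children: list["ThreatNode"] = field(default_factory=list)

    def outline(self, prefix: str = "") -> list[str]:
        """Flatten the tree into numbered lines like '3.1: ...'."""
        lines = [f"{prefix}: {self.scenario}" if prefix else self.scenario]
        for i, child in enumerate(self.children, start=1):
            lines.extend(child.outline(f"{prefix}.{i}" if prefix else str(i)))
        return lines


root = ThreatNode("Executive protection risks", [
    ThreatNode("An adversary with a weapon comes within lethal proximity of the VIP."),
    ThreatNode("An adversary has sabotaged the area a VIP is entering."),
    ThreatNode("An adversary is able to eavesdrop on the VIP's conversations.", [
        ThreatNode("A journalist sneaks into tomorrow's customer meeting."),
        ThreatNode("A conference attendee shoulder surfs the VIP's laptop."),
        ThreatNode("A hotel has allowed the VIP's room to be bugged."),
    ]),
])
print("\n".join(root.outline()))
```

Dropping or adding a child, or promoting one to a parent, only changes the generated outline, which mirrors how the tree should flex with current priorities.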
Here is an example interview process with a resulting scenario.
A memorable conversation of mine involved the warehouse a startup online retailer depended on. One interview subject offered “The warehouse goes offline” as a disaster scenario costing six figures of loss per hour.
Another interview subject, when confronted with this same scenario, mentioned “this actually happens fairly often. I would say ‘The warehouse goes offline for three days’ would not be recoverable.” This surprised everyone.
We landed on “The warehouse goes offline for more than two days” as a core scenario. Incident response and escalation procedures were roadmapped for the next quarter to reduce this possibility by immediately shifting overtime to other warehouses.
In some ways, preventative measures relaxed, but in others, they toughened up. Short term outages were no big deal; multi-day outages became the focus instead.
We decided to leave any terms of adversary or vulnerability out of the scenario, because they didn’t matter. Any means that resulted in that length of impact would matter, and ultimately some mitigations should be agnostic to adversary or threat.
Removing the specifics accounted for the most “unknown” with resulting mitigations.
There were some specific situations that would have exacerbated the process of bringing the warehouse back to full capacity. Those threats and adversaries were defined in scenarios, but an agnostic focus on the warehouse outage was never out of scope.
Make your parent scenarios well scoped to reduce complexity.
You may come across long lists of specific threats, which you’ll become more skilled at abstracting into groups. Here are some I come across frequently.
- An attacker causes our web application to misbehave and expose a customer’s data.
- An attacker moves laterally from our corporate IT environment towards production systems.
Underneath these scenarios are potentially hundreds of thousands of more specific scenarios. Every type of known vulnerability class and exploit technique in an application or infrastructure could balloon the number of scenarios: thousands of attack pathways and varying threat actors for each.
At higher levels, you will likely not need to enumerate every single technique. You can find broad strokes that cover them quickly, instead. For instance:
- An attacker exploits a well documented vulnerability and harms our users.
- An attacker takes advantage of a known lateral movement technique.
Then, what is left over may be areas of above-normal focus, due to active incidents or a larger potential for loss. You may want to call out specific areas in those constellations in particular. For instance, a social network might hold a special prioritization around XSS.
- An attacker discovers a cross site scripting vulnerability and creates a self-propagating worm resulting in outage and customer data destruction.
This ability to abstract large swaths of threats will help avoid infinitely long risk documentation. With the above three scenarios, you can scope a traditional application security team that is focused on eliminating known, obvious vulnerabilities, in addition to very specific risk around cross site scripting.
Allowing for properly abstracted threats will also help avoid huge amounts of documentation from the teams you are requesting scenarios from. No one wants to enumerate every aspect of every scenario they consider, which is a common failure a large “matrix” approach can contribute to.
Identify shared risks across complex organizations.
After a while, you might start thinking of this hierarchy in a visual way.
An interesting characteristic here is how a legal team or crisis communications team can offer assistance at higher levels. They will eventually require specifics about most scenarios, but they mostly care that an adversary got access to customer data, and can likely plan for specific follow up items regardless of the particulars of an underlying scenario.
A more specific example: A lawyer might not be able to prevent any specific type of intrusion, but they can certainly assist with any intrusion. They’ll easily pick out their areas of concern in varying areas of depth of a threat tree.
If done well, they’ll point to specific depth in a threat tree and say “This is approximately where I’d want to be involved”.
This attitude captures unknown risk: they can be helpful across various specific breach scenarios that weren’t predictable, but that ultimately result in a predictable overall scenario where some broad mitigations will exist.
Identify limitations to your scenarios.
Everyone has limits to the risks their own capabilities are able to address.
- The VIP overdoses on cocaine.
- The VIP’s fast food diet has resulted in a heart attack.
- The VIP has intentionally given confidential information to competition.
An executive protection team might not consider these to be in their responsibility. But what if their mission statement is “Keep the VIP safe by any means necessary”?
In that situation, we have found a mismatch.
Do you also have similar scenarios that represent where a jurisdiction ends?
This can help clarify a team’s mission by making it more inclusive, or eliminate conflicts with other teams by precisely articulating risk.
A well built threat tree has a variety of interesting benefits.
A team can build consensus on which scenarios require the most time investment. This can be as simple as a vote that relies on the law of averages. It may act as influence for overall resources, headcount, or funding.
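As a sketch of how simple that vote can be (the scenarios and scores here are entirely hypothetical):

```python
from statistics import mean

# Hypothetical votes: each expert scores how much attention (0-10)
# a scenario deserves over the next quarter.
votes = {
    "Warehouse goes offline for more than two days": [9, 8, 10, 7],
    "Attacker moves laterally toward production": [6, 7, 5, 8],
    "XSS worm destroys customer data": [4, 6, 5, 3],
}

# Law of averages: rank scenarios by their mean score across experts.
ranked = sorted(votes, key=lambda s: mean(votes[s]), reverse=True)
for scenario in ranked:
    print(f"{mean(votes[scenario]):4.1f}  {scenario}")
```

The mean smooths out individual bias, and the resulting ranking gives you an explicit, defensible ordering to bring to resourcing conversations.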
You can build cross functional OKRs around scenarios, or guide audits and penetration tests with these scenarios explicitly defined. You now have an explicit tool to criticize efforts that are low leverage and guided by personal interests instead of risk. You also gain a shortcut to communicate a technical task to people who are unfamiliar with your specific subject matter.
For example, you can show how a project in security awareness has mitigating factors across many risk areas if you can map it to the many scenarios it may mitigate, upward towards a risk that your leadership cares deeply about preventing. This method can properly communicate the leverage it would have, assuming good intentions and a good proposal.
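A sketch of that mapping, with hypothetical scenarios: a proposed awareness project is mapped to the child scenarios it mitigates, then rolled upward to the parent risks leadership tracks.

```python
# Hypothetical child-to-parent scenario mapping from a threat tree.
parent_of = {
    "Phishing email harvests an employee's SSO credentials":
        "An attacker gains access to customer data",
    "Employee plugs in a found USB drive":
        "An attacker gains access to customer data",
    "Attacker tailgates into the office":
        "An adversary reaches a sensitive physical space",
}

# Child scenarios a proposed security awareness project would mitigate.
project_mitigates = [
    "Phishing email harvests an employee's SSO credentials",
    "Employee plugs in a found USB drive",
]

# Roll the project's impact upward to the parent risks leadership owns.
parents_touched = {parent_of[s] for s in project_mitigates}
print(parents_touched)
```

Even a mapping this small lets you state a project’s leverage in terms of the risks leadership already recognizes, rather than in the project’s own technical vocabulary.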
For any area of risk where there is uncertainty or inability to find consensus, you can use the scenario for its original purpose… a tabletop exercise. This helps flesh out the realities involved with a certain risk with the perspectives of your subject matter experts.
A focus on scenarios creates opportunities in decision science.
This method of organization is used elsewhere. Periodic trends were discovered by organizing the elements in an intentional way that lent itself to prediction; then we discovered new elements we had already predicted.
We can organize a threat model in a way that predicts and mitigates unknown risk. Mitigation of the unknown is probably one of the trickiest areas to prioritize security around, and this approach to threat modeling helps define fences around areas of uncertainty.
The science around expert forecasting is being spoken about more frequently in security. By making “scenarios” forefront in our threat model, we become more compatible with well established decision science and forecasting topics.
Organizing a team around explicit threat scenarios allows for scientific prioritization of organizational mitigations, as opposed to comparing vague industry areas like “Application Security” and “Physical Security” and trying to determine which is more important based on a finger in the air.
With a clear and common risk language among leadership and teams, we can better define the risks we work on and direct complex work across organizations towards shared mitigations.
Ryan McGeehan writes about security on Medium.