Threat Modeling: Did we do a good job?

A deeper dive into the last question in threat modeling

Jamie Dicken
Jul 25, 2022

Threat modeling is the practice of analyzing a system, app, feature, process, or other “thing” to identify potential security threats and make decisions about how to address them. It’s a discipline I’ve really embraced in the past 18 months, and now I assist Adam Shostack in teaching his threat modeling courses at Shostack + Associates, Inc. (Yes, it’s awesome!)

In Adam’s Threat Modeling Intensive course, we threat model systems using Shostack’s 4 Question Frame for Threat Modeling. Those questions are:

  1. What are we working on?
  2. What can go wrong?
  3. What are we going to do about it?
  4. Did we do a good job?

In the most recent course, the full-class discussion made me think deeper about the last question. In the busyness of daily operations, it would be easy to short-change that question and declare yourself done with a threat modeling exercise once you identified some threats and made recommendations. It’s akin to skipping an incident post-mortem or a sprint retrospective: easy to do, tempting in the moment, but detrimental in the long run.

Sources of Feedback

There are several opportunities for feedback that could help answer whether or not we did a good job.

You could simply step back, review the threat model, and decide if you gave enough consideration to each piece of the design. For those using a STRIDE-by-element approach, if you covered each element, you may conclude you did a good job. This feedback loop is short; it can be answered within minutes of finalizing your answers to the first three questions.

On the other extreme, you could monitor your system in production and analyze the security incidents opened (or lack thereof, maybe indicating you did a good job). There’s certainly some value in this approach: successful threats against your applications can inform future threat modeling efforts. That said, this can make for a long feedback loop, measured in either months or years.

Neither method is perfect (no method is), and either leaves room for false assumptions to arise. However, either approach is certainly better than nothing.

Make it Real

But to me, the answer to whether we did a good job harks back to the reason we threat model in the first place: to make our systems more secure and resilient. It’s not trivial to understand what we’re working on, what could go wrong, and what we’re going to do about it. However, our efforts are wasted if all that work remains purely theoretical and mitigations go unimplemented.

Therefore, a more useful method of determining whether we did a good job is to evaluate whether our security recommendations became reality.

Anyone who’s been in systems engineering knows that most projects don’t go according to plan. Scope gets cut as timelines compress. Requirements change or are miscommunicated. Engineers underestimate work. New bugs emerge and must be addressed. Unexpected dependencies arise. Mistakes are made.

While security engineers can’t control all of those things, there are at least two key things we can do to improve the chances that our efforts directly translate into better security outcomes.

Make sure outputs are documented where engineers source their work

Typically in threat modeling, I see folks create a list or table of potential threats and risk treatment decisions. Those types of views work well to organize the threat modeling effort, but we must recognize that engineers ultimately don’t source their work from those documents. If engineers at your company organize their work in Jira, you need to make sure someone takes responsibility for entering the threat modeling recommendations in Jira too. Otherwise, the agreements you made will get lost.

You don’t necessarily have to stress about the details, like whether a mitigation warrants its own epic or task or can be added as a requirement to an existing story. As long as it’s documented where the engineers source their work and project managers track scope, you stand a better chance of it getting implemented than you did with your table alone.

Align on the expected results

How you communicate security requirements can also make the difference between the threat being addressed or not.

For example, say you identify a SQL injection attack as a possible threat and suggest “using secure coding best practices” as a mitigation. I’m willing to bet that the engineering team won’t see that as a helpful recommendation. Instead, they may say using best practices is simply something they do every day and therefore unworthy of any additional tracking. No one enters any details in the ticketing system, no one writes the code to prevent the attack, no one thinks to test the undocumented requirement, and your system remains vulnerable. If this happens, the harsh reality is your threat identification and planning efforts were in vain.

Therefore, it’s important to articulate requirements clearly and align on the definition of done. In this case, perhaps you suggest parameterizing the query in question or implementing both client- and server-side input validation to allow only alpha-numeric characters. Whatever you agree is best, specificity will prevent you from making false assumptions about the work the engineering team plans to execute.
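To make the parameterization suggestion concrete, here is a minimal Python sketch using the standard library’s sqlite3 module. The table, column, and function names are hypothetical; the point is the contrast between concatenating untrusted input into a query string and passing it as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Vulnerable: attacker-controlled input is concatenated into the SQL string,
# so input can change the structure of the query itself.
def find_user_unsafe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

# Mitigated: the query is parameterized, so input is treated strictly as
# data, never as SQL.
def find_user_safe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the classic payload matches every row
print(find_user_safe(payload))    # the same payload matches nothing
```

A requirement phrased as “parameterize the user-lookup query” maps directly onto a change like the one above, which makes it easy to ticket, implement, and verify.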

And of course, no documented requirement is any good without corresponding tests to validate the code actually meets the agreed-upon expectations. While you may not be the person performing the testing, you should make sure engineers understand the importance of testing security requirements like any other functional requirements.
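As a sketch of what such a test might look like, here is a hypothetical server-side validator for the “alpha-numeric characters only” requirement above, with unit tests that encode the agreed-upon definition of done (all names are illustrative, not from any particular codebase):

```python
import re
import unittest

# Hypothetical validator implementing the agreed requirement:
# accept only non-empty, strictly alpha-numeric input.
def is_valid_input(value: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9]+", value) is not None

class TestInputValidation(unittest.TestCase):
    def test_accepts_alphanumeric(self):
        self.assertTrue(is_valid_input("alice123"))

    def test_rejects_empty_and_injection_payloads(self):
        self.assertFalse(is_valid_input(""))
        self.assertFalse(is_valid_input("' OR '1'='1"))
        self.assertFalse(is_valid_input("Robert'); DROP TABLE users;--"))

if __name__ == "__main__":
    unittest.main()
```

Tests like these turn the security requirement into something the team’s existing CI pipeline checks on every commit, rather than a line in a document.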

Conclusion

Of course, there are additional factors that impact whether the results of our threat modeling recommendations become reality. However, the push to implement mitigation suggestions begins with us as security teams. We conduct threat modeling exercises to help engineering teams anticipate potential threats and address them before their systems go to production. Therefore, we must work within their processes, support their project planning and scoping activities, communicate actionable expectations, and define validation criteria. If we do these things, we stand a better chance of lifting our threat models off the paper (or screen) and building systems that are more resilient to threats. This lets us achieve security in practice, not just in theory.

Then we can truly say we did a good job.
