Threat modeling is a structured approach to looking at security threats — and what can be done in response. EFF’s Assessing Your Risks describes how people wanting to keep their data safe online can do threat modeling, starting with questions like “what do I want to protect?” and “who do I want to protect it from?” Threat modeling is also an important software engineering technique, and it’s that aspect I’m going to focus on here.
When a company takes threat modeling seriously as part of an overall security development process, it can have a huge impact — the comments on Dan Kaminsky’s Summer of Worms Facebook thread highlight how this played out at Microsoft in the early 2000s. Things have come a long way since then. Today there are books, checklists, tutorials, tools, and even games about how to do it well (although there are still plenty of companies who prefer to ignore the risks).
In the security and privacy contexts, threat modeling developed as a predictable methodology for recognizing and analyzing technical shortcomings of software systems. Compared with security and privacy threat modeling, though, systems have lagged in developing similarly consistent, robust approaches to online conflict.
Indeed. The Open Web Application Security Project’s Application Threat Modeling page discusses things like decomposing the application into components, identifying the data that need to be protected, and focusing on trust boundaries between running processes. It doesn’t have much at all to say about the people who are in the system. There’s similarly no mention of other important categories of social and user harms like online conflict, harassment, computational propaganda, and influencing elections.
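To see what this component-centric style looks like in practice, here’s a minimal sketch of an OWASP-style threat-model entry. The class and field names are mine, purely for illustration, not from any OWASP document; the point is that every element is a technical artifact:

```python
from dataclasses import dataclass

# A minimal, component-centric threat-model entry in the style OWASP
# describes: decompose the application, list the data to protect, and
# mark the trust boundaries between processes. (Illustrative sketch
# only; names are hypothetical.)

@dataclass
class Component:
    name: str
    data_protected: list[str]     # e.g. session tokens, password hashes
    trust_boundaries: list[str]   # boundaries this component sits on

model = [
    Component("web frontend", ["session tokens"], ["browser <-> app server"]),
    Component("auth service", ["password hashes"], ["app server <-> database"]),
]

# Note that the only "actors" expressible here are processes and data
# stores -- nothing in this structure can describe one user harming another.
for c in model:
    print(f"{c.name}: protects {c.data_protected}, sits on {c.trust_boundaries}")
```

The gap the rest of this post is about shows up right in the data structure: there’s nowhere to put a person.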
To be clear: this isn’t a fundamental limitation of threat modeling, it’s just what’s been emphasized to date. And although the work’s still at a relatively early stage, several people are working on extending threat modeling or similar techniques to these social threats.
There isn’t yet a good name for this overall approach. I’m calling it “social threat modeling” for now, but as Shireen Mitchell of Stop Online Violence Against Women points out, that’s only one aspect of it. It starts with people (not the technology), and involves looking at things from the perspective of different identities.
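The people-first framing can be sketched the same way. Again, the class and field names here are hypothetical, my own shorthand rather than anybody’s published methodology; the contrast with a component-centric model is that the entry starts from an identity and the harms they face:

```python
from dataclasses import dataclass

# A people-first threat-model entry: it starts from an identity and a
# harm, rather than from processes and data stores. All names and
# example values are hypothetical, for illustration only.

@dataclass
class SocialThreat:
    identity: str         # whose perspective we model from
    attacker: str         # who might target them, and why
    harm: str             # the harm, in human rather than technical terms
    feature_at_risk: str  # where in the product the harm plays out
    mitigation: str       # candidate countermeasure

threats = [
    SocialThreat(
        identity="woman moderating a feminist forum",
        attacker="coordinated trolls seeking attention and disruption",
        harm="harassment that drives the targeted group offline",
        feature_at_risk="open replies and mentions",
        mitigation="muting, blocking, and moderator tooling",
    ),
]

for t in threats:
    print(f"{t.identity}: {t.harm}, via {t.feature_at_risk}")
```

Repeating this exercise from several different identities’ perspectives is exactly the kind of analysis the work below explores.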
Whatever you call it, though, there’s a steadily-growing body of promising work here. A few examples:*
- Susan Herring et al.’s Searching for Safety Online: Managing “Trolling” in a Feminist Forum, in The Information Society (2002), analyzes the strategies that made a troller successful and the targeted group largely ineffectual in responding to his attack, as a means to understand how such behavior might be minimized and managed in general. Frances Shaw’s Still ‘Searching for Safety Online’: collective strategies and discursive resistance to trolling and harassment in a feminist network, in The Fibreculture Journal, from 2013, looks at similar dynamics in a network of blogs.
- Robert Meyer and Michel Cukier’s Assessing the Attack Threat due to IRC Channels, in Dependable Systems and Networks, 2006, uses a combination of bots and regular users in IRC chat, and the social structure of IRC channels, to investigate the attack threat that users of those channels face.
- Borja Sanz et al.’s A threat model approach to attacks and countermeasures in on-line social networks, in Proceedings of the 11th Reunión Española de Criptografía y Seguridad de la Información (RECSI), focuses on identifying attacks against users of online social networks and possible countermeasures to mitigate the risks.
- Leigh Honeywell’s Another Six Weeks: Muting vs. Blocking and the Wolf Whistles of the Internet on Model View Culture analyzes different kinds of attackers and their motivations in the context of some badly-thought-out functionality. “In attempting to solve the problem of users being retaliated against for blocking, Twitter missed other ways that harassers operate on their service.”
- Mozilla’s Coral Project applies a threat modeling perspective to online communities. caroline sinders of the Coral Project briefly talks about threat modeling’s application to harassment in SXSW canceled panels: Here is what happened, from 2016.
- Amanda Levendowski describes Conflict Modeling as “a predictable framework to structure thinking around online conflict by suggesting a methodology for conflict modeling, defining a taxonomy of conflict — safety, comfort, usability, legal, privacy, and transparency (SCULPT) — and examining common mitigation techniques adopted by systems to reduce the risk of certain conflicts.” A draft was presented at the 2017 Privacy Law Scholars Conference; as far as I know, the only public information is on her website.
- Shireen Mitchell and I suggested applying threat modeling techniques to online harassment in our 2017 SXSW talk on Diversity-friendly Software. I went into a little more detail in Transforming Tech with Diversity-Friendly Software (the slides have a short example) and worked with Kelly Ireland at O.school applying this approach to their pleasure education platform; Shireen is working with Kaliya-IdentityWoman on applying a generalized threat modeling approach to social and user harms in the self-sovereign ID world.
- Casey Fiesler’s “speculative harm analysis” of Twitter’s audio tweets illustrates how just a few minutes of analysis can identify major opportunities for abuse in a new product feature.
One interesting aspect of this work is that it’s largely being presented outside of the mainstream security and software engineering world. In the more traditional “tech space”, it’s striking how little attention this issue gets. Twitter, Facebook, and Google spend zillions of dollars a year (and publish bunches of research papers) on AI; how much have they invested here? The red-hot blockchain world has no shortage of money either, and a golden chance to get things right from much earlier on, but other than Kaliya, very few of the people I talked to at the recent Internet Identity Workshop were even thinking about this kind of approach.
And the results speak for themselves. Twitter is toxic; their latest attempt to deal with it is likely to fail as miserably as all their previous ones. Facebook is frantically trying to close the barn door after the elections were stolen by making wildly implausible claims about how they’ll use AI to fix everything in 5–10 years. As Safiya Umoja Noble, Ph.D. summed it up at the Data & Society conference, “if you’re designing technology for society, and you don’t know anything about society, you’re unqualified.”
Still, change is in the air. The UN is discussing Facebook’s role in genocides, Amnesty International is reporting on Toxic Twitter, and Safiya Umoja Noble, Ph.D.’s outstanding Algorithms of Oppression is getting excerpted in Time Magazine. More and more people are seeing computer science as a social science, and coming around to a point that Zeynep Tufekci, AnthroPunk, Ph.D., and others have been making for quite a while: software companies need to get anthropologists, sociologists and other social scientists involved in the process. As Window Snyder (co-author of a 2004 book on threat modeling and now chief security officer at Fastly) said at the recent OurSA conference, “the industry changes when we change it.”
So I expect we’ll be seeing a lot more attention to this area over the next few months. It’ll be interesting to see which companies get ahead of the curve.
* If there’s other work that should be in this list, please let me know!
- Microsoft DREAD risk-ranking model from OWASP’s Application Threat Modeling page
- Simplified threat model for harassment originally by Shireen Mitchell and me on a napkin, refined with help from Kelly Ireland and presented at Transforming Tech with Diversity-friendly software
May 15: clarifying that it’s anthropologists, sociologists, and other scientists who need to be involved
May 17: some additional references and minor rephrasing.
August 8: changed title, included paragraph on the name
June 2020: added Casey Fiesler’s analysis of audio tweets