“Social threat modeling”: name subject to change

Risk: Impact, Possibility, and Ease of Exploitation

Threat modeling is a structured approach to looking at security threats — and what can be done in response. EFF’s Assessing Your Risks describes how people wanting to keep their data safe online can do threat modeling, starting with questions like “what do I want to protect?” and “who do I want to protect it from?” Threat modeling is also an important software engineering technique, and it’s that aspect I’m going to focus on here.

When a company takes threat modeling seriously as part of an overall security development process, it can have a huge impact. I saw this first-hand working with the Windows Security team back when I was at Microsoft Research in the early 2000s, and things have come a long way since then. Today there are books, checklists, tutorials, tools, and even games about how to do it well (although there are still plenty of companies that prefer to ignore the risks).

Even for companies that do practice it, threat modeling today generally has a rather selective focus. As Amanda Levendowski points out in Conflict Modeling:

In the security and privacy contexts, threat modeling developed as a predictable methodology to recognize and analyze technical shortcomings of software systems. And when compared with security and privacy threat modeling, systems have lagged in developing similarly consistent, robust approaches to online conflict.

Indeed. The Open Web Application Security Project’s Application Threat Modeling page discusses things like decomposing the application into components, identifying the data that need to be protected, and focusing on trust boundaries between running processes. It has very little to say about the people who are in the system, and similarly no mention of other important categories of social and user harms like online conflict, harassment, computational propaganda, and influencing elections.

Simplified threat model with different approaches to harassing people
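To make this concrete, here’s a minimal sketch of what a threat model that puts social threats alongside the traditional technical categories might look like. All of the names, categories, and the example feature here are hypothetical illustrations, not an established taxonomy or anybody’s actual methodology:

```python
# A hypothetical sketch: extending a STRIDE-style technical threat model
# with social threat categories, and tracking who is harmed by each threat.
from dataclasses import dataclass, field

# Traditional STRIDE categories (technical) plus illustrative social ones.
TECHNICAL = {"spoofing", "tampering", "repudiation",
             "information_disclosure", "denial_of_service",
             "elevation_of_privilege"}
SOCIAL = {"harassment", "dogpiling", "doxxing",
          "impersonation", "computational_propaganda"}

@dataclass
class Threat:
    category: str
    description: str
    # Starting with people, not technology: record who is affected.
    affected_people: list = field(default_factory=list)

@dataclass
class Component:
    name: str
    threats: list = field(default_factory=list)

    def social_threats(self):
        """Threats in a social (rather than purely technical) category."""
        return [t for t in self.threats if t.category in SOCIAL]

# Example: modeling a hypothetical "mentions" feature. A conventional
# model would catch the spoofing threat; the social lens adds dogpiling.
mentions = Component("mentions")
mentions.threats.append(Threat(
    "dogpiling",
    "Coordinated mass-mentioning of a targeted user",
    affected_people=["targeted users", "marginalized communities"],
))
mentions.threats.append(Threat(
    "spoofing",
    "Forged mention notifications",
    affected_people=["all users"],
))

for t in mentions.social_threats():
    print(t.category, "->", t.description)
```

The point of the sketch is simply that the extra categories and the `affected_people` field are cheap to add once you decide they matter; the hard part is the identity-aware analysis that fills them in.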

To be clear: this isn’t a fundamental limitation of threat modeling, it’s just what’s been emphasized to date. And although the work’s still at a relatively early stage, several people are working on extending threat modeling or similar techniques to these social threats.

There isn’t yet a good name for this overall approach. I’m calling it “social threat modeling” for now, but as Shireen Mitchell of Stop Online Violence Against Women points out, that’s only one aspect of it. It starts with people (not the technology), and involves looking at things from the perspective of different identities. Until somebody comes up with a better name, though, “social threat modeling” it is.

Whatever you call it, though, there’s a steadily-growing body of promising work here. A few examples:*

One interesting aspect of this work is that it’s largely being presented outside the mainstream security and software engineering world. In the more traditional “tech space,” it’s striking how little attention is being paid to this issue. Twitter, Facebook, and Google spend zillions of dollars a year (and publish bunches of research papers) on AI; how much have they invested here? The red-hot blockchain world has no shortage of money either, and a golden chance to get things right from much earlier on, but other than Kaliya, very few of the people I talked to at the recent Internet Identity Workshop were even thinking about this kind of approach.

And the results speak for themselves. Twitter is toxic; their latest attempt to deal with it is likely to fail as miserably as all their previous ones. Facebook is frantically trying to close the barn door after the elections were stolen by making wildly implausible claims about how they’ll use AI to fix everything in 5–10 years. As Safiya Umoja Noble, Ph.D. summed it up at the Data & Society conference, “if you’re designing technology for society, and you don’t know anything about society, you’re unqualified.”

Still, the winds of change are in the air. The UN is discussing Facebook’s role in genocides, Amnesty International is reporting on Toxic Twitter, and Safiya Umoja Noble, Ph.D.’s outstanding Algorithms of Oppression is getting excerpted in Time Magazine. More and more people are seeing computer science as a social science, and coming around to a point that Zeynep Tufekci, AnthroPunk, Ph.D., and others have been making for quite a while: software companies need to get anthropologists, sociologists and other social scientists involved in the process. As Window Snyder (co-author of a 2004 book on threat modeling and now chief security officer at Fastly) said at the recent OurSA conference, “the industry changes when we change it.”

So I expect we’ll be seeing a lot more attention to this area over the next few months. It’ll be interesting to see which companies get ahead of the curve.

* If there’s other work that should be in this list, please let me know!

Updates:

May 15: clarifying that it’s anthropologists, sociologists, and other scientists who need to be involved
May 17: some additional references and minor rephrasing.
August 8: changed title, included paragraph on the name