When everything’s on fire, where do you point the hoses?
Risk is part of everyday life: we drive, we could crash; we cook, we might burn ourselves. Risk has also been a big part of my own life, though for me it's less about managing the things that could go wrong when I work and travel, and more a lifetime of studying risks in different contexts and working out how to predict, manage, mitigate, and recover from them (if you're curious, I've worked on everything from transport safety to risk-based command decision making).
Risk isn't just the bad thing that can happen. It's also how bad, how likely, to whom, and for how long. The classic definition of risk is likelihood multiplied by severity, but that misses things like how tolerant the people managing a risk are of it, and whose risk is being managed if it isn't their own. Individuals have different risk tolerances (a few of my friends have free-climbed mountains, but only a few; others sensibly used gear, avoided the climbing routes, or stayed off mountains altogether).
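That classic definition is easy to sketch in code. Here's a minimal illustration in Python; the narratives and numbers are entirely made up for the example, not real assessments:

```python
# Classic risk score: likelihood multiplied by severity.
# The narratives and all numbers below are hypothetical, for illustration only.

narratives = {
    "narrative-A": {"likelihood": 0.8, "severity": 7},
    "narrative-B": {"likelihood": 0.5, "severity": 4},
    "narrative-C": {"likelihood": 0.3, "severity": 9},
}

def risk_score(likelihood, severity):
    """Probability of the bad thing happening, times how bad it is."""
    return likelihood * severity

for name, n in narratives.items():
    print(f"{name}: {risk_score(n['likelihood'], n['severity']):.2f}")
```

As the post notes, this single number hides a lot: who bears the risk, for how long, and how tolerant the risk owner is, which is where the next step comes in.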
Modelling risk tolerances takes us into game theory, where behaviours are bucketed into risk averse, risk neutral, and risk acceptant (seeing the risk but wanting the greater possibility of reward). [Pro tip: if you're reading something academic about risk and it doesn't mention these categories, it's a safe bet the models assume risk-neutral behaviour]. And one thing game theory gives us is a way to talk about informed allocation of resources in an environment where both players and resources are subject to multiple risks.
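In the standard expected-utility framing, those three buckets correspond to the curvature of a player's utility function: concave for risk averse, linear for risk neutral, convex for risk acceptant. A minimal sketch, with a made-up gamble:

```python
import math

# The three game-theoretic risk attitudes, modelled as utility-function
# curvature. The gamble is invented for illustration: a 50/50 chance of
# 0 or 100, versus a certain 50 (both have the same expected value).

def expected_utility(u, outcomes, probs):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

risk_averse    = lambda x: math.sqrt(x)  # concave: prefers the sure thing
risk_neutral   = lambda x: x             # linear: indifferent
risk_acceptant = lambda x: x ** 2        # convex: prefers the gamble

outcomes, probs = [0, 100], [0.5, 0.5]
for name, u in [("averse", risk_averse), ("neutral", risk_neutral),
                ("acceptant", risk_acceptant)]:
    eu_gamble, eu_certain = expected_utility(u, outcomes, probs), u(50)
    if eu_gamble > eu_certain:
        choice = "gamble"
    elif eu_certain > eu_gamble:
        choice = "certain"
    else:
        choice = "indifferent"
    print(f"risk-{name}: prefers {choice}")
```

The pro tip above falls out of the middle line: a linear utility function makes expected utility equal expected value, which is why models that don't discuss risk attitudes are usually implicitly risk neutral.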
So let's talk about disinformation risk, and disinformation risk management. Misinformation, disinformation, and related online harms are now endemic. There are hundreds of misinformation narratives around Covid-19 alone, spanning multiple countries, subject areas, and demographics. In this environment, if you have only so many detection, analysis, and response resources, how do you decide where to use them?
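One naive way to frame that allocation question as code: rank narratives by risk score per unit of response cost, then spend a fixed budget greedily. Everything here (the narratives, the numbers, and the greedy heuristic itself) is a hypothetical sketch, not a real triage methodology:

```python
# Greedy triage sketch: spend a limited response budget on the narratives
# with the highest risk score per unit of response cost.
# All narratives, scores, and costs are hypothetical.

narratives = [
    # (name, likelihood, severity, response_cost)
    ("narrative-A", 0.8, 7, 2),
    ("narrative-B", 0.5, 4, 1),
    ("narrative-C", 0.3, 9, 3),
]

budget = 4

# Rank by expected loss addressed per unit of resource spent.
ranked = sorted(narratives, key=lambda n: (n[1] * n[2]) / n[3], reverse=True)

chosen = []
for name, likelihood, severity, cost in ranked:
    if cost <= budget:
        chosen.append(name)
        budget -= cost
print(chosen)
```

A greedy pass like this ignores interactions between narratives and between players, which is exactly the gap the game-theoretic framing in the next paragraph is meant to fill.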
That is a classic risk question. Disinformation defence has moved from long-form analysis of actors and data-science analysis of context, to analysis of the relationships between objects, to tracking the movement of, and relationships between, narratives. The next logical step from there is to analyse disinformation as a risk problem, and specifically as a resource-limited risk problem (IMHO game theory is next, then a mix of standard and custom responses). And that starts to look a lot like classic infosec: managing threat surfaces, vulnerabilities, and the potential losses and outcomes from both disinformation and the responses to it.
I’m out of writing time, so I’ll point at three organisations working in different ways on disinformation risk management: Alethea Group, GDI, and FiveBy. An internet search for risk+disinformation will find you a bunch more.