Disinformation as a security problem: why now, and how might it play out?

Sara-Jayne Terp
Oct 15, 2019 · 4 min read

When I talk about security going back to thinking about the combination of physical, cyber and cognitive, people sometimes ask me: why now? Why, apart from the obvious weekly flurries of misinformation incidents, are we talking about cognitive security now?

Big, Fast, Weird

I usually answer with the three Vs of big data: volume, velocity, variety (the fourth V, veracity, is kinda the point of disinformation, so we’re leaving it out of this discussion).

  • The internet has a lot of text data floating around it, but its variety isn’t just in all the different platforms and data formats needed to scrape or inject into it — it’s also in the types of information being carried. We’re way past the Internet 1.0 days of someone posting the sports scores online and a bunch of hackers lurking on bulletin boards: now everyone and their grandmother is here, and the (sniffable, actionable and adjustable) data flows include emotions, relationships, group sentiment (anyone thinking about market sentiment should be at least a little worried by now) and group cohesion markers.
  • There’s a lot of it — volumes are high enough that brands and data scientists can spend their days doing social media analysis, looking at cliques, message spread, adaptation and reach.
  • And it’s coming in fast: so fast that an incident manager can do A/B testing on humans in real time, adapting messages and other parts of each incident to fit the environment and head towards incident goals faster and more efficiently. Ideally that adaptation is much faster than any response, which fits the classic definition of “getting inside the other guy’s OODA loop”.
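That real-time adaptation loop is essentially a multi-armed bandit: keep serving the message variant that’s engaging best, while occasionally testing the alternatives. Here’s a minimal epsilon-greedy sketch of the idea — all names and the engagement rates are illustrative, not from any real incident:

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the best-performing
    message variant, occasionally explore the others."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: the variant with the highest observed engagement rate.
    return max(stats, key=lambda v: stats[v]["clicks"] / max(stats[v]["shown"], 1))

def record(stats, variant, engaged):
    """Update the running counts after each exposure."""
    stats[variant]["shown"] += 1
    stats[variant]["clicks"] += int(engaged)

# Illustrative run: variant "b" secretly engages more often,
# so the loop should converge on pushing it.
random.seed(0)
true_rate = {"a": 0.05, "b": 0.30}  # hidden ground truth, for simulation only
stats = {v: {"shown": 0, "clicks": 0} for v in true_rate}
for _ in range(2000):
    v = pick_variant(stats)
    record(stats, v, random.random() < true_rate[v])
print(stats)
```

The point of the sketch is the asymmetry it illustrates: the attacker only needs this crude feedback loop running faster than the defender’s analysis cycle.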

NB The internet isn’t the only system carrying these things: we still have traditional media like radio, television and newspapers, but they’re each increasingly part of these larger connected systems.

So what next?

Another question I get a lot is “so what happens next?”. Usually I answer that one by pointing people at two books — The Cuckoo’s Egg and Walking Wounded — both excellent books about the evolution of the cybersecurity industry (and not just because great friends feature in them), and saying we’re at the start of The Cuckoo’s Egg, where Stoll starts noticing there’s a problem in the systems and tracking the hackers through them.

I think we’re getting a bit further through that book now. I live in America. Someone sees a threat here, someone else makes a market out of it. Cuddle-an-alligator — tick. Scorpion lollipops in the supermarket — yep. Disinformation as a service / disinformation response as a service — also in the works, as predicted for a few years now. Disinformation response is a market, but it’s one with several layers to it, just as the existing cybersecurity market has specialists and sizes and layers.

Markets: sometimes botany, sometimes agile

Frank is a very wise, very experienced friend (see books above) who calls our work on AMITT “botany” — building catalogs of techniques and counters slower than the bad guys can maraud across our networks, when we really should be out there chasing them. He’s right. Kinda.

I read Adam Shostack’s slides on threat modelling in 2019 today. He talks about the difference between “waterfall” (STRIDE, kill chain etc.) and “agile” threat modelling. I’ve worked on both: on big critical systems that used waterfall/“V” methods because you don’t really get to continuously rebuild an aircraft or ship design, and on agile systems that we trialled with and adapted to end-user needs. (I’ve also worked on lean production, where, classically speaking, agile is where you know the problem space and are iterating over solutions, and lean is iterating over both the problem and solution spaces. This will become important later.)

This is one of the splits: we’ll still need the slower, deliberative work that gives labels and lists defences and counters for common threats (the “phishing” etc. equivalents of cognitive security), but we also need the rapid response to things previously unseen that keeps white-hat hackers glued to their screens for hours — and there’s a growing market in tools to support them. (As an aside, I’m part of a new company, and this agile/waterfall split finally gives me a word to describe “that stuff over there that we do on the fly”.)

Also, because I’m old, I can remember when universities had no clue where to put their computer science group — it was sometimes in the physics department, sometimes engineering, or maths, or somewhere weirder still; later on, nobody quite knew where to put data scientists, as they cut across disciplines and used techniques from wherever made sense, from art to hardcore stats. This market will shake out that way too. Some of the tools, uses and companies will end up as part of day-to-day infosec. Others will be market-specific (media and adtech are already heading that way); others again will meet specific needs on the “influence chain”, like educational tools and narrative trackers. Perhaps a good next post would be an emerging-market analysis?

misinfosec

work on the boundary between misinformation and information security
