Disinformation SOC

Sara-Jayne Terp
Disarming Disinformation
Oct 11, 2020

[co-written with Pablo Breuer]

Yesterday, talking to someone, I mentioned the Disinformation SOC, and they went “yeah, those things”. So it’s mainstream now, and we’re well overdue a post about it (and about what we’ve done to run distributed disinformation SOCs).

Questions. When does a group of people tracking disinformation turn into a SOC? What is a SOC? Do I need a SOC? The answer depends on who you are — if you’re a government, it makes sense; if you’re an organisation, it may or may not make sense.

What is a SOC? It’s a Security Operations Center. In summary, it ingests indicators, conducts and disseminates analyses, and responds to incidents. It can be one person and a dog — but the dog’s probably going to get tired (as will the person if they’re on 24-hour callout).
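
To make that loop concrete, here’s a minimal sketch of the ingest, analyse, disseminate, respond cycle. Every name, threshold, and rule in it is illustrative, not from any standard SOC tooling:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    source: str    # where the signal came from (platform, partner feed, staff report)
    content: str   # the artefact itself: a post, URL, account handle, narrative
    severity: int  # initial guess at how bad this is, 0 (noise) to 5 (incident)

def analyse(indicator):
    # Placeholder triage rule; a real SOC puts human analysts plus tooling here.
    return {"indicator": indicator, "is_incident": indicator.severity >= 3}

def disseminate(assessment):
    print(f"alert: {assessment}")  # tell members / other teams

def respond(assessment):
    print(f"responding to: {assessment['indicator'].content}")  # mitigate, if you have the authority

def run_soc_cycle(indicators):
    """One pass of the basic SOC loop: ingest, analyse, disseminate, respond."""
    for indicator in indicators:
        assessment = analyse(indicator)
        disseminate(assessment)
        if assessment["is_incident"]:
            respond(assessment)

run_soc_cycle([Indicator("platform_feed", "coordinated hashtag push", severity=4)])
```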

Let’s talk about disinformation, not misinformation, here, because with disinformation you get malice and intent. You’re actively tracking a person or persons doing a thing, rather than tracking a juicy rumour. Or rather, you’re trying to: that person may or may not be there, and sometimes indicators won’t turn into incidents.

A disinformation SOC could be a full SOC, or could be a desk within an existing SOC.

If we built a SOC to handle a large volume of disinformation, monitoring continuously, what would it look like?

SOCs monitor systems. In this case, those systems aren’t in a server rack; they’re human and social systems. They’re monitoring for indicators of an incident. They also get alerts in from other teams. Companies have their own internal social structures, but in most cases the systems they’re monitoring for disinformation aren’t their own: the carriers are Facebook, Twitter, etc.; the social structures are outside the organisation, and often outside the digital systems too. This is part of the twin problem we have: we’re all part of a digital social system and a real social system, and they don’t always overlap, or separate nicely.

If this is threat reduction, what are you monitoring for? You’re monitoring for indications that something is wrong with the social system, but that risks being the size of “something is wrong on the internet”. If you’re an organisation, you might be looking for brand risk, direct attacks on staff, and anything adjacent that might affect your operations. If you’re a government, you’re typically looking for things that affect the population of your country, which also risks being a “something is wrong on the internet” sized problem. If you’re a social media company, what are you looking for? Are you looking for disinformation behaviours in general (because those imply malice and intent)?

Each of our teams has bounded the area it’s interested in (e.g. Covid, domestic terrorism, etc.), because without that you go a little crazy and try to address all the things, and/or get sucked into nazi hunting (this is maybe a corollary to Godwin’s Law: any disinformation team will at some point devolve into looking for violent extremists).

One thing about modern misinformation and disinformation is that there’s a lot of it. You’ll see many more indications of misinformation than disinformation: many “useful idiots” replicating messaging, often because it aligns with their views, sometimes because it’s fun, or because it’s interestingly wrong. It’s complicated: we’re not even convinced that people amplifying disinformation think it’s all true; they’re amplifying as group membership signals, to enrage out-groups, or for the LOLs.

You can’t fully automate disinformation defence (trust us, we tried): you need people to recognise sarcasm, subtlety, localised phrasing, and the language around emerging events. You can’t fully automate detection. Turing machines are incapable of identifying editorials and notoriously bad at detecting satire. What a SOC can do will scale with the number of people it has available for triage and analysis. Machines can take some of the load here: there are some good tools out there already, and some interesting machine learning approaches, especially in graph analysis and text-based clustering. But the strength of any SOC is in training and deploying its people wisely.
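
As an example of the kind of load machines can take, here’s a minimal text-clustering sketch using scikit-learn (our illustration, not a specific team’s tooling): it groups near-duplicate messaging together, so an analyst triages a handful of clusters instead of thousands of individual posts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus: imagine thousands of posts pulled in from platform monitoring.
posts = [
    "5G towers cause the virus, share before they delete this!",
    "5G causes the virus!! share before it gets deleted",
    "Local election results delayed due to weather",
    "BREAKING: election results are being hidden from you",
    "Election results hidden from the public, wake up",
]

# Vectorise the posts and cluster similar messaging together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Each cluster becomes one triage item for a human analyst.
for label, post in sorted(zip(labels, posts)):
    print(label, post)
```

The human stays in the loop: the machine narrows thousands of posts down to a few candidate narratives, and the analyst decides which ones are incidents.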

Big question: do disinformation SOCs have tiers? Does your organisation need multiple tiers? That depends on the amount of data you need to ingest, the level of danger, and whether the SOC is there to identify incidents and pass that information on, or there to help mitigate those incidents. Many ISACs can’t mitigate: they’re there to inform, and their members mitigate their own portions of each incident. Mitigation doesn’t necessarily happen in the SOC or the originating organisation; you might not have the capability or the authority to mitigate.

Even if you’re only identifying an incident and passing on information, there’s analysis going on. Some existing SOCs have multiple analysis tiers, for three main reasons (a rough routing sketch follows the list):

  1. not enough qualified people to deal with all incidents: baseline people could say “I think there’s something there” and pass it to someone more qualified, or deal with lower-level threats themselves and pass anything bigger up a tier;
  2. too much information for high-level analysts to sift through, so a lower-level pool of analysts sifted first before handing over;
  3. low-level tiers handled internal coordination only, while higher-level tiers coordinated with external bodies (you don’t want brand-new hires talking to the corporate office).
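
Here’s what that routing might look like in code, mirroring the three reasons above; the alert fields, thresholds, and tier names are all ours, purely for illustration:

```python
def route_alert(alert, tier1_capacity):
    """Route an incoming alert to a tier.

    `alert` is assumed to carry a `severity` score (0-5) and an `external`
    flag for whether outside coordination is needed; both are illustrative.
    """
    if alert["external"]:
        return "tier2"  # external coordination stays with senior analysts
    if alert["severity"] <= 2 and tier1_capacity > 0:
        return "tier1"  # baseline analysts: sifting and lower-level threats
    return "tier2"      # anything bigger gets passed up a tier

print(route_alert({"severity": 1, "external": False}, tier1_capacity=5))  # tier1
print(route_alert({"severity": 4, "external": True}, tier1_capacity=5))   # tier2
```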

This applies to disinformation too, but it depends on the risk and volume of disinformation versus the size of your trained team. There has to be a cost-benefit calculation here: there might be large volumes, but if the risk is low, you probably won’t invest; with high risk and small volume, you might; most of the time you’re somewhere between the two, striking a balance.
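
One way to think about that balance is a back-of-envelope expected-harm calculation. This is entirely illustrative (the function and all its numbers are made up, not a real model), but it shows why large-volume/low-risk and small-volume/high-risk can come out differently:

```python
def worth_investing(risk, daily_volume, harm_per_incident, team_cost):
    """Crude cost-benefit: invest if the expected harm averted
    outweighs what the trained team costs."""
    expected_harm = risk * daily_volume * harm_per_incident
    return expected_harm > team_cost

# Large volume, low risk: probably don't invest.
print(worth_investing(risk=0.01, daily_volume=10_000, harm_per_incident=10, team_cost=5_000))
# Small volume, high risk: might invest.
print(worth_investing(risk=0.8, daily_volume=50, harm_per_incident=10, team_cost=300))
```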

We used two tiers in CTI League’s Disinfo team, although we didn’t relate that back to SOC tiers at the time; it was a practice we carried over from crisismapping: a large general team monitoring, searching, and triaging incident artefacts as they came in, and a smaller dedicated team working as tier 2. As workload and requirements go up, so does the amount of organisational theory you need, and you (hopefully) find more efficient ways to do the work. As with other SOCs, there are a lot of other people things to talk about; those are for later.
