cognitive security: pyramid of pain

Sara-Jayne Terp
Disarming Disinformation
Apr 29, 2021

I swear my best posts come from random conversations on the internet.

[Figure: the Pyramid of Pain]

The Pyramid of Pain describes the types of indicators you can use to make sense of, or detect, a cyber attack. In a curious moment, I asked its creator, David Bianco, whether there was a disinformation version of it yet.

Erm… not yet. But he'd been thinking about it, and so had I, so I scribbled down some tweets.

David’s question: “The Pyramid of Pain is about helping analysts & detection engineers make better choices wrt the types of IOCs they use for detection of security incidents. How would you describe an analog in the misinformation space?” sparked off:

I think it would be similar. We created the artifacts/ narratives/ incidents/ campaigns pyramid because the stuff at the bottom was easier to find, analyse and address than the stuff at the top, and we wanted people to raise their eyes up from “we found some stuff; remove stuff”…

Cognitive security pyramid of pain?
  • …at the bottom I’d see artifacts: known hashtags, phrases, images etc, things that we know turn up in a lot of disinformation campaigns, hate speech, extremist posts etc. Keep a list, search for them, find some stuff (there’s a minimal matching sketch after this list)…
  • Next up, the equivalent of IP addresses: we have a bunch of known sites, accounts and groups that push disinformation and conspiracies like antivax. These are places that we know to go look in…
  • After that, names: domain names, group names, page names, account names. Words that, when you see them in a URL, are good flags that this is somewhere you might want to look. Slightly harder now: you can check new addresses, but you need to wait for content, and you get false positives (see the URL-flagging sketch after this list)
  • Then ‘facts’. This is where you get into fact-checking, and need teams of humans doing background research, using image search etc to work out whether something new is real or not. Facts seem more atomic than narratives, and harder to discern
  • Disinformation host artifacts are interesting, and come next. A lot of online activity creates traces (that’s why we created the left-hand side of AMITT), but it can take skills, resources, or direct access to find them. I think there’s some research to be done in this layer.
  • (BTW I’m struggling with where to put attribution artifacts in this pyramid. I suspect a lot of what’s needed would come from host artifacts plus background knowledge, and might be out of scope for IoCs)
  • Tools. I would put things like bots, cyborgs (human+bot accounts), C&C and similar tooling in here. And boy is it hard to detect cyborgs without doing network analysis etc these days (there’s a toy cadence heuristic after this list). I’m trying to match the original pyramid’s layers, by the way.
  • I’m keeping TTPs at the top, although I suspect there might be a couple more layers. And not just because I wrote the book on how to do this (https://github.com/cogsec-collaborative/AMITT). It’s possible to automate some of this, but it’s usually manual (the last sketch below shows one way to record TTP observations).
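
To make the artifact layer concrete, here's a minimal Python sketch of the "keep a list, search for them" idea from the first bullet. The watchlist entries are placeholders I've made up for illustration, not real indicators.

```python
# Bottom-of-pyramid artifact matching: scan post text against a watchlist
# of known hashtags and phrases. Entries are hypothetical placeholders.
KNOWN_ARTIFACTS = {"#examplehashtag", "example known phrase"}

def match_artifacts(post_text: str) -> set[str]:
    """Return the watchlist entries that appear in a post (case-insensitive)."""
    text = post_text.lower()
    return {artifact for artifact in KNOWN_ARTIFACTS if artifact in text}

# Matches both placeholder entries:
print(match_artifacts("sharing the Example Known Phrase again #ExampleHashtag"))
```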
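For the known-places and names layers, a similarly hedged sketch: check a URL's host against a curated list of known domains, then fall back to flagging name tokens. Both lists here are invented for illustration, and the token check is exactly the kind of thing that produces the false positives mentioned above.

```python
from urllib.parse import urlparse

KNOWN_DOMAINS = {"known-disinfo-site.example"}   # hypothetical curated list
FLAG_TOKENS = {"realnews", "truthwire"}          # hypothetical name tokens

def flag_url(url: str) -> str | None:
    """Return a reason to look closer at this URL, or None."""
    host = urlparse(url).netloc.lower()
    if host in KNOWN_DOMAINS:
        return "host is on the known-domains list"
    if any(token in host for token in FLAG_TOKENS):
        return "name token match; wait for content, expect false positives"
    return None

print(flag_url("https://realnews-daily.example/story"))  # name token match
```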
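And for the tools layer, one toy heuristic (my assumption, not a method from the thread): very regular posting cadence is a weak bot signal. It won't catch cyborgs, which is the point of that bullet, but it shows the kind of measurable feature this layer needs.

```python
from statistics import mean, pstdev

def cadence_score(timestamps: list[float]) -> float:
    """Coefficient of variation of gaps between posts; near 0 = machine-like."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps or mean(gaps) == 0:
        return float("inf")
    return pstdev(gaps) / mean(gaps)

print(cadence_score([0, 600, 1200, 1800, 2400]))  # 0.0: a post every 600s
print(cadence_score([0, 300, 2500, 2700, 9000]))  # ~1.1: irregular, human-like
```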
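Finally, for TTPs: the comparisons that matter at the top of the pyramid are between sets of observed techniques. Here's a tiny sketch of recording those as structured, comparable data; the T-numbers follow AMITT's ID style but are placeholders, not verified mappings.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    name: str
    techniques: set[str] = field(default_factory=set)  # AMITT-style IDs

def shared_ttps(a: Incident, b: Incident) -> set[str]:
    """TTP overlap is one (hard-to-fake) way to link incidents."""
    return a.techniques & b.techniques

# Placeholder technique IDs, for illustration only:
x = Incident("incident-x", {"T0007", "T0019", "T0046"})
y = Incident("incident-y", {"T0019", "T0046", "T0057"})
print(shared_ttps(x, y))  # overlap: T0019 and T0046
```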

That was a coffeebreak’s worth of thought, but it might have some uses. One thing that’s glaringly missing from it is the human interaction that makes up many disinformation campaigns: those attempts to manipulate groups, emotions etc as well as beliefs and algorithm rankings. There might be a whole different top of the pyramid for that, maybe even a whole other pyramid. But it’s a start on thinking about “what do we look for, what do we measure, how do we rank that by effort?”.
