Creating a Cognitive Security course

(Image: top of the CogSec course syllabus)

I’ve been a little quiet of late. To be honest, in just the last year I’ve done a bunch of stuff: designing a pilot system (including how to do disinformation assessments) for the UNDP to use in elections around the world, setting up WHO Europe’s infodemic response, and moving back to the United States to teach courses at the University of Maryland. Those courses include cybersecurity risk management, tech innovations, cognitive security, ethical hacking, and social robotics, and that’s just this year; the iSchool also does a bunch of data science, HCI, and other information work that I’d like to get stuck into too.

Several of these courses are new. The one I got most excited about was finally teaching “Cognitive Security: Defence against Disinformation”, the course that catching Covid stopped me from teaching at Western Washington University (the one with all the trees, mountains, sailing, and cutesy artsy town).

I promised a bunch of people I worked with that if I did get a university course going, I’d put something online for them. So this is the start of that. There’s also a cut-down version of this course, a one-day set of seven one-hour sessions that Pablo and I trialled at the FBI, so what goes here will be a mixture of the two.

As always, we start from the goals. Here’s the simple version of what I’d like students to be able to do at the end of the course:

  • Evaluate persuasive technology at different scales
  • Evaluate influence operation mechanisms and tracking techniques
  • Use tools to investigate account and network-level coordinated inauthentic activities
  • Understand ethical behaviour around misinformation and disinformation response and research

Notice how I don’t say “disinformation” very much? That’s deliberate. I’m trying to build skills that cover a range of cognitive security activities, from rumour tracking to responding to full-blown influence operations. To do that, I need both to cover a wide range of skills and to write up and summarise all the work I and the teams I work with have done in the area over the past decade. That’s going to be a mix of OSINT, data science, infosec, etc. The week-by-week structure (2 hours a week) that covers it is:

  • 2. Disinformation reports. Ethics. Researcher risks.
  • 3. Cognitive security fundamentals. Cognitive security risks.
  • 4. Human system vulnerabilities and patches. Psychology of influence.
  • 5. Frameworks for understanding cognitive security. Relational frameworks.
  • 6. Building landscapes.
  • 7. Setting up an investigation.
  • 8. Misinformation data analysis.
  • 9. Disinformation data analysis.
  • 10. Disinformation responses.
  • 11. Monitoring and evaluation.
  • 12. Games, red teaming and simulations.
  • 13. Cognitive security red teaming.
  • 14. Future possibilities.
  • 15. Project presentations.

Looks a bit sparse, but the OSINT, data science etc run all the way through these classes. Also running through all the classes are three ideas: respect everyone, because we’re dealing with people here; the field is changing rapidly, but the underlying principles aren’t; and all the definitions of cognitive security and social engineering are in play.

The short version of the class is… well, shorter, and looks like this:

  • 1: history and mechanisms
  • 2: frameworks and risk
  • 3: landscapes and response
  • 4: planning and context analysis
  • 5: response and content analysis
  • 6: simulation and washup

Practice is important. It comes in three forms: practice in the classroom and/or short assignments (like last week’s live exercise summarising a country’s influence ecosystem), one-off case studies, and a multi-week project that can hopefully serve as an intro to some of the organisations in this field. The case studies for the uni course look like this:

  • Case study 2: Build a disinformation landscape assessment for a country, business, or vertical.
  • Case study 3: Gather datasets related to an existing disinformation narrative, and package them as an alert or report to be sent to a disinformation response group.
  • Group project: Apply cognitive security techniques to an incident, country, business, or community. Identify a problem, use the lens of one of the frameworks we studied in class to address it, and explore/analyze *theoretically* how to successfully measure and counter it.

That’s the outline. There’s a lot of detail under each heading (including copious subheadings), but that’s for later posts. One issue in creating the course was deciding which level of detail to pitch it at: students go into things like network analysis in Python, but that’s really just an entry point to all the big-data work needed to do this over time, and each of the top-level topics has a similar set of decisions to make. Another issue is tidying up recent work to make it explicable at university level. And references have been hard: losing Threet’s tagged and grouped references database last year is still having an effect. But it’s worthwhile work, and I think (I hope) useful.
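To give a taste of that network-analysis entry point, here’s a minimal, hypothetical sketch (plain Python, no libraries) of one common first signal when hunting coordinated inauthentic activity: clusters of accounts pushing identical text. All account names and messages below are invented for illustration; real investigations would also weigh timing, account age, and posting patterns.

```python
# Hypothetical sketch: group accounts by the exact text they posted,
# and flag clusters of 3+ accounts sharing identical messages.
# Account names and messages are invented for illustration only.
from collections import defaultdict

posts = [
    ("acct_a", "polls close at noon, not 8pm"),
    ("acct_b", "polls close at noon, not 8pm"),
    ("acct_c", "polls close at noon, not 8pm"),
    ("acct_d", "lovely weather for sailing"),
]

# Map each distinct message text to the set of accounts that posted it.
by_text = defaultdict(set)
for account, text in posts:
    by_text[text].add(account)

# Clusters of 3+ accounts posting identical text are worth a closer look.
suspicious = [accounts for accounts in by_text.values() if len(accounts) >= 3]
print(suspicious)  # one cluster: acct_a, acct_b, acct_c
```

In class this same idea scales up into graph form (accounts as nodes, shared content as edges), which is where the big-data work starts.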



A publication of the DISARM Foundation and DISARM Framework — an open standard for responding to the scourge of internet-scale disinformation, misinformation, influence operations and related areas
