CogSec101 week 1: History

Cognitive Security from an information security point of view

I cover these things in the first week of cognitive security class:

  • History of cognitive security
  • Working definitions of information operations, disinformation, and cognitive security
  • Disinformation examples and common myths
  • Where to find more information
  • Potential risks to influence operations investigators, and mitigations for them
  • Automation and AI

This is the history section.

Several problems, many communities

Misinformation, disinformation, malinformation, rumours, conspiracies: these are all part of the CogSec threat landscape, and have all been ‘found’, highlighted, studied, and countered/mitigated by different communities.

  • The media community focussed on misinformation and disinformation. Claire Wardle’s “types of information disorder” diagram showed them as a Venn diagram of falseness and intent to harm, where misinformation was falseness without intent, malinformation was intent without falseness, and disinformation was both falseness and intent. This framing was very content-based, because a lot of the early focus was on not polluting media articles that had started to use User-Generated Content (after Web 2.0, anyone could post anywhere, and using this content was an easy fix for media funding woes).
  • The military community focussed on psyops (renamed MISO: Military Information Support Operations). GAO’s diagram of the US Department of Defense’s information operations showed psyops as one component, alongside military deception, cyberspace operations, electromagnetic warfare, special technical operations, and operations security.
  • Targeted communities (technical women, Black Americans etc) focussed on surviving GamerGate-style personal attacks. They built backchannels and coping strategies long before many other communities noticed there was a problem. Shireen Mitchell’s Stop Online Violence Against Women group is a good example of this work.
  • The information security community focussed on social engineering at scale. Our own diagram of information security being split into physical, cyber, and cognitive security is a nod to the many foundational information security texts that included human cognition, in various forms, from the start.

There are other communities in the cognitive security space, but these four drove a lot of early work.

Where the term Cognitive Security comes from

The definition of Cognitive Security that I use in class is: “Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of ‘something is wrong on the internet’. Cognitive security can be seen as a holistic view of disinformation from a security practitioner’s perspective”.

The term Cognitive Security comes from two different places:

  • MLsec: “Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” — XTN. This is the MLsec definition: machine learning in information security, covering attack, defence, and attacks on the machine learning systems themselves. This is adversarial AI, and Andrade2019 is a good summary of the field.
  • Social engineering: “Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts ‒ intentional and unintentional manipulations of and disruptions to cognition and sensemaking” — cogsec.org. This version of the term, coined by Rand Waltzman, is the social-engineering-at-scale definition: it covers manipulation of individuals’ beliefs and sense of belonging, and manipulation of human communities. This could be seen as adversarial cognition, and Waltzman2017 and the COGSEC.org website created after his testimony are good summaries of it.

These definitions aren’t as incompatible as they look: they’re both based on adversarial activities, and on defence against the manipulation of information, knowledge, and belief. But neither of them quite captures what’s going on today, where both humans and algorithms are being manipulated to change the fates of individuals, communities, organisations, and countries. That said, the second definition could include algorithms too, if we allow ‘cognition and sensemaking’ to cover them.

Both of these definitions are from the point of view of defence. That framing was a strong driver of our (the CredCo MisinfosecWG’s) adoption of a term that included “security”, but it feels less appropriate now that we’re modelling influence in information ecosystems, where what we’re looking at increasingly resembles a massive multiplayer game: each individual, community, organisation, country etc has its own goals, and may see even the most aggressive influence actions as part of defending its own realm. MLsec is helpful here, with its separation into attacks using machine learning, defence using machine learning, and attacks on the machine learning processes themselves (Bruce Schneier’s paper on common knowledge attacks against democracy fits that last category). It’s useful to be aware that your cognitive security defence moves might be viewed as someone else’s attack.

Whilst we’re bottoming this out, there are also two definitions of social engineering:

  • Centralised planning: “the use of centralized planning in an attempt to manage social change and regulate the future development and behavior of a society.” — basically mass manipulation
  • Individual deception: “the use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.” — basically phishing etc

Both of these are compatible with the definitions of cognitive security above. I think the definitions are vague enough to also cover something that gets lost sometimes: the thing being manipulated isn’t just knowledge (“truth” etc); it also includes group cohesion (“belonging”) and emotions (“feels”), both of which can be changed with information that’s completely true.

The centralised planning definition is interesting as we shift from responding to disinformation incidents one by one, to discussing how to improve our information environments (e.g. by making verified information easier to find online), and hopefully creating resilience at all levels rather than mandating it from above. In spaces where many entities are competing for attention and influence, viewpoints matter, autonomy and individuals matter, and both resilience and vulnerability are most likely to emerge first at the individual and community levels.

Things we’ve borrowed from Information Security

This gets us to where we are at the moment with Cognitive Security.

One way of looking at Cognitive Security is as a parallel effort to cyber security, but with brains, beliefs, and communities substituted for computers, data, and networks. There are differences between these domains, which are pointed out throughout the course, but the analogy has served us well in finding cybersecurity ideas that might help with things like disinformation defence. Despite those differences, we’re still dealing with two domains that are carried on the Internet, and that usually result in real-world actions.

One early idea borrows from the CIA triad of confidentiality (only the people/systems that are supposed to have the information do so), integrity (the information has not been tampered with), and availability (people can use the system as intended). Danny Rogers first pointed out that disinformation is an integrity problem, where beliefs, belonging etc have been tampered with.

Another useful idea is adapting the ATT&CK framework to model and manage disinformation creator and responder behaviours (aka TTPs, or Tactics, Techniques, and Procedures). This became the AMITT set of disinformation behaviour models.
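
To make the TTP idea concrete, here is a minimal sketch (in Python) of how a responder might record a single technique. The field names, identifiers, and example values are illustrative assumptions, not the actual AMITT schema.

    from dataclasses import dataclass, field

    # Illustrative sketch only: these field names are assumptions, not the real AMITT schema.
    @dataclass
    class Technique:
        technique_id: str                # placeholder identifier, not a real AMITT entry
        name: str
        tactic: str                      # the stage of an operation this technique supports
        description: str = ""
        countermeasures: list = field(default_factory=list)

    # A hypothetical record of the kind an incident responder might keep
    example = Technique(
        technique_id="T0000",
        name="Coordinated amplification",
        tactic="Maximise exposure",
        description="Many accounts push the same message at the same time.",
        countermeasures=["rate-limit amplification", "label coordinated accounts"],
    )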

The STIX model of information security actors, behaviours, content, tools, indicators, vulnerabilities, and infrastructure also adapted easily to disinformation use, with only two minor changes (adding a narrative object that mirrored the use of malware objects, and an incident object that behaved similarly to an intrusion set).
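
As a rough illustration, here is what STIX-2-style narrative and incident objects might look like as JSON-shaped Python dicts. The “x-” prefix follows STIX’s convention for custom object types; the specific properties beyond the standard common ones are assumptions for illustration, not a published schema.

    # Minimal sketch of STIX-2-style custom objects (dicts in STIX's JSON shape).
    # Property names beyond the standard common ones are illustrative assumptions.
    narrative = {
        "type": "x-narrative",       # custom type, used much as malware objects are used
        "spec_version": "2.1",
        "id": "x-narrative--00000000-0000-0000-0000-000000000000",  # placeholder UUID
        "created": "2021-01-01T00:00:00.000Z",
        "modified": "2021-01-01T00:00:00.000Z",
        "name": "Example narrative",
        "description": "A recurring story pushed across an incident.",
        "labels": ["disinformation"],
    }

    incident = {
        "type": "x-incident",        # grouped and tracked much like an intrusion set
        "spec_version": "2.1",
        "id": "x-incident--00000000-0000-0000-0000-000000000001",   # placeholder UUID
        "created": "2021-01-01T00:00:00.000Z",
        "modified": "2021-01-01T00:00:00.000Z",
        "name": "Example incident",
        "object_refs": ["x-narrative--00000000-0000-0000-0000-000000000000"],
    }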

Viewing disinformation as a risk management problem has been extremely useful, allowing us to do analyses similar to those seen in other parts of information security risk management: quantifying and assessing risks (how bad, how big, to whom), along with components including attack surfaces, vulnerabilities, and potential losses and outcomes. This allows for risk assessment, reduction, and remediation. More importantly, in an era when misinformation, disinformation, and rumours are everywhere, it helps us answer the question of where to put detection, mitigation, and response resources, where resources include people, technologies, time, attention, and connections. As far as I know there isn’t a disinformation version of the FAIR risk management framework yet, but it’s not a big adaptation (a few category changes), so that’s probably only a matter of time too.
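
FAIR’s top-level decomposition is risk = loss event frequency × loss magnitude, so the basic arithmetic of a disinformation adaptation is simple. Here is a minimal sketch with made-up numbers for a hypothetical scenario; the scenario and values are illustrative assumptions, not a worked FAIR analysis.

    # Minimal sketch of FAIR-style expected-loss arithmetic applied to a
    # hypothetical disinformation scenario. All numbers are illustrative.
    def expected_annual_loss(loss_event_frequency, loss_magnitude):
        """FAIR's top-level decomposition: risk = loss event frequency x loss magnitude per event."""
        return loss_event_frequency * loss_magnitude

    # Hypothetical scenario: narrative-driven harassment campaigns against an organisation
    incidents_per_year = 4         # assumed loss event frequency
    cost_per_incident = 25_000     # assumed response + reputational cost per incident

    print(expected_annual_loss(incidents_per_year, cost_per_incident))   # 100000

Even a crude calculation like this forces the “where do we put our resources” question into comparable units across very different kinds of incident.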

Another thing borrowed is the idea of tiered security operations centers: their structure, activities, resources, and principal objects. These have proved themselves useful in the past year.
