
Propaganda as Adversarial Attack

Gustavo
Rally Point Journal
7 min read · Jun 28, 2018


At the intersection of neuroscience and computer science, the metaphor of the mind as a machine is in full swing.

On the one hand we are conceiving of our minds as algorithms, and on the other we are building intelligent machines based on the way the mind seems to work.

As part of understanding how to build intelligent machines, and of protecting themselves from those machines, people are developing ways of tricking algorithms into categorisation errors. The process involved in this deception is known as an adversarial attack, and it may tell us something about how propaganda works on us.

I predict a riot

The mind is a prediction machine, and so as we try to make our machines more mind-like we are working on their ability to predict. The mind builds models of the world to improve our predictive abilities and reduce the energetic cost of sensing everything. Our awareness at any given time is based mainly on these internal models rather than on what is “streaming” from the outside world. As David Eagleman notes:

“What we see in the world is what we think we’re seeing out there. Most of vision is an internal process happening completely within your brain and the information dribbling into your retinas is just a small part of what you’re actually perceiving. About 5% of the information of your visual stream is coming through your retinas — the rest is all internally generated given your expectations about the world.” (26:40)

One explanation for the LED incapacitator, a riot-control weapon, could be that because it flashes randomly shifting pulses of light at victims, they have to exert too much energy deciphering what is in front of them, as their mental models are unable to provide useful predictions.

Surprise!

Another way to explain the role of the mind is as a surprise management machine.

The mechanisms of the mind operate in a way to reduce surprise while remaining alert to it. Surprise is an energy-intensive mental state but also a good guide to both danger and the need to update mental models. It is important that we do not put unnecessary energy into regular but non-threatening occurrences, but that we are alive to threatening occurrences with whatever regularity they occur.
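
As an aside of my own (not something from the sources cited here), information theory gives this idea a standard formalisation: the “surprisal” of an event is the negative log of its probability under your model. A minimal Python sketch:

import math

def surprisal(probability):
    # Information-theoretic "surprise": the less likely an event is under
    # your model of the world, the more bits of surprisal it carries.
    return -math.log2(probability)

print(surprisal(0.99))   # ~0.01 bits: a routine event, cheap to ignore
print(surprisal(0.001))  # ~9.97 bits: a startling event, worth a model update

A well-tuned mind spends its limited attention where surprisal is high and the stakes are real, not wherever someone else has decided to point it.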

This is not a Panda

One of the most rapidly developing fields of AI is visual recognition. It has already been deployed in everything from driverless cars to tracking criminals through facial recognition. Yet many of these systems, including “state-of-the-art neural networks,” can be tricked into misclassifying an image that is only slightly different from one they would correctly classify.

In the example below, the machine correctly classified the picture on the left as a panda. The system was then shown the same image with the added “noise” shown in the middle picture; the first two images combined still result in a panda-looking image, the one on the far right. I cannot tell the difference between the image on the left and the one on the right, and it seems straightforward to say they are both images of pandas.

However, that imperceptible difference was enough to make the machine confidently misclassify the picture as a gibbon.

Adversarial attack example from Explaining and Harnessing Adversarial Examples
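
For the curious, generating such an image is surprisingly simple. Below is a minimal sketch of the paper’s “fast gradient sign method” in Python/PyTorch; the pretrained model, the image tensor, and the label are assumed to be supplied by the caller, and the epsilon of 0.007 mirrors the panda example:

import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    # Compute the model's loss on the true label, then nudge every pixel
    # one tiny step in the direction that increases that loss.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in a valid range; the change remains imperceptible.
    return perturbed.clamp(0, 1).detach()

The middle “noise” image in the figure is exactly image.grad.sign(): not random static, but the precise direction in which the model’s confidence is most fragile.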

Making errors like this in the lab is one thing, but this vulnerability of machine learning has real-life implications. Activists can take advantage of it to fool facial recognition technology. And with the simple addition of black-and-white art stickers, a driverless car can be fooled into reading a stop sign as a 45 mph speed limit sign.

In these cases, in order for the attack to work, the machine has to fail to detect that it is receiving a novel input, the first stage in identifying surprise.

We like to think that we would not fall for such simple tricks, that our human intelligence somehow frees us from this.

Nothin’ proper about ya propaganda

Despite the arguments over whether the word intelligence can be applied to computers, our intelligence, both individual and collective, is far from immune to adversarial attacks.

In machines, the adversarial attack leads to misclassification of something, the inability to see it for what it really is.

When humans are subject to adversarial attacks in the form of propaganda, they lose the ability to see something for what it really is. This is because propaganda distorts the image in a way that makes a correct identification hard.

Our personal psychology is the element of the machine that propaganda exploits, claims Jacques Ellul. Faced with a situation we do not understand and inhabiting a world of decisions made beyond our comprehension or ability to influence, we seek consolation from that despair, and the simple messages of propaganda provide comfort.

In the blog post Distributed Memetics, Julien Delacroix traces Ellul’s work on propaganda and highlights that at its core, propaganda is:

a pervasive tool of social control that is rooted in individual psychology... The individual psychology is the desire to be included in a group. The emotional stakes of this inclusion is substituted in for whatever substantive issue would otherwise be up for reasoning and decision.

Propaganda disrupts the role of communication as a method for relaying or discovering information. Instead, propaganda turns instances of communication from opportunities to establish the reality of the world into opportunities to establish in-group identity.

The adversarial technique at play turns an information channel into an inclusion/identity channel. The result is that instead of recognising the information for what it is and identifying it accurately, a participant in propaganda responds in whatever way is expected to achieve inclusion.

Propaganda, notes Delacroix, “reframes the stakes of the issue as inclusion or exclusion — regardless of whether this framing is really relevant to the problem at hand and often despite it clearly being irrelevant — and it provides clear answers to the question of what will lead to inclusion.”

Fools follow rules

Jason Stanley, author of How Propaganda Works, agrees that propaganda is not simply malicious or biased communication. It can work for good or bad causes, but it is a part of a mechanism “by which people become deceived about how best to realize their goals.”

Stanley identifies another adversarial mechanism: the stereotype, or “social script.” The social script is a part of a mental model so strong that novel inputs are not recognised and are instead reduced to a previous category. This is the opposite of the machine that sees a panda and identifies it as a gibbon: here, the human ignores the deviation from their existing model and carries on as if the previous model were accurate.

As an example of social scripts, he notes in an interview with Roxanne Coady that “Every German raised under National Socialism has only three pictures in their head when they hear the word ‘hero’: a racecar driver, a panzer truck driver and a storm trooper” (22:12), and that in the US there have been attempts to create a strong link between immigration and crime or blackness and violence.

Social Script: “Mexicans are rapists and animals”

Input: Mexican doing landscaping

Output: “You’re a rapist and an animal”
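
One way to see why the script never updates is to treat the stereotype as a Bayesian prior (a toy illustration of mine, not Stanley’s). When the prior is held with near-certainty, even strong contrary evidence barely moves the belief:

def posterior(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Evidence that is 100x more likely if the stereotype is false
# (likelihood ratio 0.01), applied to two different priors:
print(posterior(0.999, 0.01))  # ~0.91: the script-holder barely budges
print(posterior(0.5, 0.01))    # ~0.01: an open mind updates sharply

The landscaper is exactly this kind of contrary evidence, and a near-certain prior simply absorbs it.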

Said it was blue, when ya blood was red

These techniques are doubly adversarial.

Firstly, they prevent the person, whether lured into a tribal channel or running a social script, from seeing the world the way it actually is.

Secondly, because the participant in the propaganda is unaware that they are interacting in the tribal channel, or that the social script has distorted their experience of reality, they do not update their mental model.

Propaganda disrupts the operation of the mind as a surprise management machine. That machine needs to reduce the attention/energy it gives to non-threatening novel inputs, and to improve its ability to detect threatening ones.

Propaganda, particularly what Stanley labels authoritarian propaganda, does the opposite. It heightens fear of and outrage at others whose threat is disproportionately inflated, so that they become a huge attention/energy drain.

At the same time, it dulls the surprise response to incidents that should be genuinely alarming, that merit closer inspection, and that should update our mental models accordingly. All the while, those mental models are doing the heavy lifting of perceiving and sensemaking.

Manage Surprise

Given that we are subject to, and participants in, propaganda:

What can we do to ensure that our mental model is the best it could be? That it best tracks reality?

How do we avoid wasting energy/attention on inputs that are neither as surprising nor as threatening as they seem?

How do we avoid dulling the surprise of threatening inputs and failing to update our models?

This is a call for audience participation.

One answer is to pay close attention to our responses. Let the idea that you are “training” a model in your brain seep in.

Observe your interactions: Did you communicate from the perspective of seeking inclusion, or from the perspective of trying to see things the way they really are? (Don’t be ashamed of seeking belonging — that’s what the game is ultimately about — just be aware of which motivation is guiding your thinking.)

Feel suddenly surprised/outraged? Assess whether the message genuinely merits it and what the source was.

Feel blasé and find yourself thinking “it’s just the way it is”? Look further.

Be curious and read across a range of perspectives. Seek the kind of surprise that updates your mental models.

Which social scripts are you running? Which people make you think “they’re all the same” — Immigrants? Experts? Politicians? Gun owners?

Be the gardener of your mind.

Inspiration for this article came from the Distributed Memetics post and my two days at CognitionX, where Alex Kaula introduced me to the notion of surprise avoidance.
