Enter the Grey Zone

We often mistake Hollywood’s dramatic interpretation of cyberwarfare for reality. In fact, it is prosaic, pervasive and already shaping humanity’s future.

Words: Sophia Epstein
Illustration: Mike Lemanski

We’ve all consumed enough hours of Hollywood blockbusters for the expression “Cyberwar!” to trigger images of bright green code being hijacked by an evil mastermind intent on unleashing a world-ending disaster. But cyberwar exists in the world outside of movie studios and, although insidious, is far less dramatic.

“What people often get wrong about cyber operations is that they think that nations hacking each other is some kind of exceptional or extraordinary circumstance, like the use of a nuclear weapon,” says Ben Buchanan, senior fellow at Georgetown University’s Center for Security and Emerging Technology. But that’s not the case at all. “It’s an ordinary, daily circumstance,” he says. “Nations hack each other all the time.” It’s a constant struggle for advantage.

Typically, cyber-attacks are carried out for one of three reasons: to gather information, to spread misinformation, or to destroy information altogether. The consequences are varied: citizens are turned against governments; elections are rigged; entire industries get completely shut down — but countries don’t retaliate in the way you’d expect.

Cyberwarfare doesn’t play by the same rules we associate with conventional war. “There’s this grey zone between war and peace,” says Buchanan. “It’s not a neatly demarcated, ‘now it’s war time, now it’s peace time’, scenario.” To date, cyber tactics have been problematic and costly in a multitude of ways, but none has yet resulted in an actual declaration of war.

The lack of public retaliation between states means the grey zone keeps getting bigger and nations can get away with more and increasingly varied cyber tactics. “Looking back over the last 20 years, cyber-attacks are doing more damage and cyber-espionage is much more common than it was 20 years ago,” says Buchanan. “It’s pretty clear that the grey zone of cyber operations in which nations feel comfortable operating is continuing to expand and only becoming more aggressive.”

While cyber tactics may exist in a legal grey zone, they are all techniques typically associated with war: espionage, propaganda and strategic attacks. Espionage is as old as the nation state itself, but in the digital age, what spies can do with the information they gather is far more severe. As Buchanan wrote in his book The Hacker and the State: “Digital information is much more portable than paper.” Ergo, it’s easier to steal. “Before Daniel Ellsberg, the Vietnam War whistleblower, could leak the Pentagon Papers, he and his family and friends spent more than a year photocopying their seven thousand pages, one at a time. By contrast, once computer hackers gain access, they can often vacuum up and freely move similar volumes of information in hours, if not minutes.”

That could mean stealing and replicating US military weapons designs, as China is suspected to have done, or building up a wealth of information about the citizens of a hostile state and using that information to shape the way they think and behave. The more information available about an opponent, the bigger the advantage. This is how propaganda is used.

In 1993, two analysts from the think tank RAND published a paper called Cyberwar is Coming! In it, they described something they called “netwar”: “It means trying to disrupt, damage, or modify what a target population ‘knows’ or thinks it knows about itself and the world around it,” they wrote. That’s exactly what’s happening today.

“The grey zone of cyber operations in which nations feel comfortable operating is continuing to expand and only becoming more aggressive.”

In the same way that Stalin erased Trotsky from Russian history, cyber tactics allow administrations to spread misinformation. This new wave of “computational propaganda”, as digital misinformation expert Samuel Woolley dubs it in his book The Reality Game, is a type of “information warfare”, and it can be used to cause real-world damage. Bots were used by the military in Myanmar to start and fuel rumours about the Muslim Rohingya people, ultimately contributing to tens of thousands of murders. Chinese Twitter bots were used to disseminate lies about the Hong Kong protestors and discourage public support for their cause. Hatred for refugees in Europe was drummed up by far-right bots sharing fake news stories about women being attacked by asylum seekers.

“Bots, trolls, and sockpuppets can invent new ‘facts’ out of thin air; homophily and confirmation bias ensure that at least a few people will believe them,” writes political scientist PW Singer in LikeWar. “On its own, this is grim enough, leading to a polarised society and a culture of mistrust. But clever groups and governments can twist this phenomenon to their own ends, using virality and perception to drag their goals closer within reach. Call it disinformation or simple psychological manipulation. The result is the same, summarised best by the tagline of the notorious conspiracy website Infowars: ‘There’s a war on … for your mind!’”

On their own, bots can get a news story trending or legitimise an extreme movement with their phoney support, but they’re not the only tool being used for manipulation. A new breed of hyper-personalised, hyper-targeted ads gives anyone — from advertisers to governments — a way to make sure that a specific group of people sees a specific message, structured in a way that appeals specifically to them.

“The problem is microtargeting. The problem is being able to learn enough about people to target ads that are psychologically manipulative,” says security expert Bruce Schneier. “There’s a loss of autonomy and a loss of free agency.”

Psychological manipulation in advertising is nothing new, but the ubiquity of behavioural economics in our daily lives is. Though not intrinsically sinister, the discipline outlines simple switches that public and private bodies can make to nudge citizens and consumers into making certain decisions. The godfather of nudge theory, Richard Thaler, is clear that people will usually engage in behaviours that are easiest for them; placing fruit in a more prominent location than chocolate in a cafeteria, for example, can nudge kids to eat more healthily.

These nudges may be innocuous when they’re encouraging someone to change their diet or buy a subscription to a magazine, but the stakes are raised when subjects are nudged towards behaviours that have wider consequences. But, as Richard Shotton writes in The Choice Factory: “Nudges aren’t magic. They don’t transform the behaviour of every person, every time. The clue is in the name: they’re nudges, not shoves.”

Shoves are made possible by social media. Social networks, like Facebook, have access to a whole new world of personal information. As Anthony Nadler, Matthew Crain, and Joan Donovan write in their Data & Society report, Weaponizing the Digital Influence Machine: “With mass consumer surveillance, political advertisers can maximize the potential influence of their nudges by sifting through data to identify who is most likely to be influenced, what kinds of nudges or triggers they may be most affected by, or even factors like at what moments or in what moods a target may be most receptive.”

“What people often get wrong about cyber operations is that they think that nations hacking each other is some kind of exceptional circumstance. Nations hack each other all the time.”

The more someone knows about an individual, the more they know how to press their buttons, and the people who own your digital data know an awful lot about you. “The fundamental purpose of most people at Facebook working on data is to influence and alter people’s moods and behaviour,” wrote ex-Facebook software engineer Andrew Ledvina. “They are doing it all the time to make you like stories more, to click on more ads, to spend more time on the site.”

This is problematic in many ways. For one, says Schneier, “in our minds, advertising means showing people the advantageous features of a product, but it’s not that anymore, it’s psychological manipulation.” Mix psychological manipulation with political propaganda and you have a problem.

Social media platforms mine all the data necessary for intelligence operatives to engage in espionage and then hand it to them on a platter. Data and influence can simply be bought, and thanks to the anonymity of dark money, no one can track who is doing the purchasing. This is the scenario that played out in the 2016 US presidential election. As well as hacking into voter databases and Clinton campaigners’ email accounts, the Russian Internet Research Agency created groups and bought targeted ads on Facebook, which, according to research by Young Mie Kim, a journalism professor at the University of Wisconsin-Madison, focused on minority groups — typically Democrat voters.

In her research, Kim tracked groups with names like “Black Matters” and “Blacktivist”. For months these groups built up a following with empowering messages: “black is beautiful” and “don’t be afraid to rock your natural hair”. Then, a few months before the election, their messages shifted to “Boycott the election” and “Hillary doesn’t deserve black votes”. “This is not abnormal stuff,” says Schneier. “If Kellogg’s did that to sell cereal, they would have got a marketing award.”

Cambridge Analytica, which did win an advertising award for its work in the 2016 US election, wasn’t hired by a foreign power, but it too utilised Facebook’s data-fuelled, hyper-targeted ads to reach swing voters. Whistleblower Christopher Wylie called the tactics “information warfare” and “not conducive to democracy.”

These messages will only become more manipulative as technology evolves. As smart sensors and wearable tech become increasingly ubiquitous, we will be monitored in even greater detail, and more data means more accurate predictions about our behaviours. Shoshana Zuboff, Harvard Business School professor and privacy expert, interviewed several social media engineers for her book The Age of Surveillance Capitalism. One of them told her: “Connected smart sensors can register and analyse any kind of behaviour and then actually figure out how to change it.” Technology is already shoving us in all sorts of directions.

Misinformation messages will also become more convincing. Deepfakes are already being used to make politicians say things they never said. These AI-doctored videos are only getting more sophisticated. It’s not a common occurrence right now, “but the era of smart technology will be upon us soon,” Woolley writes. “These deceptive campaigns will grow more powerful, just as email scams have graduated from free-associative spam messages … to sophisticated phishing attacks.”

“If you can ‘win’ the internet, you can win silly feuds, elections, and deadly serious battles alike. You can even warp how people see themselves and the world around them.”

Ideally, social media companies would be tackling these problems head on. But the technology is evolving at speed and, realistically, they don’t have enough motivation. Protecting democracy comes at huge financial cost.

Facebook was fined $5bn by the Federal Trade Commission for the privacy violations of the Cambridge Analytica scandal — pocket change when you consider the company made almost $70bn in advertising revenue in the same year. What did cause the company serious problems was its hiring of 20,000 new content moderators in 2018, an act that wiped $100bn off its market value. Facebook’s share price has since recovered, but its data practices are still problematic, as evidenced by a recent boycott by some of its largest advertisers.

In 2016, these cyber tactics were used to alter the course of an election, but that’s only because that was the goal at the time. There are plenty more openings for manipulation. As Singer writes: “If you can ‘win’ the internet, you can win silly feuds, elections, and deadly serious battles alike. You can even warp how people see themselves and the world around them.” With stakes this high, should we really leave it to the CEOs of tech giants to decide our fates?

This article is from Weapons of Reason’s eighth issue: Conflict.
Weapons of Reason is a publishing project by Human After All, to understand and articulate the global challenges shaping our world.
