Information wars: What to do when your mum wants to send you diamonds

Evan Thomas
Published in The Zip Files · 7 min read · May 15, 2018

Quiz question: What’s the difference between misinformation and disinformation? *Countdown timer*.

I was on holiday in Cornwall, the bit at the bottom of England that sticks off into the sea. It was not so long ago, maybe 5 years now. The sun was shining, or at least it was in my memory, and I was full from a well-earned lunch. Well earned by virtue of having vaguely strolled along the beach that morning. A few of us were sitting outside, chatting about everything and nothing. And then suddenly I’m being told that I eat hundreds of spiders a year.

Just to clarify, I am not a voluntary spider eater, and so you can imagine my shock. “I do not,” I probably retorted. My friend was adamant. “You do,” she probably affirmed. I wasn’t sold. I went away, had a google, and saw that there was a common myth that sleeping humans accidentally chow down on 8 spiders a year. My friend had taken this myth to a new order of magnitude. But she didn’t intend to deceive. This was a case of misinformation.

My younger brother used to love all things sweet. I used to enjoy winding him up. So it was his birthday, maybe like his 4th or 5th, and in classic style he had candles on his birthday cake. I saw an opportunity for a gag. “Alright Daniel, well done on blowing those edible candles out. Why don’t you have the first bite?” So the poor boy ate the biggest of the certainly not edible candles. This was a case of disinformation. I had intentionally deceived.

Information is a powerful thing and many think that its manipulation is now more threatening than armies or bombs. In 2016 Russia led a state-sponsored campaign on social media to influence the direction of the US presidential elections. Something that nobody had the foresight to stop. Since then it’s become increasingly obvious that Big Tech has unwittingly created a vehicle for World War III. Only this time it isn’t being fought on the fields of Flanders, but the feeds of Facebook.

The war is in its nascent stages, it’s the 21st century’s 1939. Artificial intelligence and an ever more connected world will soon bring us to a bitterly worse reality than we might expect. A reality which is indistinguishably blended with falsehoods. Information attacks have become a serious threat in a short space of time. Back in 2014 the World Economic Forum placed the “spread of misinformation online” as the 10th most significant trend to watch that year. Then less than 2 years later, it was directly affecting American democracy through vehicles like the now shamed and bankrupt Cambridge Analytica. By the way, it’s not just pesky Putin and the world’s autocrats that use information to alter hearts and minds. A study at Oxford University found that since 2010, 28 countries have run organised social media manipulation campaigns. Of those, the USA has the most cyber-troops waging state-sponsored cross-border information wars.

The thing is that we barely know what to do in the face of these geo-political digital information attacks. There is no Geneva Convention or UN treaty to define the scope and punishment of cyber meddling. Indeed the world’s leaders haven’t done all that much except turn to Big Tech and say “You’re very silly, aren’t you? Can you stop being silly — Oh and by the way what’s the internet again? My granddaughter likes it”.

What can we expect the future of the global information war to look like? Well, as identified by Aviv Ovadya, Chief Technologist at the University of Michigan’s Center for Social Media Responsibility, it will be waged on three fronts.

  1. Diplomatic and reputational manipulation
  2. Automated laser phishing
  3. Computational propaganda

Okay, there were some big words there. Let’s dive into what each of those three militarised fronts will look like.

Diplomatic and reputational manipulation is the creation of disinformation to influence geopolitical decisions or attack a person’s reputation. It is said that a picture is worth a thousand words and that video is then worth a million. If you see a video of your friend doing a backflip into your estranged uncle’s paddling pool, your initial reaction would probably be: “Damn, nice backflip”, and “Wow my uncle’s got a paddling pool”. Not, “Wait a second this can’t be real”. We believe video. Video is hard and expensive to believably fake. Or at least it has been traditionally.

Researchers at the University of Washington have recently used AI to create videos of Obama saying things that he never actually said. I’ve watched the videos, and they are terrifyingly believable. In fact, I would never have questioned their authenticity had I not known they were fake.

University of Washington synthesizing Obama

It doesn’t take a big brain then to realise that fake videos loom over the information war like an atom bomb.

And then there’s our second front: Automated laser phishing. Don’t worry it’s not Arnold Schwarzenegger firing light beams at shoals of Haddock. Actually do worry, it’s much less funny. Laser phishing is when AI impersonates someone that you know in order to get you to do something that you wouldn’t otherwise do. Like, for example, giving it your credit card information — or, if you’re a big dog — leaking state secrets. I know to ignore a Nigerian prince who pops up in my email telling me I’ve won loads of diamonds. On the other hand, if a fake account believably purporting to be my mum told me I’d won loads of diamonds, I’d probably email her anything she wanted. Let’s hear from Aviv Ovadya:

“Alarmism can be good — you should be alarmist about this stuff… we are so screwed it’s beyond what most of us can imagine”

Thanks Aviv.

And then there’s our final front: Computational propaganda. This is the exploitation of things like social media algorithms to wage widespread public influence campaigns. The vulnerability is cooked into the way that newsfeed algorithms like Facebook’s work. They prioritise content that is most engaging and shareable. One of the first things we spoke about on The Zip Files is that fake news is the most engaging and shareable of all news. As humans, we love novel and shocking stories. When you consider that 40% of the world’s population are on social media and that for many it is their primary way of staying informed, it is worrying to know that falsehoods are so readily spread on the platforms.
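The mechanism is simple enough to sketch. Here is a toy feed ranker in Python; the post fields and engagement weights are invented for illustration, and this is emphatically not Facebook’s actual algorithm:

```python
def rank_feed(posts):
    """Toy newsfeed ranker. Each post is a dict with hypothetical
    'likes', 'shares' and 'comments' counts; the weights are made up.
    Ranking purely on engagement means the most shareable story wins,
    true or not -- there is no truth term in the scoring function."""
    def engagement(post):
        return post["likes"] + 3 * post["shares"] + 2 * post["comments"]
    return sorted(posts, key=engagement, reverse=True)
```

Feed it a sober correction and a viral falsehood, and the falsehood tops the feed, because nothing in the score rewards being right.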

But then computational propaganda is not just about spreading fake news. Last week Congress released all of the adverts that Russia ran on Facebook during the 2016 US presidential election. They were designed to create division, not necessarily by way of disinformation, but rather by way of presenting divisive messages and providing spaces for like-minded people to gather.

So how do we fight back against the intentional spread of fear, uncertainty and doubt? How do we wage war on these three digital information fronts? One thing is certain, there is no time to lose. We need to develop a scalable way to spot high-quality fake videos, images, and audio — because soon every Tom, Dick, and Henrietta will be able to make them. We need to work towards stopping computational propaganda or at least reducing its effectiveness. And we need governments to regulate whilst Big Tech steps up to make sure that truth becomes the currency most traded on their platforms rather than falsehood.

There are two ways to fight fake video that seem to be scalable. Both initiatives come out of DARPA, the Defense Advanced Research Projects Agency in the US. One method algorithmically detects manipulations in images and videos in order to flag them as fakes. The other cross-references an image or video against content already on the internet, in the hope that a fake assembled from existing material can be traced back to its constituent parts and labelled as synthetic rather than real.
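To give a flavour of the first idea, here is a toy Python sketch of one classical image-forensics check, copy-move detection: hash every small patch of an image and report patches that appear in more than one place, a tell-tale sign of copy-paste editing. Real detectors are far more robust (they must survive compression, rotation and noise); this block-hashing version is only illustrative:

```python
from collections import defaultdict

def find_duplicate_blocks(image, block=4):
    """Toy copy-move forgery detector. 'image' is a 2D list of
    grayscale pixel values. Every block x block patch is hashed;
    a patch seen at more than one location suggests a region was
    copied and pasted within the image."""
    h, w = len(image), len(image[0])
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            seen[patch].append((y, x))
    # Keep only patches that occur at two or more locations
    return {p: locs for p, locs in seen.items() if len(locs) > 1}
```

On a synthetic image where one corner has been pasted elsewhere, the detector returns exactly the duplicated patch and both of its locations.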

Stopping computational propaganda might, in fact, be harder. There are again two approaches that are hopeful, yet barely developed. Firstly we might try to algorithmically collect and categorise instances of digital propaganda to identify bots and deliberately misleading accounts. Secondly, we could deploy a ‘good’ bot network to disrupt the bad bots. A sort of neutralising tactic. The latter seems less than ideal.
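The first approach, at its crudest, is a scoring problem. Here is a toy heuristic bot scorer; the account fields and thresholds are invented for illustration, and real systems use hundreds of features and trained classifiers rather than three if-statements:

```python
def bot_score(account):
    """Toy heuristic bot scorer. 'account' is a dict with hypothetical
    fields; the thresholds and weights below are made up for
    illustration, not taken from any real detection system."""
    score = 0.0
    if account["posts_per_day"] > 50:        # superhuman posting rate
        score += 0.4
    if account["duplicate_ratio"] > 0.8:     # mostly repeated text
        score += 0.4
    if account["following"] > 10 * max(account["followers"], 1):
        score += 0.2                         # follow-spam pattern
    return round(score, 2)
```

A spammy account posting the same text hundreds of times a day scores near 1.0; an ordinary account scores near 0.0. The hard part, of course, is that propagandists adapt to whatever thresholds you pick.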

Let’s not run to Mars just yet though. The thing is that tech can fix tech. As long as there are more good tech people than bad tech people, we should be okay. Unless we take our eye off the ball, that is. And by the way, our eye has been firmly 180 degrees from the ball for a while now. Anyway, if we can swing that eye back around, mix in some healthy regulation that protects privacy and restricts the abilities of companies and nations to hyper-target internet users **cough** GDPR **cough**, and Big Tech carries on crawling out of its “oh Christ, we’re sorry, we didn’t realise we were so intertwined with the fabric of society” ignorance cave, then we should be fine. Hopefully, maybe. Fingers crossed. I’m still sleeping in my cupboard though and wearing tin foil gloves.

This piece was transcribed from The Zip Files — an irreverent weekly 20–25 minute podcast that I produce to help the busy millennial catch up with all of the week’s most important tech news. The episode in which this piece was featured is available on Apple Podcasts and Stitcher.



Full-Stack Developer || Lead Teacher at Le Wagon || Podcast Host at The Zip Files