Information Warfare: Russian Propaganda, IRA and How to Hack Society Through Social Media
Manipulating the masses is easier than ever, and social media platforms are doing nothing about it. We need regulations.
I remember this vividly. It was 2016 and I happened to be living in Shanghai, China. I worked with an international team, and I met people from all around the world. Americans, British, Spanish, French, you name it. And I remember the arguments about what would happen if Trump won. Definitely none of my buddies expected it.
Trump won. Everybody was surprised, especially my American colleagues. Then Brexit happened. It made no sense; my British friends couldn’t believe it either. And then I started to notice how, in my own country (Spain), the tension over Catalan independence escalated to levels we hadn’t seen in years.
Back in 2016 you couldn’t help but notice that something was going on. You could feel it. Was it all a coincidence?
I hadn’t connected the dots at that point. But then in 2017, I happened to be in Myanmar. You could feel the hatred, the pride, and the alienation in the streets. And the situation didn’t end well: by 2018 it was clear that a genocide had been incited on Facebook, through posts from Myanmar’s military.
Why did this happen? Is there any connection between all these events? Definitely yes.
Today we no longer have problems understanding what’s going on here. In some cases we have clear data, in others we don’t. But this is what’s happening: Today mass manipulation is easier and more impactful than ever before. There are adversaries exploiting our vulnerabilities. Social media platforms avoid any responsibility, so they can protect their business models. And regulators don’t tackle the problem properly.
We all have heard about Cambridge Analytica and AggregateIQ regarding the 2016 US election and Brexit. But it’s about time we have some serious conversations about IRA, Russian propaganda and the ridiculous power of social media platforms that enable these scenarios. But even more important, there’s not enough talking of the real reason this can happen in the first place.
How to manipulate the masses.
Social media platforms are the best way to manipulate the masses. And if you think about it, there are three points that make this possible:
First, mass consolidation of audiences on just a few platforms. If you want to run a manipulation campaign, you just need to game a few platforms and you’re good to go.
Second, the targeting precision is mind-blowing. These platforms make money because people spend time on them. They’re attention brokers. So they gather information on users for two purposes: (1) They know what to show them in order to keep them on the platform; and (2) This information helps advertisers target them.
Third, you can game algorithms easily. First of all, algorithms can’t (yet) tell the difference between right and wrong. They just can’t. What they can tell is engagement: which content gets clicks. Algorithms know that if they show this content to that person, that person will spend more time on the platform. And you know what kind of content gets the most clicks? Outrage. Ask TV channels… The point is, algorithms can’t tell when they’re causing harm. Algorithms don’t care, because they weren’t designed for that purpose. And when manipulating the masses, it’s not that complicated to game these algorithms by giving them what they want: content that keeps people on the platform.
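To make the point concrete, here is a deliberately simplified sketch of an engagement-driven feed ranker. Real platform systems are vastly more complex and proprietary; every name below is hypothetical. What matters is what the objective function *lacks*: there is no term for truth, harm, or intent, only a proxy for attention.

```python
# A toy engagement-driven feed ranker. All names are hypothetical;
# real recommendation systems are far more complex than this sketch.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int        # engagement signals the platform can measure
    impressions: int   # how many times the post was shown

def engagement_score(post: Post) -> float:
    """Predicted engagement: here, just the observed click-through rate."""
    if post.impressions == 0:
        return 0.0
    return post.clicks / post.impressions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is absent: no term for truthfulness, harm, or intent.
    # The ranker optimizes a single proxy signal, engagement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm policy analysis", clicks=3, impressions=100),
    Post("Outrage-bait headline", clicks=40, impressions=100),
])
print(feed[0].text)  # the outrage wins
```

Gaming a system like this doesn’t require hacking anything: an adversary only has to produce content that maximizes the one signal the ranker rewards.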
The thing is, all these social media platforms pose a real challenge as recommendation engines. They’re media companies that aren’t being held to adequate standards of culpability and responsibility.
But why does this matter? Because they’re curators. They surface things, and sometimes the things they surface are not in our best interest.
When you connect the dots, you realize that these three points lead us to a debate about a gap in our understanding of freedom of speech. Taken to the extremes, it presents a few challenges, and external organizations take advantage of the situation. The real question here is:
Should we stop recommendation engines from showing content we don’t want them to show? Or is even suggesting that a form of censorship?
That’s a hell of a question — not easy to answer. Anyway, before we rush, let’s first understand the big picture, and maybe then we’ll be able to answer it. So, where do we start? In 2016.
2016 was a turning point. A lot of things happened, from attempts to hack state voting systems, to cyber-attacks on the Democratic National Committee that leaked emails and other materials (which happened to put Clinton at a disadvantage). But this turning point was really about an information war that had just gotten started. Even though this war had been going on for years, in 2016 we started to recognize the power of social influence based on disinformation, using bots and fake personas to target specific groups.
Cambridge Analytica and AggregateIQ are just the tip of the iceberg. Now it’s time to talk about IRA, the Russian organization that knows how to manipulate the masses.
IRA: Hacking the American culture.
The Internet Research Agency (IRA) is a Russian company, based in Saint Petersburg. You can think of them as a social media marketing agency combined with an intelligence agency.
IRA started out spreading propaganda to Russian citizens and Ukrainians. It began on Twitter with the invasion of Crimea, where they created accounts posing as Crimean citizens celebrating the annexation vote. Then they moved on to conspiracy theories against Ukraine, spreading disinformation about MH17, the Malaysia Airlines flight shot down by a missile, killing all 298 people on board.
It was focused inward, but not for long. Around 2015 they turned their attention to the United States, under an operation known as Project Lakhta.
This precedes the 2016 US election. It wasn’t a quick, one-off social media operation where they messed around with one election; it was a long game in which they developed trusted relationships with Americans. They were in it for the long haul.
Project Lakhta: Pride and tribalism.
If you check New Knowledge’s magnificent report (read it if you want to dive deep), you’ll see that it wasn’t just about exploiting divisions in society with some memes. It was much more than that. They built communities, pages, and fake personas pretending to be Americans, all appealing to tribalism.
And when you get into the details of who was targeted and how they played one community against another, you discover that this operation was more sophisticated and dehumanizing than anyone had imagined.
From a communication point of view they did something remarkable. In marketing, when you start building a brand or creating a campaign, before you do anything else you’ve got to sit down and get the basics right. You stand at a whiteboard and sketch ideas until you can answer one question: What is this about?
Once you know that, then you can start strategizing and find ways to make it happen.
IRA knew exactly what it was all about: pride. And the way to get there? Alienate communities through a constant drumbeat of pride.
I’m a marketer by training. And I’m obsessed with the communication process — which means I focus a lot of my attention on understanding how the mind works, and how brands, governments, agencies, or whoever it is, move people (emotionally) from point A to point B.
The thing to understand here is that, the communities IRA created, in a sense, were brands. And when you see the world through that lens, you start getting interesting observations.
There are great books about the relationship between brands and tribes. Fantastic books like Tribes, or The True Believer, or even The Culting of Brands. All these books appeal to our need to belong. And that doesn’t mean it’s a bad thing — there are tribes everywhere. You probably belong to more than one.
The problem comes when you use this knowledge and exploit every single vulnerability in human behavior and alienate people.
In the book The Culting of Brands, Douglas Atkin, global head of community at Airbnb, says the following:
“The common belief is that people join cults to conform. Actually, the very opposite is true. They join to become more individual. At the heart of the desire to join a cult, in fact any community to which you will become committed, is a paradox.”
“The cult paradox dynamic can be looked at in terms of these four basic steps:
1. An individual might have a feeling of difference, even alienation from the world around them.
2. This leads to openness to or searching for a more compatible environment.
3. They are likely to feel a sense of security or safety in a place where one’s difference from the outside world is seen as a virtue, not a handicap.
4. This presents the circumstances for self-actualization within a group of like-minded others who celebrate the individual for being himself.”
And this, in a nutshell, is what IRA did.
Analyze any community IRA created and you can check every box on those four steps.
They created extended relationships with Americans. And these pages were all designed around the idea of pride. They reinforced it again and again. They created tribes. Then, drip by drip, they sneaked in content that was either political or divisive.
And they were able to push this content to millions of people using ads to reach their targets.
Then they sold them merchandise, because that’s how they reinforced the need to belong: by giving people something to wear that visibly identifies them with the tribe.
And they didn’t just make money from it; they also gathered information about their customers, which allowed them to retarget them, leading to more manipulation.
They developed relationships over the long term. The way they played it was to push the idea of pride for over a year, and then, when the election came, say things like “as black people, we can’t vote for Hillary.”
They created rage. And people were engaging in these conversations on a daily basis thinking it was a real community.
But this wasn’t just about some memes on social media. They reached out to individuals in these communities to bring the operation into the real world.
And that’s an intelligent way to intensify the rage: taking things from online to offline. They managed to organize protests even though no one was in charge. Just set up a Facebook event, and people who followed their pages would sign up. And to make sure those people actually showed up, they’d simply send them a message and confirm.
So far so good. But when you set up two protests of opposing groups in the same place (literally across the street from each other), fireworks ensue. And that’s what happened in Texas, when they assembled anti-Muslim and pro-Muslim protests at the same spot.
I’m sure they were having fun. Their offices must’ve looked like a comedy writers’ room, especially when, in 2017, they started joking about the idea that Russia was involved in all of this. Russians joking about Russian interference.
And while people were having fun with their memes, the Russians were achieving their goal: infiltrating activist communities. It’s not a new tactic; it’s been happening since the Cold War. But the scope is far greater today, and they can even do it remotely. They don’t have to send spies; they just engage directly with American citizens.
And still, people just think this is a media problem… This is much more than that.
This is much more than a media problem. It’s information warfare.
When people think about war, they usually think about the destruction of physical infrastructure, death, and combat. Push them to think about what other forms war can take, and they might come up with cyber-attacks on infrastructure (which happen). But the game has changed.
In today’s world wars take a different form. Now adversaries analyze all the vulnerabilities and find ways to crack their enemies from unexpected places. And social media has made it crystal clear: They’ve found that what makes us strong also makes us vulnerable. They’ve recognized that we have a fundamental democratic society with freedom of speech and expression. And that’s exactly where they’re hitting us. Hard.
And when I say adversaries, I don’t just mean external agents, but internal ones too: whoever seeks to exploit and manipulate the masses to serve a purpose that’s not in our best interest.
We need some serious regulations. We need vigilance.
So here we come back to our initial dilemma:
Should we stop recommendation engines from showing content we don’t want them to show? Or is even suggesting that a form of censorship?
Now, knowing what you know, what do we do?
Before you started reading this article, your System 1 (the irrational, impulsive part of your brain) would’ve answered right away, without hesitation. But it’s not as clear as it looks.
We’ve got a dilemma with these platforms. We want them to let us exercise our freedom of speech. But at the same time, we don’t want them to filter and suppress that speech. What should we do?
This situation looks like those superhero movies where the good guy (apparently our governments, though I’m not always sure about that) is about to do something. The dilemma in those movies is usually about doing something bad for a greater good. The problem is that the greater good can get fuzzy along the way, until we no longer know with clarity what it is.
“You either die a hero, or you live long enough to see yourself become the villain,” as Harvey Dent says in The Dark Knight.
But delaying the problem won’t solve anything. We need to tackle it. Now.
We’ve got a fundamental democratic society, and these platforms and adversaries are screwing with it. They’re messing with our freedom of speech, freedom of expression.
This is an asymmetric war. It’s asymmetrical because we allow every form of speech as long as it’s not a direct call to violence. And again, what makes us great, also makes us vulnerable. The truth is that there isn’t an easy workaround here.
Maybe we don’t have the answers for this dilemma yet, but that doesn’t mean we shouldn’t do something about it.
I do know a few things we can do. As citizens we can act, but we need to get serious about the root of the problem; the fact that these things can happen in the first place is astonishing. We also need to make exponential changes, because right now we’re just playing catch-up. We totally missed the boat.
There are three things that we must do right now:
1. Digital Hygiene.
Do you remember the early days of the Internet? Whether you were in forums, on MySpace, or hanging around blogs, I bet you didn’t know exactly who you were interacting with. You were skeptical. But that changed when Facebook showed up.
From day one, Facebook knew that its main advantage was that everybody was who they said they were. It was a platform where everybody used their real name. Over time, that reduced the skepticism, and people became more trusting of whoever they engaged with online. And that, in a nutshell, is what makes these platforms suitable for manipulation.
But now we need a serious detox, and we need to adopt some digital hygiene measures. We need to bring back some of the healthy skepticism of pre-Facebook times.
The reason these situations happen is due to our cognitive vulnerabilities. As human beings we’re biased, and we take sides. It’s human nature. We can’t change that. What we can do, though, is to educate people on these issues.
The thing is, we’re all vulnerable. Nobody is immune. But the worst off are the people who think these tricks don’t work on them… the tricks work especially well on them.
There has to be some education on this.
Consider Estonia. A quarter of its population is Russian-speaking and gets its news from Russian media. So, to keep propaganda from spreading (or paving the way to something worse), Estonia is especially conscientious about educating its citizens about propaganda and about which news sources are reliable.
This makes total sense, but most Western countries have a problem: a (well-deserved) skepticism toward mainstream media.
This lack of trust is a fundamental problem, because when you avoid traditional media and find your own sources on the Internet, you’re more vulnerable. Not because traditional media is trustworthy, but because you no longer have that skepticism toward digital media. (This is also why influencer marketing is so popular these days.)
Governments should be more transparent and communicate these propaganda attacks to their citizens.
Either way, let’s not play the blame game; let’s move forward. We need digital hygiene. It’s not about being paranoid, but about being aware that, on the Internet, you might not be dealing with a real person, or with who you think you’re dealing with. We need to get back to the mindset we had in the early days of the Internet.
2. First line of defense.
The problem with these social (publishing) platforms is that they have a disease ingrained very deeply in them: advertising.
Twitter doesn’t seem to have a clearly defined business model, but Facebook does. Its business model is well built. It’s a cash cow. And it’s a business model that rewards outrage and sensationalism. It’s crazy that we let them keep doing this. They say they’re trying to curate against those things, but in the end they amplify them, because that’s their business model.
So they prefer to ignore the obvious and keep making big dollars with their system.
Let’s get serious now. We’re at war. This isn’t a peacetime problem you can solve by pushing social media platforms to moderate a little bit better. We need regulations. Global regulations. Ads need to be regulated. For good. We can’t tolerate their behavior. Especially when someone with an account in Russia spends millions of dollars targeting the US market. (Yes, they can use VPNs, but as they say ‘follow the money’.)
I hate the “a company has to make a profit” line of thinking. It’s true, and it’s totally legitimate. But I believe we’re at a point where companies must take on social responsibility, especially when they touch billions of people directly.
Some people might hate this but, these platforms have to be in communication with governments — information needs to flow. Now, this isn’t an excuse to violate people’s privacy.
Information sharing is critical if we want to preserve our privacy at the ultimate level: our own thoughts.
In the early days of Facebook, nobody foresaw the consequences of connecting everybody. But this is exactly what happens when you give everyone a window into everyone else’s life, and a direct way to communicate.
All these platforms are our first line of defense, so it’s nonsense that they excuse themselves from that responsibility.
According to New Knowledge’s report, Twitter, Facebook, and Google “did the bare minimum possible” to meet the Senate committee’s requests. The thing is, once it became clear that these platforms played a key role in the election, it’s not too crazy to think they might have removed critical metadata from the information they submitted.
If you take a look at their report, you’ll find that 96% of the content targeted African-Americans. Yet Alphabet (Google) denies the obvious.
None of them are cooperating. In fact, there were significant internal efforts at Facebook to ignore the Russian problem. Sheryl Sandberg even accused Alex Stamos (Facebook’s former Chief Security Officer) of betrayal for raising the Russian issue with the board. They just delay and deny, over and over.
They deny their responsibility. And when the only ones who can protect us in this information war don’t want to cooperate, that leaves us in a pretty bad scenario.
Of course it is their responsibility. It absolutely is. If these actors are discouraging people from voting, they should shut it down or do something about it. Or at least flag it, damn it.
These adversaries are using ads to reach their targets. Isn’t it about time we regulated the advertising industry? Looking back at the subliminal-advertising days, it seems crazy that it was ever allowed, right? So why doesn’t today’s situation look just as crazy? The scope is far bigger and the tools for manipulating the masses are far better, yet regulators aren’t getting serious.
There’s enough BS going around, with easy workarounds to escape legal responsibility. We need to hold these platforms legally responsible, and enforce it. Private interests must never prevail over the public interest. Ever.
3. Now or never. We need to change exponentially.
At some point in the near future, these messages won’t be crafted manually by people, but by AI agents. And unless governments catch up and do something about it, we’ll be left on our own. Even that might not be enough…
Elon Musk said in an interview with Axios (on HBO):
“Probably a bigger risk than being hunted down by a drone is that AI would be used to make incredibly effective propaganda that we’re not seeing like propaganda.
“Influence the direction of society. Influence elections. Artificial Intelligence just hones the message, hones the message, check, looks at the feedback, makes this message slightly better… within milliseconds it could adapt its message and shift and react to news. And there’s so many social media accounts out there that are not people — like how do you know it’s person or not a person.”
“The way in which a regulation is put in place is slow and linear. And we are facing an exponential threat. If you have a linear response to an exponential threat, it’s quite likely the exponential threat will win. That in a nutshell is the issue.”
The funny thing is that this technology already exists. These adversaries are using exponential technology, technology that gets better every day. But regulation is linear. And when I see the pace at which governments and organizations are moving, all I can think is that we look like children in a playground.
You can’t help but notice how unprepared we are to deal with these challenges. People don’t even think this is a global problem.
The safety of the whole line
Steven Pressfield (one of my favorite writers) wrote in Gates of Fire, his novel about the Battle of Thermopylae (the battle you see in the movie 300):
“Spartans excuse without penalty the warrior who loses his helmet or breastplate in battle, but punish with loss of all citizenship rights the man who discards his shield. A warrior carries helmet and breastplate for his own protection, but his shield for the safety of the whole line.”
In a way, these platforms are our shield. We need to protect it individually, but collectively too. Governments need to focus more on these shields. In other words, we need to focus on what really matters, and not think short-term just because presidencies last for four years.
We need to take the leap and up our game exponentially. And, relentlessly, punish those who discard our shields. Because in this information warfare, our enemies are ruthless.
This propaganda war continues in 2019. There are bots all over the Internet, infiltrated into our communities, unseen. They sit ready, waiting to be used in the next malicious campaign.
We know we’re full of sh*t. And yet we’re doing very little to get ourselves out of this. Instead, we’re just playing the blame game.
Platforms are not incentivized to cooperate. Their business models are clear. It’s better for them to violate our privacy to the limit, absorb whatever penalty they get, and move on. And on the other hand, regulators just look for a quick win with stupid short-term tactical bills like the Bot Law.
Welcome to the Information Warfare. Fasten your seat-belt. This is gonna be a hell of a ride.