Why is Facebook so useful for information warfare?
Russian ads around the US election, pro-Brexit influence campaigns, Trump’s Twitter monologue, and so on. Information warfare is not a new phenomenon by any means, but it has gained significantly more gravity and influence lately. Why are modern social media platforms such effective instruments for spreading it?
First things first: in order to examine the mechanisms, let’s define information warfare. According to Techopedia, it is the tactical and strategic use of information to gain an advantage. A key method is the influence campaign, defined by the Swedish Civil Contingencies Agency as coordinated activities by foreign powers, including the use of misleading or inaccurate information, to influence political and public decision-making, public opinion, or opinions in another country.
The first thought about information warfare and social media is, of course, that since Facebook (let’s use Facebook as the example, even though this applies to most advertising-funded social platforms) has so many users, it is a good platform for spreading influential disinformation. That’s true, and it’s the same reason radio and TV increased the effectiveness of information warfare when they were widely adopted. However, some of the mechanisms Facebook is built on make it even more efficient as an information warfare platform than radio and TV. The scary thing is that those mechanisms are embedded in the foundation of Facebook’s business model, so they are unlikely to change despite the good intentions that Facebook’s executives keep repeating.
Everyone knows Facebook makes its money by showing targeted ads. The more ads it shows (and gets its users to click on), the more its clients pay. Of course, the more minutes users spend on the platform each day, the more ads Facebook is able to show them. Naturally then, as a profit-seeking company, Facebook builds its algorithms to maximize the time its users spend on the platform. So far, pretty obvious.
But what makes people spend more time scrolling their news feed? Engagement, which is also what Facebook has publicly said it is after. But how do you get and keep people engaged? People are generally more engaged with content that makes them feel something, and perhaps even react to it. Since neutral items aren’t as engaging, the algorithms naturally favor strongly positive or negative ones. That doesn’t sound too bad yet.
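To make that mechanism concrete, here is a minimal, purely illustrative sketch of engagement-based feed ranking. Every name, weight, and field in it is an assumption made up for this post; it is not Facebook’s actual ranking model, only a stand-in that captures the incentive described above.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    sentiment: float  # assumed score in [-1, 1]: -1 strongly negative, +1 strongly positive

def engagement_score(post: Post) -> float:
    """Toy engagement score: reactions that take more effort count for more."""
    return post.likes * 1.0 + post.comments * 3.0 + post.shares * 5.0

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed purely by predicted engagement; sentiment plays no role."""
    return sorted(posts, key=engagement_score, reverse=True)
```

The only point of the sketch is that nothing in the objective cares what kind of emotion produced the clicks.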
If we now take a step aside and consider the age-old question of why the news is generally so much more negative than positive, we find that the same principle applies to social media platforms as well. The news doesn’t contain more negative articles because too few positive things happen in the world, but simply because more people click on and read (that is, engage with) the negative ones. That’s also why shocking news spreads much more effectively than reaffirming news. Recent research has found that false news spreads faster and further than true news on Twitter, and that the emotions it inspires are largely negative (fear, disgust, and surprise).
If this principle applies to Facebook content as well, it means that the engagement-optimizing algorithms that determine exactly what content you see will inadvertently prioritize negative content from across your extended circles and connections. Before you dispute this by pointing out that you react with a ‘like’ more often than with a sad emoji, let me clarify that the sort of negative content I refer to is usually something you agree with. For example, the post below aims to gather likes, shares, and supporting comments, but it does so by inciting negative emotions toward the pictured figures. This particular post comes from a newly uncovered disinformation campaign removed by Facebook (source: New York Times).
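Continuing the toy sketch from above (with made-up numbers), this is roughly how such content wins the ranking: the outrage post provokes comments and shares, which is exactly what the score rewards.

```python
posts = [
    Post("Local library extends its opening hours", likes=40, comments=2, shares=1, sentiment=0.3),
    Post("THEY are destroying everything you hold dear!", likes=35, comments=60, shares=45, sentiment=-0.9),
]

for post in rank_feed(posts):
    print(round(engagement_score(post)), post.text)
# 440 THEY are destroying everything you hold dear!
# 51 Local library extends its opening hours
# The outrage post ends up on top by a wide margin despite getting fewer likes.
```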
So if this is the type of content that Facebook’s algorithms prefer, even slightly, and that preference is widely known and exploited for information warfare purposes, is Facebook doing something about it? Maybe, but more and more people have recently pointed out how insufficient the changes are. For example, Facebook’s new feature meant to increase trust in advertising can easily be used by fraudulent parties to make their influence campaigns more believable. The single change with the biggest impact Facebook could make would be to stop its algorithm from prioritizing engaging content when that content is strongly negative. But as engagement would decrease, so would profits, and that makes Facebook unlikely to rein in its algorithms. The future direction of Facebook, and of social media in general, is hazy, but their usefulness for information warfare will in all likelihood be a defining factor.
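To close with an illustration of what such a restriction could look like in terms of the earlier toy sketch (again with hypothetical thresholds and weights, not anything Facebook has announced), it would be enough to down-weight strongly negative posts before ranking:

```python
NEGATIVITY_THRESHOLD = -0.6  # hypothetical cutoff for "strongly negative"
NEGATIVITY_PENALTY = 0.1     # hypothetical down-weighting factor

def restricted_score(post: Post) -> float:
    """Engagement score, but strongly negative posts are heavily down-weighted."""
    score = engagement_score(post)
    if post.sentiment < NEGATIVITY_THRESHOLD:
        score *= NEGATIVITY_PENALTY
    return score

def rank_feed_restricted(posts: list[Post]) -> list[Post]:
    """Same feed ranking as before, but with the negativity restriction applied."""
    return sorted(posts, key=restricted_score, reverse=True)

# With the example posts above, the outrage post drops from 440 to 44,
# below the neutral post's 51 -- and with it goes some of the engagement it drove.
```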