The Proliferation of Social Media — Book Review and the Wicked Problem

An excursion into the disinformation and online extremism problem followed by a brief discussion of possible solutions.

Jan Rixgens
The Startup
8 min read · Jan 28, 2021


Picture by Franki Chamaki

Recently, I got back into a topic that endlessly fascinates me: what I like to call the proliferation of social media, essentially the widespread and deliberate distribution of disinformation and the way our public discourse is affected by it.

Here I discuss two books on that topic that offer a narrow yet highly important perspective on how public opinion is being influenced, how individuals are being radicalized, and how the current democratic polarization is being exacerbated by social media. I then take a careful look at possible solutions to these complex problems.

But first, let’s quickly assess the scale of the disinformation problem, or “fake news” as it is fashionably called.

Disinformation — No Recent Development

Disinformation, as distinguished from misinformation, is the deliberate attempt to influence what we think and how we act. Without a doubt, it has been around for a long time (the first records date back to 44 BC, around Julius Caesar’s assassination), yet the scale of disinformation operations has clearly increased over the last decade, fueled by social media.

Still, it wasn’t until the 2016 US Presidential Election that the term “fake news” gained mainstream adoption, and, without irony, enabled its main beneficiary to move into the White House. A quick Google Trends search shows the relative popularity of “fake news” and how it has continued to occupy the minds of Google users around the world. Interestingly, the country with the highest proportional search volume for “fake news” over the past five years is not the US or a European state: it is Brazil, followed by Singapore and the Philippines. While Google Trends data alone is hardly conclusive evidence, it does suggest that disinformation is a truly global phenomenon.

Google Trends results for “fake news” worldwide and over the past 5 years
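For anyone who wants to poke at the numbers themselves, a query like this can be reproduced programmatically. Below is a minimal sketch assuming the unofficial pytrends Python package (which is not affiliated with Google and is not what generated the chart above; Google may also throttle such requests):

```python
# Minimal sketch, assuming the unofficial pytrends package (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["fake news"], timeframe="today 5-y")

# Relative search interest by country (Google scales every series to 0-100)
by_country = pytrends.interest_by_region(resolution="COUNTRY")
print(by_country.sort_values("fake news", ascending=False).head(10))

# Relative search interest over time for the same five-year window
print(pytrends.interest_over_time().tail())
```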

Christopher Wylie and Cambridge Analytica

In his book “Mindf*ck”, whistleblower Christopher Wylie lays out his account of how the UK-based SCL Group and its subsidiary Cambridge Analytica were able to leverage Facebook user data to spread disinformation among voters during the 2016 US Presidential Election. What reads like a best-selling dystopian thriller on psychological warfare describes the crude reality of how information-operation tactics and weak governance at private companies (most notably Facebook) influenced democratic elections on an unprecedented scale.

Wylie’s whistleblower story has made history, in large part thanks to the excellent reporting by British journalist Carole Cadwalladr and her coverage in the Guardian. Cadwalladr details a complex web of middlemen, from Trump’s former Chief Strategist Steve Bannon to hedge-fund billionaire Robert Mercer.

After the unmasking of Facebook’s laissez-faire role in the Cambridge Analytica scandal, Mark Zuckerberg had to testify before the US Senate and, in 2018, met with EU politicians to discuss the social network’s role in national democratic elections, its advertising-based business model, and, more broadly, its alleged monopoly over social media. What became apparent was that policymakers on both sides of the Atlantic struggled to understand the basic economics of Facebook and how the company profits from monetizing user data. Asked by Sen. Hatch how Facebook could be free for users if they don’t pay for the service, Zuckerberg gave the social-media-101 answer: “We run ads, senator.”

Julia Ebner on Extremist Online Groups

In “Going Dark”, Julia Ebner characterizes the “secret social lives of extremists” and vividly pictures how extremist groups form, thrive, and recruit online. Ebner is a researcher at the London-based counter-extremism think tank Institute for Strategic Dialogue and decided that, to really understand how radical online communities operate, she would have to go undercover and become part of them. After two years of intense, systematic research (among members of the radical “Trad Wives” community, at neo-Nazi rock festivals in Germany, and in ISIS Telegram chats), Ebner reveals the dark corners of the internet that are inaccessible to most of us.

What struck me most about “Going Dark” is how well organized most of these communities are and how strong a sense of purpose they offer their members, in many cases bordering on a cult of their own. Ebner’s research on “Generation Identity”, a major European far-right movement with ties to anti-immigrant parties, exposes a network of young, social-media-savvy extremists who know all too well how to create buzz and generate clicks.

Then there is the role of gamification: the idea of using gaming techniques to create engagement in a community. Reporting on 8chan forums and “meme wars”, Ebner shows that, for many users, the line between online trolling and real-world harm has become blurry. One of the lowlights of this development was the infamous Christchurch shooting, in which Brenton Harrison Tarrant, who had been radicalized online, killed 51 people in an attack on two mosques and live-streamed it on Facebook.

Ultimately, Ebner paints a chilling picture of how close radicalization and trolling campaigns are to the lives of ordinary social media users:

“An American schoolkid playing Fortnite in his bedroom may unwittingly become an agent for Russian propaganda, a Malaysian teenager can be recruited as a regional correspondent for ISIS fighters in Iraq and a German Facebook user commenting on a Süddeutsche article might turn into the target of a large-scale trolling storm.”

After reading “Going Dark”, and especially after digesting Ebner’s closing remarks, in which she interviews leading counter-extremism experts about their top concerns for the next five years, it becomes obvious that much more attention will need to be devoted to combating online extremism and disinformation. The fact that most extremist communities use publicly available technologies like Discord channels or Telegram groups makes this even more challenging. While there is certainly no silver bullet for this problem, a mix of initiatives might bring us closer to a more civilized and productive online exchange, as I lay out in the next section.

A Wicked Problem — and 3 Ways Forward

The problems that Wylie and Ebner describe in their books, from large-scale disinformation campaigns to online extremism, neatly fit the description of a wicked problem: one that is effectively impossible to solve because of its complex and contradictory nature. No single approach will change the fact that people are vulnerable to disinformation, and no single policy will undo the polarization on social media.

Still, the costs of inaction are very high, as shown by the recent storming of the US Capitol, which was partly planned on Facebook.

So what could mitigation mechanisms look like? There are at least three strands of solutions that could help alleviate the aforementioned problems. The list is by no means exhaustive, and each strand could fill an essay of its own, if not a professional career. The focus here is on brevity.

Educating Citizens

One catalyst for the striking success of disinformation is that overall information flows have steadily increased while the world has become more complex. Understandably, tracking the actual sources of information is time-consuming and possibly uncomfortable. Campaigns by fact-checking organizations or governments can help overcome some of this by educating citizens early on (see how Finland is doing it right), thereby strengthening users’ resilience ex ante.

Ex post, after a source containing disinformation is published, different mechanisms can limit some of the damage. Twitter launched its own fact-checking tool, Birdwatch, to crowdsource the checking of tweets. While this system can also be gamed by malicious actors or bots, the overall intention is good.

Then there are innovative startups like Factmata that aim to help users identify potentially misleading content using natural language processing (NLP).
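Factmata’s actual models are proprietary, so the following is only a rough illustration of the general idea: a minimal sketch that uses a publicly available zero-shot classifier from Hugging Face’s transformers library to attach plausibility labels to a headline. The model and label set here are my own illustrative choices, not Factmata’s method.

```python
# Rough illustration of NLP-based content flagging, not Factmata's actual system.
# Assumes: pip install transformers torch
from transformers import pipeline

# Publicly available zero-shot classifier; the label set is chosen purely for illustration
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

headline = "Scientists confirm drinking bleach cures all known viruses"
labels = ["factual reporting", "potentially misleading", "satire"]

result = classifier(headline, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

A production system would of course go far beyond a single classifier, combining signals such as source reputation, claim matching against fact-check databases, and human review.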

Policy and Regulation

A second way of tackling disinformation and online extremism is regulation. This spans a range of tools, from strengthening consumer data-privacy laws to content-moderation rules and even changes to intermediary liability.

The European Union, home to the world’s strongest data-privacy laws, presented its new online-services legislation in December 2020. The Digital Services Act will complement the earlier e-Commerce Directive from 2000 and sets out clear rules for all digital services, such as social media platforms, online marketplaces, and app stores. Large online service providers surpassing 45 million users (roughly 10% of the EU population) are given special consideration and need to adhere to external auditing and data-sharing rules. The new regulation will not, however, explicitly tackle disinformation, which social networks in the EU currently self-regulate on the basis of best practices.

While it might take several years until ratification, the Digital Services Act will likely change the way we consume social media. Hopefully, it’ll be for the better.

Business Models

Lastly, some of the current disinformation problem can be attributed to the advertising-based business models of big technology companies. Consumers are targeted with advertisements tailored to their preferences, which can leave them much more receptive to disinformation. Publishers themselves have largely switched to subscription-based models online, so why shouldn’t social networks do the same?

At least, this is what some people in tech think. By changing to a subscription-based service, revenues could be predicted more precisely while the platform simultaneously gains independence from ad dollars. Sure, Facebook has little incentive to pivot away from its highly profitable business model. But what if Twitter were the daring one?

Of course, there is a big IF in this question. Still, generally speaking, a change in social media business models would likely lead to a less polarized exchange online. Should such a move bring positive returns for shareholders, venture money would follow, leading new businesses to consider subscriptions over free, data-harvesting models.

The recent concerns about WhatsApp’s privacy-policy update and the shift to encrypted messengers like Signal show that users value privacy more than ever. Strong network effects and consolidation in social media, however, will make switching to other services difficult. Facebook in particular has mastered locking users into its vertical product line.

Looking Ahead

Disinformation and online extremism are certainly among today’s more pressing issues, and they constitute real wicked problems. The road ahead is bumpy, and finding attainable solutions will require collaboration between businesses, policymakers, and society. The two books discussed here helped me size up the issue, and I can recommend them to anyone interested in technology and/or policy.

There is definitely no silver bullet for solving the disinformation and online-polarization problem, but with joint effort some of the current symptoms can be alleviated. Importantly, policymakers will need to understand the underlying mechanisms at work, while big tech companies need to acknowledge the responsibility they bear.

A note about the author

My name is Jan, and after spending two years in Strategy & Operations at Airbnb, I’m currently studying Public Policy at University College London.

Big thanks to Andrew Bennett for his review and suggested improvements.
