Less deep, more fake: The battle over opinion-making and truth formation on social media in times of the coronavirus
According to studies*, people today learn about the world primarily through social media: the middle-aged on Facebook, younger users on Instagram, and the very young on Snapchat, TikTok and Twitch. From these portals, and especially from the friends who are active there, they draw their insights about the world. This creates an immediate world: a world they trust, a world they share with their narrow peer group. But this world is neither warm nor safe. It is shaken by threats that many of these users are not even aware of, and that will grow much stronger in the future.
The manipulation of social media is one of the biggest media policy issues we currently face, and it therefore urgently needs more analysis and research.
Through my previous work as a senior editor in the field of user engagement and my voluntary work with young people in a township in South Africa, I have learned that “the” one world no longer exists. Individualisation, digitalisation and, not least, marginalisation create many worlds, alternative truths and false realities.
There is no social security on social media anymore
Today’s media users are increasingly withdrawing into a supposedly private sphere: they set their Instagram profiles to private, practise cocooning, and trust traditional media less and less. But this perceived security is deceptive. It is precisely here that a consortium, an involuntary alliance of political actors, state autocrats and individual marginalized trolls, attacks: it harvests what feels private and instrumentalizes it for its own purposes.
The main threats today are fake news (stories deliberately disseminated with the intention to manipulate); junk news (sensationalist content likewise intended to disinform); data-driven advertisements that use microtargeting to deliver very specific messages to very specific audiences only (such as the US election ads on Facebook); and bots and other social agents used by countries such as China and, above all, Russia to demoralize and, in the worst case, instrumentalize “opposing” populations in a targeted manner. Added to this are individual actors such as trolls who, through harassment and hate speech, make the Internet and social media exactly what their name says they should not be: an antisocial place.
We are fast; our counterparts are often faster
Against these threats stand countermeasures that have been implemented in different countries with varying intensity: state-monitored data protection (see the GDPR, known in Germany as the DSGVO) and other state regulation such as penalties for the deliberate distribution of fake news and obligations to make advertisements transparent and traceable; journalists and research networks acting as watchdogs; automated monitoring and reporting portals; and, not least, AI-supported checking of potential fake or deepfake news on social media.
The problem: government regulation can quickly become over-regulation. Authoritarian regimes and even private trolls show a steep learning curve (“Smartpurge”, a tool we used at my last employer to monitor social media, can hardly keep up with identifying problematic terms). Data aggregators like Facebook are not really committed to real information, since junk news generates more interaction and Facebook ads are also paid by reach. And the focus lies on the symptoms and outcomes of manipulation instead of on the root causes of the problem: the interference of anti-democratic states in our democracy and their influence on our citizens, manipulation by political parties to win voters, a lack of media literacy, and the marginalization and perceived dependence of large parts of the population.
Confirmation bias goes viral
Here is an example of young people’s lack of media literacy. It is a paradox: according to studies, people are online more often and for longer, yet they still find it difficult to verify information and to distinguish fake from real news. Research shows that in some countries’ elections, junk news was shared more often than real news and generated greater interaction. This is also because people, partly due to marginalization and dependence, WANT these things to be true; and the more often something is shared, the truer it appears. Confirmation bias at work.
Many young people in my township in South Africa, for example, currently believe that the current corona crisis was deliberately initiated by the Chinese leadership in order to decimate China’s oversized population. This perceived news is then actively shared because it confirms their own marginalized world view in an unholy trilogy: the state is against us, we have experienced that ourselves, so it must be true.
So there is still much to be done: for individual states, for data aggregators like Facebook, for educational institutions, and for every single one of us. Let’s hope that in the future we will be deeper and less fake than our counterparts.
* All cited studies are from: Bradshaw, Samantha: Social Media Manipulation: Algorithms, Bots and Computational Propaganda, https://www.slideshare.net/CmpfEui/social-media-manipulation-algorithms-bots-and-computational-propaganda