Online manipulation practices: astroturfing
Bad-faith actors use a variety of online manipulation practices to disrupt elections or damage the reputation of public figures. Some of these practices rely on traditional paid-for advertising. Yet, using the services of Google and Co. can be notoriously expensive. Services like AdSense or Facebook Pixel also leave an easily identifiable digital footprint. Instead, many manipulation campaigns rely on the seemingly organic spread of content — giving their message free exposure across social networks. How can bad-faith actors achieve this organic spread of content? One of the techniques is ‘astroturfing’.
What is astroturfing?
To understand astroturfing, let’s take a look at the original meaning of the word. ‘AstroTurf’ is a brand of artificial turf that has become a generic trademark: astroturf is any type of synthetic surface replicating the look and feel of natural grass. Over the past two decades, the term has increasingly been used as a verb — not in the context of sports surfaces, but politics. ‘Astroturfing’ is the attempt to replicate the look and feel not of natural grass, but of ‘grassroots’ social movements. This way, bad-faith actors can create the illusion of popular support for any political agenda. This phenomenon is not exclusive to the internet. Yet the logic of virality on most social media platforms has allowed bad-faith actors to radically expand their reach — at a low cost and low risk of detection.
Astroturfing is achieved through a variety of means, some more sophisticated than others. State actors will often use dedicated software to automate astroturfing strategies through the use of bots. These online campaigns can easily scale to influence the political discourse on social media. However, as bots are automated, their activity is often detected by platforms using their own automated verification tools.
Unfortunately, bad-faith actors are not dependent on automated bots for astroturfing. Anyone with basic skills can register multiple email accounts and create multiple online profiles by hand. If this is done at scale, these user profiles can coordinate their online activity for maximum impact. This tactic is extremely common among far-right groups. Ahead of the German parliamentary elections in 2017, these ‘troll armies’ engaged in so-called ‘raids’ or ‘memetic warfare’ by getting anti-Merkel and pro-AfD hashtags trending on Twitter or by flooding comment sections on Facebook. These ‘troll armies’ often evade automated detection because their activity resembles that of regular users.
What is the goal of astroturfing?
Astroturfing can be a means to a variety of different ends. A well-known example is the American tobacco industry’s effort to overturn consumer protection regulation. By founding the National Smokers Alliance in the 1990s, the industry sought to replicate the grassroots mobilisation strategies of its civil society adversaries. These analogue efforts have inspired more sophisticated campaigns — supercharged by the internet.
The Mueller report, for example, illustrated how an army of inauthentic accounts run by the infamous Russian Internet Research Agency (IRA) spread inflammatory content on social media. The IRA created Facebook groups with misleading names such as “Tea Party News”, “Black Matters”, “LGBT United” and “United Muslims of America”. The goal was to mimic grassroots support for Trump’s agenda while fostering widespread suspicion of Clinton — particularly among minorities.
States are not the only actors to have realised the potential of astroturfing in the era of social media. Nor are the motivations always solely ideological. Astroturfing is increasingly provided as a commercial service. In May 2019, Facebook detected an Israeli network of inauthentic accounts running multiple astroturfing campaigns in Africa, Latin America and Southeast Asia. According to the Atlantic Council’s Digital Forensic Lab, the Archimedes Group, a political marketing firm, created inauthentic accounts and Facebook pages en masse. These pages styled themselves as news outlets, fact-checking organisations and grassroots political campaigns, relying on inauthentic accounts to foster engagement.
Astroturfing has also made its way into popular culture. The supermodel Bella Hadid was the subject of a social media harassment campaign after posting an image on Instagram that some saw as an insult to Arab countries. Thousands of accounts targeted Hadid and brands associated with her, demanding they drop Hadid as a model or face a boycott. The tech start-up Astroscreen found that around 25% of this activity came from inauthentic accounts. These accounts repeated the exact same phrases, and news websites often quoted — and by extension amplified — their message. Once the campaign against Hadid subsided, the same accounts attacked other pop icons. Among them, Dua Lipa was targeted for sharing a petition calling for the release of imprisoned female activists. Nicki Minaj was also attacked for cancelling a planned concert in Jeddah, citing political concerns.
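The signal described here — many distinct accounts posting the exact same phrase — lends itself to a simple detection heuristic. The sketch below is a minimal illustration of the idea, not Astroscreen’s actual method; the function name, threshold and example posts are all invented:

```python
from collections import defaultdict

def find_copypasta(posts, min_accounts=3):
    """Group posts by normalised text and return phrases that appear
    verbatim across at least `min_accounts` distinct accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        # Normalise lightly so trivial case/whitespace changes still match.
        accounts_by_text[text.lower().strip()].add(account)
    return {text: accounts
            for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

# Invented example data: three accounts repeating one identical phrase.
posts = [
    ("user_a", "Drop her now or face a boycott!"),
    ("user_b", "Drop her now or face a boycott!"),
    ("user_c", "drop her now or face a boycott!"),
    ("user_d", "I actually liked the campaign."),
]
print(find_copypasta(posts))
```

Real coordinated campaigns vary their wording slightly, so production systems typically use fuzzy or near-duplicate matching rather than exact string equality.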
What can be done about astroturfing?
While most big online platforms have some kind of detection mechanism in place to identify coordinated inauthentic activity, Twitter has acknowledged it cannot tackle the problem by itself. On the other hand, some have suggested that Facebook might have a commercial interest in maintaining fake accounts on its platform — as a larger user base means greater advertising revenue.
Most platforms deploy a form of automated anomaly detection to filter inauthentic activity. Anomaly detection uses machine learning techniques to build a model of the ‘normal’ behaviour of a given system — anything from online traffic to a website to credit card spending habits. The company is then alerted whenever the system exhibits behaviour outside what the model considers ‘normal’.
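The core idea can be sketched with a basic statistical rule: learn what ‘normal’ looks like from historical data, then flag observations that deviate too far from it. A real platform would use far richer behavioural models; the z-score threshold, the posting counts and the function name below are all invented for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(history, current, threshold=3.0):
    """Flag values that deviate from the historical mean by more than
    `threshold` standard deviations (a basic z-score anomaly test)."""
    mu = mean(history)
    sigma = stdev(history)
    return [x for x in current if abs(x - mu) / sigma > threshold]

# Hypothetical hourly post counts for one account: a stable history,
# then a sudden burst of activity in the latest observations.
history = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
latest = [5, 4, 48, 6]

print(flag_anomalies(history, latest))  # only the burst of 48 is flagged
```

This is exactly why hand-run troll accounts are hard to catch: a human posting at a normal rate never produces the statistical outliers such a model is built to detect.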
However, as disinformation strategies such as astroturfing become more sophisticated, platforms and their detection tools are struggling to catch up. As described above, astroturfing driven by troll armies rather than bots is much harder to detect. What can be done against inauthentic accounts that manage to slip through the coarse-grained detection tools of the major platforms?
The recent trend from single-platform, spam-like ‘information warfare’ to targeted, cross-platform ‘narrative competition’ calls for a more granular approach to detecting astroturfing campaigns. While the platforms are trying to catch up with these developments, it is up to users themselves to remain vigilant when interacting with suspicious accounts on social media. At the same time, journalists and major broadcasters in the UK are beginning to hold political actors who engage in astroturfing to account. The backlash against the official Conservative Press Office’s decision to temporarily present itself as an ostensibly non-partisan fact-checking account is a good indication of the growing awareness of — but also the mainstreaming of — astroturfing campaigns.