Malicious Bots Actively Boosting the Far Right Ahead of EU Elections

by Emmi B, Alexander Reid Ross, and Sabrina Nardin

With weeks to go before the EU parliamentary elections, a vast network of bots has been discovered, comprising some 12% of the most active users behind hashtags associated with the far-right grouping in the EU parliament, Europe of Nations and Freedom.

Using a machine learning model that searches through hashtags for given parties, nations, and political groupings in the election, we calculated a bot-likelihood score for each account. Comparing the percentage of bots to non-bots by nation, hashtag, and political grouping reveals a startlingly high proportion, in excess of 1 in 10, for active accounts tweeting about the far-right political grouping in the elections.

Among the five countries we examined (UK, France, Germany, Italy, and Spain), the model found over 700 instances of bot-like accounts influencing content surrounding the EU parliament elections and over 125,000 instances of users following bot-like accounts. Across all parties in the five countries, the percentage of bot-like accounts was around 6%. Bots targeted Southern Europe the most, with Spain and Italy facing the highest percentages of bots.

Our findings are consonant with other recent studies; however, the proportions of bot activity have not been exposed until now, adding to an understanding of the scope and scale of bot activity.

Context

The three-day EU elections set to take place from May 23–26 have been called “Europe’s most hackable elections” due to the lack of upgraded security systems and vulnerability to disinformation.

Concerns of an upcoming far-right advance in the EU Parliament are also mounting, due to Belgium’s political crisis, brought on by right-wing dissent to the UN migration pact, as well as recent elections for Senate in the Netherlands, which propelled the far-right Eurosceptic party, Forum for Democracy, from no seats to a tie for the country’s largest party. The Low Countries’ political paroxysms also come on the heels of the “Yellow Vest” movement in France, which was promoted by some two thousand bots, according to a report by cyber security firm New Knowledge.

In 2018, Spanish daily newspaper of record El Pais conducted a study of the Italian elections, finding powerful evidence that indicated the involvement of Russian media in malicious bot activity. More recently, the company SafeGuard Cyber identified 6,700 “bad actors” disseminating propaganda to some 241 million users, and clearly linked to Russian sources.

Results

What is clear from our present findings is that far-right parties in the Movement for a Europe of Nations and Freedom (MENF) and the Alliance for Conservatives and Reformists in Europe (ACRE) are receiving the majority of artificial discourse manipulation by bots. The MENF includes the Austrian Freedom Party and Italy’s far-right The League (Lega), both of which are part of ruling coalitions in their respective countries.

While it is nearly impossible to determine with perfect certainty who controls a botnet using only the information available to researchers, the focus on bots supporting the far right and far left does mimic a strategy commonly pursued by Russian intelligence agencies. Both the Freedom Party and The League also have tight connections with Moscow, including cooperation agreements. Another MENF party, France’s National Rally (previously known as the Front National), has received extensive financial support from the Kremlin, and The League is accused of secretly plotting to collect the profits of an oil deal involving representatives close to Putin.

The far-right Alternative for Germany party also participates in the MENF European parliamentary group (called Europe of Nations and Freedom), and has recently come under fire for having members allegedly involved in “active measures”, clandestine activity carried out under Russian influence. In early April, documents appearing to be Russian strategy papers surfaced referring to German MP Markus Frohnmaier as potentially “absolutely controlled” by the Kremlin. Frohnmaier’s employee, Manuel Ochsenreiter, has also been implicated in an alleged false flag attack carried out by Polish fascists against a Hungarian cultural center in Ukraine at the behest of Russian agents.

The ACRE Eurosceptic grouping includes Poland’s ruling Law and Justice party, which has cozied up to The League in recent months, as well as the UK’s Conservative Party and the Brothers of Italy (Fratelli d’Italia), a far-right political party that emerged from the “post-fascist” National Alliance.

Social Media and Right-Wing Strategy

Throughout our findings, the far-right parties were better than other groupings at leveraging social media in general. While the far-right parties were crafting effective social media campaigns using clear hashtags such as #26maggiovotoLega (“Vote for Lega on the 26th of May”), and figures like Salvini, the current Italian Minister of the Interior from Lega, were effectively using targeted videos and Facebook content to generate support, many of the leftist and centrist parties seemed out of their depth. Salvini’s effective use of social media is clearest in a network graph of #26maggiovotoLega, in which his Twitter account is by far the most influential node in the huge model. The size of each user handle is determined by its (eigenvector) centrality in the network; as a result, all of the handles but his are too small to read. The different colors represent neighborhoods of connected accounts (modularity classes).
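For readers who want a sense of how such a graph is built, the sketch below shows one way to compute the two measures described above with networkx. The edge list is invented sample data standing in for (retweeter, original author) pairs harvested from #26maggiovotoLega, and the pipeline used to produce the published graph may differ.

```python
# Minimal sketch: build a retweet network for a hashtag and compute the two
# measures shown in the graph, eigenvector centrality (node size) and
# modularity communities (node color). The edge list is an invented sample.
import networkx as nx
from networkx.algorithms import community

retweet_edges = [
    ("user_a", "matteosalvinimi"),
    ("user_b", "matteosalvinimi"),
    ("user_c", "matteosalvinimi"),
    ("user_c", "user_a"),
]

G = nx.Graph()  # direction is dropped for these two measures
G.add_edges_from(retweet_edges)

# Node "size": eigenvector centrality.
centrality = nx.eigenvector_centrality(G, max_iter=1000)

# Node "color": modularity classes.
communities = community.greedy_modularity_communities(G)

most_central = max(centrality, key=centrality.get)
print(f"Most influential handle: {most_central} ({centrality[most_central]:.3f})")
print(f"Number of communities found: {len(communities)}")
```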

However, the social media influence of Salvini’s account is at least partially inflated by bots. The majority of the bots we found in the #26maggiovotoLega hashtag were created in May 2019 and almost exclusively retweeted everything that Salvini’s account tweeted. In two days, from May 13 to May 15, the total number of tweets from these accounts drastically increased (from 7 to 428, on average), whereas their number of followers remained stable.
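To illustrate how such a burst stands out, the sketch below compares two hypothetical snapshots of per-account tweet and follower counts. The values and thresholds are invented for the example rather than drawn from our dataset; in practice the counts would come from the Twitter API’s statuses_count and followers_count fields.

```python
# Sketch: flag accounts whose tweet volume explodes between two snapshots
# while their follower counts stay flat, the pattern described above for the
# #26maggiovotoLega bots between May 13 and May 15. Values are hypothetical.
import pandas as pd

may13 = pd.DataFrame({
    "account": ["bot_001", "bot_002", "human_001"],
    "tweets": [5, 9, 1200],
    "followers": [12, 8, 3400],
}).set_index("account")

may15 = pd.DataFrame({
    "account": ["bot_001", "bot_002", "human_001"],
    "tweets": [410, 446, 1235],
    "followers": [13, 8, 3410],
}).set_index("account")

tweet_growth = may15["tweets"] - may13["tweets"]
follower_growth = may15["followers"] - may13["followers"]

# Heuristic thresholds (illustrative assumptions, not our exact criteria).
suspicious = (tweet_growth > 100) & (follower_growth < 5)
print(may15[suspicious])
```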

Many left and centrist parties in all countries studied didn’t use hashtags at all; when they did, they often didn’t seem to understand how hashtags work, simply putting a hash sign in front of any word they thought was interesting.

That the Italian far-right is so keen on the use of social media is also reflected in Salvini’s (and his social media manager Luca Morisi’s) use of Facebook, YouTube, and Instagram for the electoral campaign. A clear example of this is the social media game “Win Salvini” (“Vinci Salvini”), launched on May 10. In the game’s promotional video, Salvini declares: “They are all against us, big newspapers, big professors, big intellectuals, analysts, and sociologists, but we use the internet, at least until they let it free. And we win online.” Further, Italy was the country most targeted by bots according to this research.

While accounts that are likely to be fully or largely automated (a bot-likelihood score greater than 0.7) made up over 5% of EU election tweets from the five countries studied, that is still a minority of posts.

But at the scale of social media, even 5% of tweets can have a monumental impact, especially when some elections are won by fewer than 50 votes.

The number of accounts that followed bots tweeting in EU parliament hashtags for our target nations is at least 125,000 people, almost the entire population of Cambridge.

Bot Typology

There are many different types of bots, but for our purposes, we focus on central and peripheral automated accounts. The central bot is used to direct the influence of entire networks of bots, known as a botnet. These bots will often have their posts liked and shared by other peripheral amplification bots to artificially increase their impact. Further, these amplification bots will often rapidly promote “spam” hashtags to increase the virality of that hashtag.
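One simple, illustrative way to separate a central account from its peripheral amplifiers is to measure how much of each account’s recent timeline consists of retweets of a single hub. The accounts, timelines, and 80% threshold below are assumptions for the sketch, not the criteria used in this study.

```python
# Sketch: label accounts as central vs. peripheral amplifiers based on how
# much of their recent timeline simply retweets one hub account.
# The timeline data and the 0.8 threshold are illustrative assumptions.
from collections import Counter

# account -> list of authors it retweeted (None = an original post)
timelines = {
    "hub_account":   [None, None, None, "news_outlet", None],
    "amp_account_1": ["hub_account"] * 48 + [None, None],
    "amp_account_2": ["hub_account"] * 45 + ["other_user"] * 5,
}

for account, retweeted in timelines.items():
    counts = Counter(author for author in retweeted if author is not None)
    total = len(retweeted)
    top_target, top_count = counts.most_common(1)[0] if counts else (None, 0)
    if top_count / total > 0.8:
        print(f"{account}: peripheral amplifier of {top_target}")
    else:
        print(f"{account}: central / original-content account")
```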

The Liberal-Konservative Reformer, a Eurosceptic conservative party, had a hashtag its followers were using, #LKR, hijacked by a botnet, though likely not intentionally. Fake accounts with attractive women as avatars rapidly posted cryptocurrency scam content geared toward Southeast Asians using the same hashtag. Most of these scammers had over 5,000 followers despite posting essentially the same content as one another. Accidental or not, this still boosts the virality of the hashtag through Twitter’s algorithms.
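Hijacks like this are often easy to surface because the spam accounts post near-identical text. The sketch below groups tweets by a crude normalized fingerprint; the handles and tweet texts are invented for the example.

```python
# Sketch: surface hashtag hijacking by grouping near-identical posts from
# different accounts under the same hashtag. The normalisation is a crude
# lowercase/strip-punctuation fingerprint; handles and texts are invented.
import re
from collections import defaultdict

tweets = [
    ("crypto_grl_01", "Earn 3 BTC a week!! DM me now #LKR #bitcoin"),
    ("crypto_grl_02", "Earn 3 BTC a week! DM me now!! #LKR #bitcoin"),
    ("lkr_supporter", "Our candidates for Europe have been announced #LKR"),
]

def fingerprint(text: str) -> str:
    # Strip punctuation, collapse whitespace, lowercase.
    return re.sub(r"\W+", " ", text).lower().strip()

groups = defaultdict(set)
for account, text in tweets:
    groups[fingerprint(text)].add(account)

for fp, accounts in groups.items():
    if len(accounts) > 1:
        print(f"Possible coordinated spam ({len(accounts)} accounts): {fp}")
```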

Some automated accounts, such as @DrTurleyTalks (utilizing many far-right EU parliament hashtags), are likely reflective of real people even though they use automated, bot-like behavior to reach, in this case, over 25k followers with “anti-cultural marxism”, “Christian-nationalism”, and “anti-globalist” content. We also detected many amplification bots that were clearly activated in order to manipulate this election: within days of our pulling tweets from them, their activity would explode from only a few tweets to heavy, active promotion and retweeting of election-related content for their particular affiliation.

This trend of far-right social-media manipulation found much of its power in the 2016 US presidential election in which Twitter acknowledged that it had discovered 36,746 accounts that generated automated, election-related content and were likely Russian in origin. Those accounts generated approximately 1.4 million automated, election-related Tweets, which collectively received approximately 288 million impressions. Discovering this didn’t stop the bot attacks though.

There are still numerous #MAGA (Make America Great Again) accounts that have huge numbers of mostly human followers despite being very likely bot troll accounts. For example, one account with 41k followers rapidly propagates pro-Trump slogans and conspiracies such as QAnon, and seeks to discredit mainstream media critical of President Trump. Another extremely bot-like account, with 21k followers, uses the threat of violence to advance its message and also appears to be at least partially an influence operation centered on former general and former National Security Advisor Mike Flynn.

How We Did It

Using code that we have made publicly accessible, we broke down the political groupings into the nations and political parties of their current members. Then, by looking at the pages of those parties and their supporters, we identified common hashtags related to those parties. We then pulled all of the recent tweets using those hashtags and sorted out the accounts that were using those hashtags the most.
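A minimal sketch of this first step is shown below, using Tweepy against Twitter’s standard search endpoint. The credentials, request limits, and hashtag are placeholders (assuming Tweepy v4 and standard v1.1 access), and the published code may be structured differently.

```python
# Sketch: pull recent tweets for a party hashtag and rank the most active
# accounts. Assumes Tweepy v4 with standard v1.1 search access; credentials
# and limits are placeholders.
from collections import Counter
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

hashtag = "#26maggiovotoLega"
activity = Counter()

for tweet in tweepy.Cursor(api.search_tweets, q=hashtag, count=100).items(2000):
    activity[tweet.user.screen_name] += 1

# The most active accounts are the ones later scored for bot-likelihood.
for handle, n_tweets in activity.most_common(25):
    print(handle, n_tweets)
```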

Using a machine learning algorithm with over 1000 different features, such as the timing of posting behavior, use of hashtags, favoriting behavior, and sentiment analysis, we calculated a bot-likelihood score for each of these top accounts. We then calculated the percent of accounts that were very likely to be bots for each hashtag, nation, and political grouping.
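As a rough illustration of these two steps (not the actual model, which uses over 1,000 features and its own training data), the sketch below trains a small feature-based classifier and then aggregates the share of likely bots per political grouping. The features, labels, account values, and 0.7 threshold are placeholders.

```python
# Sketch: a stand-in for the bot-likelihood model. The real model uses over
# 1,000 features; this toy version uses a few behavioural features and a
# random forest, then computes the share of likely bots per grouping.
# All values below are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Labelled training accounts (placeholder values).
train = pd.DataFrame({
    "tweets_per_day":   [450, 2, 300, 5, 600, 8],
    "hashtag_ratio":    [0.9, 0.1, 0.8, 0.2, 0.95, 0.15],
    "follower_ratio":   [0.05, 1.2, 0.1, 0.9, 0.02, 1.5],
    "account_age_days": [10, 2000, 25, 1500, 7, 3000],
    "is_bot":           [1, 0, 1, 0, 1, 0],
})

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train.drop(columns="is_bot"), train["is_bot"])

# Score the most-active accounts (placeholder values).
accounts = pd.DataFrame({
    "grouping":         ["MENF", "MENF", "ACRE", "ALDE"],
    "tweets_per_day":   [500, 4, 350, 6],
    "hashtag_ratio":    [0.85, 0.2, 0.9, 0.1],
    "follower_ratio":   [0.03, 1.1, 0.08, 1.3],
    "account_age_days": [12, 1800, 20, 2500],
})

features = accounts.drop(columns="grouping")
accounts["bot_likelihood"] = model.predict_proba(features)[:, 1]

# Share of accounts above the "very likely bot" threshold, per grouping.
likely_bot = accounts["bot_likelihood"] > 0.7
print(accounts.assign(likely_bot=likely_bot).groupby("grouping")["likely_bot"].mean())
```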

As a result of the current limits of machine learning in detecting bots, the biggest issue in this study is false negatives (though there are some false positives as well). In other words, the actual proportions of bots are likely to be higher. Machine learning is only good at finding what it has already seen, and the most capable attackers, now likely using tools like OpenAI’s GPT-2 model to generate original content, are constantly innovating to avoid detection. That makes bot detection an “adversarial” problem, in which detection and evasion constantly compete and raise the stakes. As a result, well-funded state and corporate actors seeking to manipulate elections will often be a few steps ahead of the state of the art in detecting them. After each election cycle, researchers gain access to more data and improve their methods, but meanwhile the bots become more and more effective at discourse manipulation.

What Can Be Done

While our findings may stoke fears, the majority of low-hanging bots are, in their current state, not very convincing. Many of them have under 100 followers because their content remains qualitatively boring. Someone familiar with the internet can typically spot, from a mile away, their excessive use of hashtags and their habit of tagging other related bots in posts lacking much original content. Mostly all they do is retweet each other and viral content coupled with scandalous pictures of women, but this often works to garner the attention of those least familiar with digital spaces. It is very common for these accounts to say things like “I follow back!” as a means of getting more followers and boosting their influence. However, these manipulations will continue to incorporate increasingly sophisticated mixtures of automation, alternative media echo-systems, and hand-tailored content to spread propaganda and confusion.

These attacks are both dispersed and orchestrated and they are happening across the world. We are entering a new era of information warfare in which our interconnectedness, at once a powerful tool for truth-seeking and freedom, will be more and more effectively wielded as a weapon of hybrid warfare. It’s therefore important not just to put pressure on our social-media providers to better moderate these threats and support researchers, but also to build users’ own skills of combating information deception. Critical elections such as that of the EU parliament that determine our collective futures depend on it.

Autonomous Disinformation Research Network

Anti-fascist research collective using data science and research to promote positive freedom and reveal disinformation operations.