On October 17, Twitter released an archive of over ten million tweets posted by accounts from 2013 through 2018. Of the total, over nine million tweets were attributable to 3,800 accounts affiliated with the Internet Research Agency, also known as Russia’s infamous St. Petersburg troll factory. Over one million tweets were attributable to 770 accounts originating from Iran.
Each set is included in the same archive; however, because the actors and activity were separate, we analyzed them separately.
In an effort to promote shared understanding of the vulnerabilities exploited by various types of online influence operations, as well as social media’s role in democracy, @DFRLab had a brief advance opportunity to analyze the nearly complete archive.
In a release, Twitter noted:
In line with our strong principles of transparency and with the goal of improving understanding of foreign influence and information campaigns, we are releasing the full, comprehensive archives of the Tweets and media that are connected with these two previously disclosed and potentially state-backed operations on our service.
Most of the accounts disclosed had been reported or archived before.
An enormous amount of work has already been done by researchers, journalists (examples here and here), policymakers, and Twitter itself. With regard to broader understanding of the IRA, much credit is due to investigative journalists in Russia, who reported on its purpose and operations well before it caught international attention. The same can be said of FireEye’s analysis of Iranian websites and associated social media activity.
What sets this archive apart is Twitter’s consolidation and release of all accounts the platform maintains high confidence are associated with the Russian Internet Research Agency and separate Iranian accounts.
These are the seven most important points to know about the Russian and Iranian troll farm operations. Parts two, three, and four of this series take deep dives into each troll farm, their impact, and implications.
1. All Content Points Home
Both troll operations put their governments’ needs first. Russia’s troll operation primarily targeted Russian speakers, while Iran’s focused on pushing regime messaging abroad by promoting aligned websites.
The Russian troll farm posted significantly more in Russian than in English, especially from late 2014 to early 2015, when Russia was fighting an undeclared war in Ukraine and facing anti-corruption demonstrations at home.
The Russian operation’s subsequent use of English-language posting showed how a capability designed for domestic influence could be turned abroad.
2. Multiple Goals
The Russian operation had multiple and evolving goals. One main purpose was to interfere in the U.S. presidential election and prevent Hillary Clinton’s victory, but it was also aimed at dividing polarized online communities in the U.S., unifying support for Russia’s international interests, and breaking down trust in U.S. institutions.
3. Community Targeting
Both operations targeted highly engaged, highly polarized online communities, especially in the United States. The Russian operation attempted to infiltrate and polarize them, while the Iranian operation tried to message them.
Any attempts to increase domestic resilience should prioritize working with such communities.
4. Equal-Opportunity Troll Farms
The Russian trolls were non-partisan: they tried to inflame everybody, regardless of race, creed, politics, or sexual orientation. On many occasions, they pushed both sides of divisive issues.
“Mass shooting occurs even in #GunFreeZones so people is the problem not guns #Prayers4California” ( @micparrish, December 3, 2015)
“mass shooting wont stop until there are #GunFreeZones #Prayers4California” (@LazyKStafford, December 3, 2015)
It is vital to recognize this factor to end the partisan perception that Russian influence operations focused on one side of the political spectrum. Their focus shifted over time, or at specific moments, based on the target audience.
5. Targets of Opportunity
The Russian trolls often chose targets of opportunity, especially elections and terrorist attacks, in their attempts to interfere in local politics. This included promoting anti-Islam hashtags after the Brussels terror attacks, a pro-Leave hashtag on the day of Britain’s Brexit referendum, and leaks targeting French President Emmanuel Macron before his election.
These opportunistic attacks had little to no discernible impact on the target populations’ political behavior, indicating the limitations of online troll operations.
6. Trial and Error
Both troll operations evolved, apparently through a process of trial and error in content and messaging. Their activities in 2014 were different from their activities in 2018.
Countermeasures will have to take further evolution into account.
7. Low Impact
Other than in the United States, the troll operations do not appear to have had significant influence on public debate. There is no evidence to suggest that their social media posts alone triggered large-scale changes in political behavior.
The Russian and Iranian troll farm operations show that American society was deeply vulnerable, not to all troll farm operations, but to troll accounts of a particular type. That type hid behind carefully crafted personalities, produced original and engaging content, infiltrated activist and engaged communities, and posted in hyper-partisan, polarizing terms.
Content spread from the troll farm accounts was designed to capitalize on, and corrupt, genuine political activism. The trolls encapsulated the twin challenges of online anonymity — since they were able to operate under false personas — and online “filter bubbles,” using positive feedback loops to make their audiences ever more radical.
The positive conclusion is that the trolls were less effective than may have been feared. Many achieved little or no impact, and their operations were washed away in the firehose of Twitter. The negative conclusion is that the most effective Russian trolls used exactly the techniques that drive genuine online activism and engagement. That made it much harder to separate them from genuine users. It will continue to do so.
Identifying future foreign influence operations, and reducing their impact, will demand awareness and resilience from the activist communities targeted, not just the platforms and the open source community.
Ben Nimmo is Senior Fellow for Information Defense at the Atlantic Council’s Digital Forensic Research Lab (@DFRLab).
Graham Brookie is Director and Managing Editor at @DFRLab.
Kanishk Karan is a Digital Forensic Research Assistant at the Atlantic Council’s Digital Forensic Research Lab (@DFRLab).
Follow along for more in-depth analysis from our #DigitalSherlocks.