Sockpuppets, Secessionists, and Breitbart

How Russia May Have Orchestrated a Massive Social Media Influence Campaign

By Jonathon Morgan and Kris Shaffer, with support from C.E. Carey, Wendy Mak, and Alex Amend

While the FBI looks into the Trump campaign’s ties to Russia and the role of far-right outlets in manipulating the media ecosystem, the Senate Intelligence Committee investigates Russia’s use of paid trolls and bots, and former high-ranking Trump administration officials offer testimony in exchange for immunity, new evidence points to a highly orchestrated, large-scale influence campaign that infiltrated Twitter, Facebook, and the comments section of Breitbart during the run-up to the 2016 election. Tens of thousands of bots and hundreds of human-operated fake accounts acted in concert to push a pro-Trump, nativist agenda across all three platforms in the spring of 2016. Many of these accounts have since been refocused to support US secessionist movements and far-right candidates in upcoming European elections. Both movements and candidates have strong ties to Moscow, suggesting a coordinated Russian campaign.

Evidence of Infiltration

Between April 2016, just prior to Trump clinching his party’s nomination, and July of that year, just before Steve Bannon left Breitbart News to become the Trump campaign’s chief executive, the discussion in conservative Twitter communities, on the Trump campaign’s Facebook page, and in Breitbart’s comment section suddenly and simultaneously changed.

The evidence for this change is in subtle shifts in the comments and tweets posted in each community, which show that all three adopted eerily similar language during the same period of time. Normally, even in different groups with shared beliefs — like conservative communities on Twitter and Facebook — platform constraints, such as Twitter’s 140-character limit, and the endless variations in millions of sprawling conversations by many thousands of different users result in language that is platform- and community-specific.

It’s possible to measure how each community’s language is unique by algorithmically breaking down the content the community puts online. The structure of the sentences in each batch of tweets, comments, or Facebook posts reveals words that the community uses in uncommon ways. For example, the word “Jewish” is normally used to describe religion — so in sentences from mainstream news articles, it’s most often used in a similar way to other words that describe religion, like “Christian” or “Muslim.” But not every online community uses the word “Jewish” for this purpose. In tweets published by followers of the so-called “alt-right,” for example, “Jewish” is instead used in a way that’s more like “satanic” and “homosexual,” because in that community, “Jewish” is an epithet.
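One common way to make this kind of comparison is with word embeddings, which represent each word as a vector based on the words that typically surround it. Below is a minimal sketch of the idea using gensim’s Word2Vec; the file names, tokenizer, and hyperparameters are illustrative stand-ins, not the exact pipeline behind this analysis.

```python
# A sketch of comparing word usage across corpora with word embeddings.
# Assumes two plain-text files, one message per line; names are placeholders.
import re
from gensim.models import Word2Vec

def load_tokenized(path):
    # Naive tokenization; a real pipeline would handle URLs, @-mentions,
    # and hashtags more carefully.
    with open(path, encoding="utf-8") as f:
        return [re.findall(r"[a-z']+", line.lower()) for line in f]

def train_embeddings(path):
    return Word2Vec(load_tokenized(path), vector_size=100, window=5,
                    min_count=5, workers=4)

mainstream = train_embeddings("mainstream_news.txt")
community = train_embeddings("community_tweets.txt")

# A word's nearest neighbors show the company it keeps in each corpus:
# religion terms in mainstream text, epithets in extremist communities.
print(mainstream.wv.most_similar("jewish", topn=5))
print(community.wv.most_similar("jewish", topn=5))
```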

The difference between how a word is used in a given online community, compared with how it’s used in mainstream language, is that word’s novelty. Novel words in any community are usually distinct, but in the spring of 2016, the most novel words across these major online communities started to overlap. Instead of many thousands of unique, individual voices, it was as if one voice became dominant.

All of the roughly 500,000 different words in posts by conservative Twitter users, commenters on the Trump campaign’s Facebook page, and commenters on Breitbart News articles were ranked on a monthly basis according to their community-specific novelty—put another way, this measured how differently each community used these words compared to mainstream language. Each month, the top 1% most novel words in each community were compared to one another. Even though each community’s novel words were completely different in January, February, and March, in April, May, June, and July the novel words across all three communities were suddenly in sync.
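There is more than one way to turn that difference into a number. The sketch below takes a simple approach, assuming embedding models like the ones trained above: a word’s novelty is measured by how little its nearest neighbors in the community model overlap with its neighbors in the mainstream model, and each month’s top 1% of words by that score form the set compared across communities. (The HistWords project referenced at the end of this piece describes a more rigorous approach based on aligning the vector spaces.)

```python
# A sketch of one possible novelty score; k and frac are illustrative.
def novelty(word, community_model, mainstream_model, k=25):
    if word not in community_model.wv or word not in mainstream_model.wv:
        return 0.0
    comm = {w for w, _ in community_model.wv.most_similar(word, topn=k)}
    main = {w for w, _ in mainstream_model.wv.most_similar(word, topn=k)}
    # 1.0 means the word keeps entirely different company in the community.
    return 1.0 - len(comm & main) / len(comm | main)

def top_novel_words(community_model, mainstream_model, frac=0.01):
    shared = [w for w in community_model.wv.index_to_key
              if w in mainstream_model.wv]
    ranked = sorted(shared, reverse=True,
                    key=lambda w: novelty(w, community_model, mainstream_model))
    return set(ranked[: max(1, int(len(ranked) * frac))])

# Overlap between two communities' monthly top-1% sets; a sudden,
# simultaneous jump is the synchronization described above.
twitter_novel = top_novel_words(community, mainstream)
# breitbart_novel = top_novel_words(breitbart_model, mainstream)
# print(len(twitter_novel & breitbart_novel))
```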

The words themselves range from predictable far-right topics, like “Milo” (Yiannopoulos, an “alt-right” internet celebrity) and “Warren” (as in Senator Elizabeth Warren), to seemingly innocuous words like “hybrid,” “division,” and “norm.” The fact that these words had high novelty scores in each community at the same time strongly suggests that the sentences were written by a single author, or a group of authors working from a shared messaging playbook.

The Bots and Sockpuppets Impersonating American Conservatives

Two types of accounts attempted to manipulate the conversation in conservative online spaces throughout the 2016 election: automated accounts, also known as “bots,” and human-operated fake accounts, also known as “personas” or “sockpuppets.”

Bots

The Trump campaign’s “bot army” was well documented during the campaign, so it’s not surprising that these communities were besieged with a high volume of bot activity. It is difficult to know exactly which accounts are bots and which are not, but there are some telltale signs, specifically that an account posts messages that are exact copies of messages that it, or other users, already published, or that are identical except for a substituted link or hashtag. Bots also frequently recycle profile images stolen from legitimate users on the web.
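For readers who want to experiment, here is a minimal sketch of that duplicate-content heuristic. It assumes a list of (account_id, text, is_retweet) records; the field names are placeholders.

```python
# A sketch of flagging probable bots by duplicated content.
import re
from collections import defaultdict

def normalize(text):
    # Strip URLs and hashtags so that messages differing only by a
    # substituted link or tag collapse to the same fingerprint.
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"#\w+", "", text)
    return " ".join(text.lower().split())

def flag_probable_bots(records, min_other_copies=4):
    posters = defaultdict(list)  # fingerprint -> account of each posting
    for account_id, text, is_retweet in records:
        if not is_retweet:  # ordinary retweets are not evidence of automation
            posters[normalize(text)].append(account_id)
    flagged = set()
    for accounts in posters.values():
        # A tweet identical to four or more other tweets flags every
        # account that posted it.
        if len(accounts) > min_other_copies:
            flagged.update(accounts)
    return flagged
```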

Using these criteria, there were at least several thousand bots operating in conservative Twitter communities in 2016. Of these, 1,314 posted at least one tweet whose content was identical to that of four or more other tweets (not including retweets) in the dataset. These accounts are almost certainly automated, and there are likely many more bots that were less obvious in how they disseminated propaganda throughout the community.

A network of more than 1,300 bots is very large, but it pales in comparison to the nearly 30,000 bots discovered posting duplicate content throughout the comments on Trump’s Facebook page. While these accounts made up only around 2% of those active between March and August 2016, and produced only about 5% of the content, operating a botnet of this size is typical of what analysts call a “state actor” ― in other words, a government.

Sockpuppets

However, while bots were a significant component of the influence operation, and garnered most of the media attention during the campaign, the sudden shift in language during the spring was initiated by fewer than 400 human-operated sockpuppet accounts on Twitter and Facebook, along with 800 commenters on Breitbart News articles. In each community, members of this small group of sockpuppet accounts were the first to introduce novel language to the group.

Sockpuppet language was significantly more aggressive in its support for Trump than language from other users in the same community. For example, sockpuppet accounts were six times as likely to compare Hillary Clinton to Adolf Hitler by referring to her as “Hitlery,” and twice as likely to describe Clinton as a “criminal” or bring up her now infamous “emails.” And while typical users made liberal use of the exclamation mark, averaging almost one per message, sockpuppets were even more enthusiastic, using exclamation points at more than double the rate of normal accounts. Messages like “Donald Trump For President!!!!!!!!!!!!!!” (14 exclamation points!) were not uncommon.
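Comparisons like these boil down to simple usage rates. A minimal sketch, assuming two lists of raw message strings, one from suspected sockpuppets and one from ordinary users in the same community:

```python
import re

def rate_per_message(messages, pattern):
    # Average number of matches per message.
    hits = sum(len(re.findall(pattern, m, re.IGNORECASE)) for m in messages)
    return hits / max(1, len(messages))

def usage_ratio(sockpuppet_msgs, baseline_msgs, pattern):
    # How many times more often sockpuppets use the pattern than others do.
    base = rate_per_message(baseline_msgs, pattern)
    sock = rate_per_message(sockpuppet_msgs, pattern)
    return sock / base if base else float("inf")

# Illustrative calls, with `socks` and `others` as lists of strings:
# usage_ratio(socks, others, r"\bhitlery\b")  # ~6x in this analysis
# usage_ratio(socks, others, r"!")            # >2x the exclamation rate
```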

These sockpuppet accounts were also more active than regular users, averaging roughly twice as many posts per day, though still about 10% fewer than bots.

More telling is that most Twitter sockpuppet accounts were created between 2010 and 2012 (corresponding data on Facebook and Breitbart commenters is not available), but either sat dormant for years or deleted all their previous tweets before a flurry of activity when the propaganda campaign was launched in March 2016.
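That dormancy pattern is straightforward to flag programmatically. Here is a minimal sketch, assuming each account record carries a creation date and the dates of its visible tweets; the campaign start date matches the analysis above, but the one-year dormancy threshold is an illustrative choice.

```python
from datetime import date

def looks_purchased(created, tweet_dates,
                    campaign_start=date(2016, 3, 1),
                    min_dormant_days=365):
    # An old account whose visible history only begins (or resumes) when
    # the campaign launches fits the purchased-account profile described below.
    if not tweet_dates:
        return False
    first_visible = min(tweet_dates)
    long_gap = (first_visible - created).days >= min_dormant_days
    return long_gap and first_visible >= campaign_start

# e.g. looks_purchased(date(2011, 6, 1), [date(2016, 3, 14)]) -> True
```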

According to a government source familiar with information operations doctrine who asked to remain anonymous, seemingly dormant accounts that spring to life with a large number of messages were most likely purchased. Analysts assume that bot-network operators prefer older accounts because they’re less likely to be algorithmically banned by their chosen platform ― like Twitter or Facebook, both of which aggressively police spammers. The theory is that the bot doesn’t have to parcel out activity, like retweets, replies, and follows, in order to appear human and avoid detection. Instead it can immediately participate fully in a community without navigating the complex process of re-authorizing or recreating banned accounts.

Fortunately for spammers and propagandists, it is surprisingly easy to purchase social media personas for the purposes of orchestrating messaging campaigns that appear to originate from real human beings. Russian sites like BuyAccs (“buy accounts”) sell bots to anyone with enough digital currency to pay for them (the site accepts “Bitcoin, Perfectmoney, Yandex Money, Kiwi, and about 30 payment systems through Unitpay.ru”). Twitter personas associated with accounts created between 2008 and 2009 can go for less than $1, depending on volume. Bot operators then obfuscate their tactics by running small subsets of persona accounts through a sprawling network of different IP addresses, each masked by software that redirects web requests through a series of intermediate connections called “proxy servers.”

This strategy is so effective that malware toolkits have emerged to more easily build networks of social media bots, and ultimately monetize their use. One such toolkit, Linux/Moose, is rarely discussed but well known to network security companies. It, and others like it, may have been used during the 2016 elections, and will make it easier for less sophisticated organizations to mount these types of attacks in the future.

These tactics and technologies make it incredibly difficult for social media platforms, internet service providers, and law enforcement to connect sockpuppet accounts to their human operators. So difficult, in fact, that reportedly no agency within the US government has the technical capability and accompanying authority to detect or defend against these types of influence operations. According to analysts close to the problem inside government, “Nobody knows, and help is not on the way.”

The Sockpuppet Agenda

The large majority of both sockpuppets and bots on all three platforms were emphatically supportive of Donald Trump’s candidacy. The sentiment expressed in the following messages was common among sockpuppet accounts:

“We love you and are behind you all the way.. Defeat crooked Hillary and end corruption in Washington !!! God Bless Trump.”

“TRUMP IS PATRIOT LOVE PEOPLE ,LOVE USA!HE HONEST,HE WILL FIGHT FOR EACH VOTE COUNT ,HE WILL NEVER,EVER SURRENDER TO CORRUPT LIAR SOCIALIST COMMUNIST HILLARY CLINTON!!!CHRIST HAVE BALL TO ASK TRUMP AND NO TO CROOK CLINTON!!!VERY SHAME!!!”

“keep fight Mr Trump!!we no need this criminal women!!!THE ELECTION IS CORRUPT LIAR BY THE MEDIA AND CLINTON!!!THIS IS NOT DEMOCRACY! IS CORRUPTION!!!!!”

Among bots, the message was more coherent, often with specific calls to action, but similar in theme:

“Never again will we get this chance to vote for a candidate NOT controlled by the establishment!! He can clean up the dump that has become our government,,,BY the PEOPLE ,,,FOR the people…TAKE AMERICA BACK<<<VOTE TRUMP!! WE THE PEOPLE CAN DO THIS!~”

“So the biased media think they can turn us away by attacking Trump from all angles. They don’t realize that instead of making us weaker, they are making us stronger and more resolved to win. Everyone get out and vote and bring along as many friends and family as you can”

“***** ATTENTION TRUMP SUPPORTERS*** Red tide….TRUMP supporters wear a red shirt on election day …take pictures and post them on social media of people standing in lines to vote…optics to fight voter fraud …repost everywhere”

In addition to fervent pro-Trump support, the shift in community discourse driven by these sockpuppet accounts corresponds to a documented increase in anti-Semitic language on all three platforms. Previous analyses, published in the Washington Post, show a sharp spike in anti-Semitism starting in April 2016. In analyses published by The Atlantic in January 2017, a similar shift was shown to correspond with an uptick in users sharing articles from Breitbart and Infowars. Finally, analyses published by the Southern Poverty Law Center demonstrated a steady rise in anti-Semitism among Breitbart commenters, starting in 2014 and escalating throughout the 2016 election season.

In addition to Trump support and anti-Semitic, nativist rhetoric, both bots and sockpuppet accounts were more likely to discuss Russia than normal users. The bot accounts in particular mentioned Russia four times as often as other users, and were especially interested in countering the narrative that Russia was involved in swaying the election. For example:

“The news media blames Russia for trying to influence this election. Only a fool would not believe that it’s the media behind this”

“THEY ARE GRABBING AT STRAWS TRYING TO BLAME IT ON RUSSIA!”

“Russian want to be friends with the Americans rather than fight! With Love from Moscow !!!”

Secessionist Movements and Russian Influence

In spite of the bots’ insistence to the contrary, many signs point to Russia’s involvement in what looks to be a highly orchestrated influence campaign, conducted by fake social media accounts across multiple platforms. While many individuals and organizations used technology to try to influence public discussion during the election ― from the so-called “patriotic programmers” to fake news writers scamming readers for advertising revenue ― the consistency of tactics and message across these platforms, and the scale of the operation, suggest that a powerful, centralized decision-maker is behind the attack. While a number of countries are capable of mounting such an attack, Russia has openly supported other far-right, populist candidates, has ties to international white supremacist and neo-Nazi movements as well as the American “alt-right,” and is known to use deceptive techniques to disseminate propaganda on social media.

The secessionist connection is meaningful, because the same accounts that injected pro-Trump, anti-Semitic rhetoric into conservative American social media communities also heavily participate in the online networks of US secessionist organizations, which have a long history of explicit Russian support.

Secessionists believe that some US states, namely California or Texas, should leave the union and form independent countries. Traditionally these have been obscure, fringe movements, but Texas lawmakers who support secession nearly forced a vote on the issue at the 2016 Texas Republican Party convention, and there has been a recent surge of interest in secession among California liberals in the wake of Donald Trump’s presidential victory.

While organizers of both states’ major secessionist organizations claim growing popular support, their movements may be less grassroots than they first appear. In California, the movement is synonymous with Yes California, a separatist organization ostensibly making the case to liberals that the state should secede from Trump’s America. However, the leaders of that organization, Louis Marinelli and Marcus Evans, were both registered Republicans prior to forming Yes California. Marinelli, in fact, was a right-wing activist who lived in Russia for years before moving to California. Researcher Casey Michel has been actively documenting the ongoing relationship between Yes California’s leadership and Russia, including Marinelli’s participation in a secessionist conference hosted in Moscow, and the group’s new “embassy” in that city. That embassy, according to Snopes, is funded by a group associated with the Kremlin.

The connection continues on social media, where Yes California’s message is amplified by many of the same accounts that infiltrated conservative Twitter communities and promoted a pro-Trump, white nationalist agenda. The roughly 430 users who are active in both conservative and Calexit Twitter networks exhibit many of the same characteristics as the pro-Trump sockpuppet accounts. For example, even though the accounts were mostly created between 2012 and 2015, they were either dormant or deleted their previous tweets prior to a spike of activity in late spring and early summer of 2016.

The secessionist sockpuppet accounts were also twice as likely to refer to Hillary Clinton as a criminal, and 1.5 times as likely to bring up her email scandal, as the larger secessionist network.

While these sockpuppets do occasionally discuss secession, their primary function appears to be promoting the Trump presidency and its agenda.

In these examples, the sockpuppet accounts defend Trump against his perceived enemies in and out of government ― “subversives,” the “left-wing media,” and the so-called “Deep State” ― while also championing the President’s budget agenda.

“Deep State attacks @POTUS nonstop — @newtgingrich says must purge govt of saboteurs & subversives. He joins #Dobbs FBN7p #MAGA #TrumpTrain”

“Cutting government down to size — @POTUS budget making history. @SteveHiltonx joins #Dobbs FBN7p #MAGA #AmericaFirst”

“#LDTPoll: Do you believe the Deep State and the left-wing national media fully intend to subvert the Trump presidency?”

The Corruption of Social Media

A basic assumption of social media is that every account is tied to a single person or group ― someone, or some organization, that we can grow to understand, and even trust. During the 2016 election cycle, that was not the case. Hundreds of people operating fake accounts, amplified by tens of thousands of bots ― all seemingly with the same political and ideological agenda ― were effective enough at imitating the behavior of real human beings that they overwhelmed conservative American social media communities. This same network of bots and sockpuppets has now turned to defending Donald Trump’s legislative agenda, attacking his adversaries, and promoting other far-right populist candidates in Europe. And while tens of thousands of bots and sockpuppets can already influence an election, this number seems trivial compared to the truly massive botnets researchers have started to uncover. Take, for example, the enormous “Star Wars botnet,” which comprises 350,000 accounts, is controlled by a single operator, and currently sits idle, perhaps waiting to be deployed for the highest bidder.

Researchers have shown that the influence of these campaigns is largely unconscious. The human mind is malleable, and human memory is highly suggestible, which makes us especially vulnerable to large-scale disinformation campaigns. With enough repetition, particularly when information comes from a trusted source, claims become familiar. Once an idea is unconsciously familiar, it becomes more plausible, and if that plausible lie is shared by someone in our network, we’re more likely to believe it. On a small scale, this is the phenomenon colloquially referred to as “gaslighting” ― the process by which an abusive partner manipulates their victim with a slow, steady drip of partial truths, ultimately distorting the victim’s perception of reality. But when coordinated gaslighting is executed on a massive scale, it looks more like psychological warfare.

In the aftermath of the 2016 elections, the discussion has revolved almost exclusively around “fake news” ― as if our media platforms need to be defended against an external enemy. But the enemy is already inside. They behave like real people and ideological allies. They’re friends and followers of people we know and trust. They’re running a propaganda campaign right next to news about old friends and pictures of our kids.

Evidence of a massive, coordinated disinformation campaign, possibly connected with the Russian government, continues to mount. As described at length in the Intelligence Community Assessment made public in January, the campaign was intended to influence the political landscape, undermine social ties, and increase ideological polarization and distrust. Understanding this campaign and the impact it continues to have on US, British, and European politics is crucial to safeguarding our democracy.

For researchers: this technique of looking at semantic shifts in language over time is explained in detail by the HistWords project, published by Stanford’s NLP group. The technique of building a network from a small number of seed accounts is explained in a paper I co-authored with JM Berger for the Brookings Institution in 2015.
