Sock Puppets, by Jodi Michelle (CC BY-SA 2.0)

Sockpuppets and Spambots: How states manipulate social networks

Fake accounts aren’t just for scammers and advertisers, sometimes they can also be effective censors

Feb 5, 2016 · 8 min read


If you’re a user of a major social network, chances are that at some point you’ve been friended, followed or mentioned by a suspicious-looking account. They probably tried to sell you something or get you to click a link, and you were probably able to spot the fake and avoid the trap: it’s not tough to avoid eggs on Twitter spitting out nonsense content or bikini-clad strangers who add you out of the blue on Facebook. While you have likely seen these kinds of accounts before, what you may not know is the scale of the problem: Facebook reportedly has around 170 million known fake accounts, and Twitter may have as many as 20 million. Whole industries exist around creating and selling such accounts.

Even more interesting is that fake accounts aren’t just the domain of scammers and advertisers: increasingly, governments and state actors are also actively investing in sockpuppets and spambots.

Hold up: what’s a sockpuppet? Now is a good time to look at some simple definitions of key terms so that any jargon makes sense. Wikipedia is a good place to start, but it’s worth mentioning that these are general definitions — all of these terms can be applied to different scenarios, but for the purposes of this post we’re only going to look at them in the context of social networks.

sockpuppet: A sockpuppet is an online identity used for purposes of deception. The term, a reference to the manipulation of a simple hand puppet made from a sock, originally referred to a false identity assumed by a member of an Internet community who spoke to, or about, themselves while pretending to be another person. The term now includes other misleading uses of online identities, such as those created to praise, defend or support a person or organization, to manipulate public opinion, or to circumvent a suspension or ban from a website.

spambot: A spambot is a computer program designed to assist in the sending of spam. Spambots usually create accounts and send spam messages with them. Web hosts and website operators have responded by banning spammers, leading to an ongoing struggle between them and spammers in which spammers find new ways to evade the bans and anti-spam programs, and hosts counteract these methods.

astroturfing: Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participant(s). It is a practice intended to give the statements or organizations credibility by withholding information about the source’s financial connection. The term astroturfing is derived from AstroTurf, a brand of synthetic carpeting designed to resemble natural grass, as a play on the word “grassroots.” The implication behind the use of the term is that there are no “true” or “natural” grassroots, but rather “fake” or “artificial” support, although some astroturfing operatives defend the practice.

In recent years, sockpuppets, spambot armies and astroturfing have become valuable tools for governments to influence public opinion, stifle dissent, and spread misinformation. In researching the topic, we found reports of social network manipulation in some form by governments around the world, including the USA, Russia, China, Mexico, Syria, Egypt, Bahrain, Israel, and Saudi Arabia. This is by no means an exhaustive list (if readers have more suggestions, please add links as comments), and the extent and strategy of manipulation varies greatly.

To get an idea of what is possible, let’s look at two very different examples, observed in Mexico and Russia.

Mexico: Attack of the “Peñabots”

In Mexico, people have been using hashtags on Twitter to call attention to important political issues. Hashtags like #YaMeCanse and #SobrinaEBN quickly gather momentum and “trend” on Twitter, serving both as a high profile space to draw attention to public figures’ misdeeds and as a space for organizing.

Not long after the hashtags trend, however, something strange happens:

Thousands of users flood the hashtag, posting nonsense content or, in other cases, content that is suspiciously similar:

Image via: http://revolution-news.com/tracking-the-mexican-botnet-connecting-the-twitterbots/

These are hashtags that have come under attack. While it’s difficult to say with certainty who or what is behind the attack, the intent and effect are clear: spamming a hashtag en masse triggers Twitter’s anti-spam measures, dropping the hashtag from Trending Topics and making it very difficult for people to discover or meaningfully participate in it.
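To make that dynamic concrete, here is one crude way to quantify this kind of flooding: measure how similar the tweets within a hashtag are to one another. This is a minimal sketch for illustration, not the method used by any of the researchers cited in this post, and the 0.7 threshold is an assumption chosen for the example:

```python
# Illustrative sketch only: score a hashtag's tweets for copy-paste
# similarity. The threshold is an assumption for demonstration, not a
# figure from the research discussed above.
from itertools import combinations

def word_shingles(text, n=3):
    """Break a tweet into overlapping n-word chunks for fuzzy matching."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a, b):
    """Overlap between two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def near_duplicate_share(tweets, threshold=0.7):
    """Fraction of tweet pairs that are near-duplicates. Organic hashtags
    score low; coordinated floods of recycled text score high."""
    shingle_sets = [word_shingles(t) for t in tweets]
    pairs = list(combinations(shingle_sets, 2))
    if not pairs:
        return 0.0
    dupes = sum(1 for a, b in pairs if jaccard(a, b) >= threshold)
    return dupes / len(pairs)
```

Run against a handful of organic tweets and a handful of copy-pasted spam, the function returns a noticeably higher share for the latter; real detection systems use far more signals, but the intuition is the same.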

At the same time as real hashtags sink under the weight of spambots, other hashtags appear, pushing the critical hashtags further from public view. Researchers Alberto Escorcia and Erin Gallagher (both of whom have written extensively on this and other techniques observed on Mexican social networks) note that these new hashtags are fake, and are flooded with bot participants. Network analysis shows the clear difference between real hashtags (in this case #Acapulco) and fake hashtags (in this case #SoyAmanteDe, which appeared shortly after #Acapulco was attacked by bots):

A real hashtag (#Acapulco) — notice the number of connections between nodes. Images from: http://revolution-news.com/tracking-the-mexican-botnet-connecting-the-twitterbots/
A bot hashtag (#SoyAmanteDe) — notice the lack of networked activity.
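For readers who want to experiment with this kind of network analysis themselves, a minimal sketch of the underlying idea follows, written in Python with the networkx library. It is not the researchers’ actual pipeline, and edges_for() is a hypothetical helper standing in for whatever data collection you use:

```python
# Sketch of the idea behind the graphs above: organic hashtags form one
# densely connected conversation, while bot hashtags look like many
# isolated accounts shouting in parallel.
import networkx as nx

def hashtag_graph(interactions):
    """Build an undirected graph from (author, mentioned_user) pairs."""
    g = nx.Graph()
    g.add_edges_from(interactions)
    return g

def connectivity_report(g):
    """Crude connectivity signals for comparing two hashtags."""
    if g.number_of_nodes() == 0:
        return {}
    components = list(nx.connected_components(g))
    degrees = [deg for _, deg in g.degree()]
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "components": len(components),
        "largest_component_share": max(len(c) for c in components) / g.number_of_nodes(),
        "mean_degree": sum(degrees) / len(degrees),
    }

# Hypothetical usage (edges_for is assumed, not a real API):
# real = hashtag_graph(edges_for("#Acapulco"))      # expect one big component
# fake = hashtag_graph(edges_for("#SoyAmanteDe"))   # expect many fragments
# print(connectivity_report(real))
# print(connectivity_report(fake))
```

The metrics mirror the images above: an organic hashtag should yield one large connected component with a healthy mean degree, while a bot hashtag should fragment into many tiny, disconnected pieces.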

These fake accounts that flood hashtags have become known as “Peñabots,” a popular nickname tying these spambot armies to Mexican president Enrique Peña Nieto.

Russia: Social-savvy Sockpuppets

Over the past 12 months or so, there have been numerous reports written about sockpuppets and bot armies with seemingly strong links to the Russian government. From Lawrence Alexander’s excellent analysis and tracking of Russian botnets to Adrian Chen’s in-depth look at the activities of one so-called “troll factory” in St. Petersburg, it’s clear that there is much to be studied.

From Chen’s work, there emerges an interesting counterpoint to the bot-driven hashtag attacks observed in Mexico (as well as in Russia and elsewhere): elaborate misinformation campaigns that require significant human time and effort.

Chen tells of the panic and confusion in St. Mary Parish, Louisiana, when, on September 11, 2014, a local official received a text message informing him of an explosion and leaking toxic fumes at Columbian Chemicals, a chemical processing plant in the parish. On Twitter, word of the explosion was already spreading:

“A powerful explosion heard from miles away happened at a chemical plant in Centerville, Louisiana #ColumbianChemicals,” a man named Jon Merritt tweeted. The #ColumbianChemicals hashtag was full of eyewitness accounts of the horror in Centerville. @AnnRussela shared an image of flames engulfing the plant. @Ksarah12 posted a video of surveillance footage from a local gas station, capturing the flash of the explosion. Others shared a video in which thick black smoke rose in the distance. Dozens of journalists, media outlets and politicians, from Louisiana to New York City, found their Twitter accounts inundated with messages about the disaster.

In addition to screenshots of the news on CNN — reporting a possible attack by ISIS — shaky video emerged of an Arabic-language TV channel broadcasting an apparent claim of responsibility.

Spot the dodgy accent and, if you look closely, the disjointed Arabic on the screen: a poorly rendered fake.

As Chen notes, before presenting other similar examples:

The Columbian Chemicals hoax was not some simple prank by a bored sadist. It was a highly coordinated disinformation campaign, involving dozens of fake accounts that posted hundreds of tweets for hours, targeting a list of figures precisely chosen to generate maximum attention. The perpetrators didn’t just doctor screenshots from CNN; they also created fully functional clones of the websites of Louisiana TV stations and newspapers. The YouTube video of the man watching TV had been tailor-made for the project. A Wikipedia page was even created for the Columbian Chemicals disaster, which cited the fake YouTube video. As the virtual assault unfolded, it was complemented by text messages to actual residents in St. Mary Parish. It must have taken a team of programmers and content producers to pull off.

Sockpuppets, Censorship and Source Verification

Beyond understanding how networks can be manipulated in the ways these examples show, it’s important to think about the wider impact of such interventions and what we can do to challenge them.

In the Mexican example, the answer to the question “why” is simple: hashtags are attacked to limit the public’s ability to hold public figures to account. This is censorship. Other bot hashtags are created to smear individuals and movements. This is disinformation.

Asking the same question of the Russian example, however, is less straightforward. Why go to such great lengths to create, coordinate and promote this kind of fake news? Ethan Zuckerman, Director of the Center for Civic Media at MIT, frames these manipulations as “a new chapter in an ongoing infowar,” the latest in a long line of efforts by states to produce news that reflects their viewpoint and strategic priorities.

Who benefits from doubt? Ask instead who benefits from stasis. […] It’s expensive to persuade someone to believe something that isn’t true. Persuading someone that _nothing_ is true, that every “fact” represents a hidden agenda, is a far more efficient way to paralyze citizens and keep them from acting. (Ethan Zuckerman, 2015)

Elaborate disinformation campaigns, then, seek to undermine the credibility of social networks themselves — the very spaces where criticism of a government or public figure happens become the target of suspicion and doubt.

It’s worth noting that this skeptical mindset is something that groups working in verification have advocated for a long time, and it is perhaps the nurturing of this mindset and of digital media literacy that holds the key to derailing disinformation campaigns. Anyone in the Checkdesk partner network or the First Draft Coalition would quickly have found flaws in the Columbian Chemicals charade by analysing the sources of the disinformation and the problematic content.

In the coming weeks and months we’ll be conducting ongoing research into this topic area, looking specifically at what tools and techniques can be used to conduct source-level verification and tell the bots from the nots and the sockpuppets from the genuine accounts.
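As a preview of what source-level verification can look like in code, the sketch below scores an account against a few widely discussed red flags. The field names and thresholds are hypothetical assumptions for illustration only; any real workflow combines signals like these with human judgment:

```python
# Hedged sketch: heuristic red flags for account-level verification.
# Field names and thresholds are illustrative assumptions, not rules
# drawn from the research cited in this post.
from datetime import datetime, timezone

def bot_signals(account):
    """account: dict with 'created_at' (tz-aware datetime), 'tweets',
    'followers', 'following' and 'has_default_avatar'.
    Returns a list of human-readable red flags."""
    flags = []
    age_days = max((datetime.now(timezone.utc) - account["created_at"]).days, 1)
    if account["tweets"] / age_days > 100:
        flags.append("implausibly high posting rate")
    if account["followers"] < 10 and account["following"] > 500:
        flags.append("follows many accounts but is followed by almost none")
    if account["has_default_avatar"]:
        flags.append("default profile image")
    if age_days < 30:
        flags.append("account created very recently")
    return flags
```

No single flag proves an account is fake (sockpuppet operators know these heuristics too), but a cluster of flags across many accounts pushing the same content is a strong hint of coordination.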

--

Founder & COO @fathmco working on creative solutions to emergent challenges in journalism // Innovation through collaboration @popupnews