Battle of the Botnets
President Trump’s fans and foes go head-to-head with Twitter bots
One of the most unsettling trends in the American political debate is the distortion of discussions on social media by automated accounts. This post details one such discussion on Twitter, and the types of account that distorted it.
On July 20, The New York Times (NYT) published an editorial calling U.S. President Donald Trump his “own worst enemy.” The phrase quickly began to trend on Twitter, provoking accusations that the platform itself was “colluding” with Trump’s opponents.
@DFRLab investigated the Twitter traffic to see whether there were any signs that it had been manipulated, especially by automated “bot” accounts, which can retweet other users’ posts without any human intervention.
The analysis shows that traffic around the phrase “Trump is his own worst enemy” was repeatedly distorted by clusters of automated accounts of various sorts.
One reason the phrase trended for hours was that users both for and against President Trump argued about it — highlighting the unintended consequences of attacking a trending topic.
Traffic on the phrase was moderate, reaching some 7,500 tweets during the day. Nonetheless, it makes an excellent case study in the different sorts of bots, and the ways in which they can distort how humans discuss issues on social media, in this case on Twitter.
The Battle of the Botnets began at 07:23:19 UTC (03:23:19 EDT) on July 20, soon after The New York Times published its article online, and almost twelve hours before The New York Times tweeted about it.
In the dark of the New York night, a slew of Twitter accounts began tweeting an identical message: the text “‘Trump Is His Own Worst Enemy’ by CHARLES M. BLOW via NYT.”
In the space of 17 minutes, 50 different accounts tweeted the exact same wording; this screen grab from a machine scan shows a sample:
The use of the phrase “via NYT” indicates these posts were shared directly from the New York Times website.
Many of the accounts also included the telltale URL shortener “ift.tt” in their tweets. This is a service run by IFTTT (ifttt.com), which allows users to automate their posts with an “if this, then that” routine: for example, “If a post contains a certain hashtag, retweet it.”
The combination of “via NYT” and the ift.tt web addresses suggests that these posts were automatically generated using a routine from ifttt.com — effectively, “If The New York Times posts an article, share it directly from the website.”
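As a minimal sketch of why such a routine produces identical tweets across many accounts (the function name and the shortened link are illustrative; the real accounts used IFTTT’s hosted service, not custom code), an auto-share recipe simply fills a fixed template from the article’s feed data:

```python
def format_autoshare(title: str, author: str, short_url: str) -> str:
    """Mimic an 'if this, then that' auto-share recipe: a new article
    in a feed becomes a fixed-template tweet, with no human wording."""
    return f"\u2018{title}\u2019 by {author.upper()} via NYT {short_url}"

# Every account running the same recipe emits the identical text,
# which is why 50 accounts tweeted the exact same wording.
# (The short link below is a placeholder, not a real ift.tt URL.)
print(format_autoshare("Trump Is His Own Worst Enemy",
                       "Charles M. Blow", "ift.tt/example"))
```

Any account subscribed to the same recipe posts the same string within moments of publication, which matches the 50 identical tweets seen in the scan.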
However, these accounts cannot be characterized as an anti-Trump botnet. They appear automated to tweet anything that various news outlets — not just The New York Times — publish, regardless of the actual content, from restaurant reviews to sports reports and eBay advertisements.
The most obvious examples are @Buuqye_Ziiqpu and @anthony7289u, each of which shared the post at 07:25 UTC, a few seconds apart. These are the most active Twitter accounts the author of this article has ever seen.
@anthony7289u was created on May 20, 2017, and by July 26 had posted 105,000 tweets — mostly shares via a mixture of websites, with Fox News, The New York Times, and eBay prominent among them. This equates to an average of 1,590 tweets per day, an inhuman rate of posting.
@Buuqye_Ziiqpu shared a similar style of posts, but was even more active. Created on June 12, 2017, it had tweeted 74,200 times by July 27, an average of 1,638 posts per day.
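The posting-rate arithmetic behind these figures is straightforward: divide the lifetime tweet count by the account’s age in days. A short sketch (the function name is mine, and exact figures vary by a few percent depending on how the partial first and last days are counted):

```python
from datetime import date

def tweets_per_day(created: date, observed: date, total_tweets: int) -> float:
    """Average posting rate over an account's lifetime."""
    days = max((observed - created).days, 1)  # guard against zero-day accounts
    return total_tweets / days

# @anthony7289u: created May 20, 2017; 105,000 tweets by July 26
rate = tweets_per_day(date(2017, 5, 20), date(2017, 7, 26), 105_000)
print(f"{rate:.0f} tweets per day")  # well over a thousand per day
```

A sustained rate in the hundreds, let alone the thousands, of posts per day is far beyond what a human operator can plausibly produce by hand.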
These are obvious bots, but they are not political bots; their main activity appears to be advertising eBay products.
Other apparent bots in this first wave of amplifiers appeared to have more specialized tastes. @Gaz_catt, the very first to post Blow’s article, shares articles from the New York Times, hbr.org, entrepreneur.com, and, unusually, twtd.co.uk, a fan account for UK soccer team Ipswich Town. In a separate scan of tweets posted from July 13 to July 27, this account posted an average of 49.9 tweets per day, all of them, apparently, shares from news sources.
The second account, @hershelsiv, shares articles from The New York Times, the Boston radio station WBUR.org, and U.S. public broadcaster NPR. As of July 26, all its most recent posts were shares via its usual websites, without comment. A machine scan of its tweets from July 13 to 27 shows that it posted an average of 212 times per day, the vast majority of them shares from news sites.
A cluster of these early posters shared the name “@RichTVX” plus the name of a country (“@RichTVXIndia,” “@RichTVXRussia,” or “@RichTVXLebanon,” for example). These largely share New York Times posts and Fox News alerts, often with little or no content relevant to the country name.
However, the “India” variant shared stories from The Times of India as well.
Overall, these bots do not appear to have had any political motivation. Some largely focused on the New York Times; others shared a wider variety of outlets. None, however, added their own comments to the articles shared (other than one that added the phrase “Hey, look at this article” to each post), or appeared to select the articles according to the subject matter.
Each one only tweeted the NYT article once. Their only effect, therefore, was to create a small spike in Twitter traffic in the early hours of the U.S. morning.
Anti-Trump bots: The October net
At 09:58 UTC, however, the editorial’s author, Charles M. Blow, tweeted his article to his 371,000 followers:
This was rapidly amplified, with the first retweet coming in less than a minute. This is fast, but not unusual, given the number of followers Blow has.
A more unusual move began half an hour later, at 10:20 UTC, when 16 accounts retweeted Blow’s post in the same minute. It is no coincidence that they did so simultaneously: these accounts regularly retweet the same posts at the same time, and many of them were even created on the same day.
On July 26, the accounts in this network retweeted, in the same order, the same articles from shareblue.com, crooksandliars.com, and dailykos.com.
This network can be thought of as the “October group.” Six of them were created on October 21, 2016, and three more were created on October 23. In the machine scan from July 13 to 27, they only posted small numbers of tweets (between 150 and 300), but over 99 percent of these were either retweets or simple shares from websites, and all were anti-Trump.
This has every indication of being a botnet created in late 2016 in order to amplify anti-Trump messaging. As the timeline given above shows, its presence caused a distinct spike in the overall traffic when they engaged.
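A minimal sketch of how such lockstep amplification can be surfaced in data (the function and record format are illustrative, not @DFRLab’s actual tooling): bucket retweets by target post and by minute, then flag buckets containing many distinct accounts.

```python
from collections import defaultdict

def coordinated_clusters(retweet_log, min_size=5):
    """Find groups of accounts that retweeted the same post in the
    same minute.

    retweet_log: iterable of (account, original_tweet_id, iso_timestamp)
    tuples, e.g. ("user_a", "12345", "2017-07-20T10:20:41").
    Returns sets of accounts acting in lockstep.
    """
    buckets = defaultdict(set)
    for account, tweet_id, ts in retweet_log:
        minute = ts[:16]  # truncate "YYYY-MM-DDTHH:MM:SS" to the minute
        buckets[(tweet_id, minute)].add(account)
    return [accts for accts in buckets.values() if len(accts) >= min_size]
```

On its own, one shared minute proves little; it is the repetition — the same accounts retweeting the same posts at the same times, day after day — combined with shared creation dates that marks a network like the October group.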
Send in the cyborgs
Over the following hours, traffic increased, consistent with the east coast of the United States coming online. Much of this traffic appears to have been organic, with minor peaks and troughs in activity, but with a general upward trend.
At 11:07 UTC, online monitor Trendinalia tweeted that the NYT piece had begun to trend, with 244 users and 246 tweets, 222 of them retweets.
The next spike came at 12:19, when Paul Joseph Watson (@prisonplanet), an editor of the U.S. alt-right site InfoWars, whose account has over 600,000 followers, accused Twitter of “collusion” with The New York Times to make the phrase trend. Watson’s post caused an immediate surge in traffic.
This need not be surprising, given the size of his following; however, analysis shows that the most significant spike — a jump at 12:20 — was supported by automated accounts.
One of the accounts to retweet Watson at 12:20 was @3XT1. Created in April 2010, this account had posted 283,000 tweets by July 26, at an average rate of 107 per day. As of July 26, every single one of its most recent posts was not just a retweet, but a double retweet: the same post retweeted twice, once with the word “Retweeted” added at the start.
In the July 13–27 scan, this account posted 4,525 tweets in two weeks (323 a day on average), and 98 percent were retweets. The other 2 percent appeared to be authored posts. This suggests that the account is a cyborg, largely automated to amplify pro-Trump messaging, but with an account holder who periodically engages in person.
Much the same can be said of @ResidentofFL, another of the accounts which retweeted Watson at 12:20 and which repeatedly shares pro-Trump material. Between July 13 and 27, this account posted 5,360 tweets (an average of 383 per day); 96 percent of them were retweets.
The other 4 percent were authored posts, suggesting that this, too, is a cyborg.
Yet another account to retweet Watson at 12:20 was @DRicebowl. Created on October 20, 2016, it had posted 44,500 tweets, and 120,000 likes, by July 26, the great majority of these pro-Trump. This equates to 160 tweets, and 430 likes, per day. In the two-week scan in July, over 99 percent of its posts were retweets. This is botlike behavior.
Accounts such as these appear to have contributed significantly to the spike at 12:20 UTC. As such, they also helped to keep the phrase “Trump is his own worst enemy” trending. However, they do not appear to belong to a single network, as the October group do; rather, they seem to share a common agenda: amplifying similar material.
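The bot/cyborg/human triage used above can be sketched as a simple rule on two figures from a scan: the posting rate and the retweet share. The thresholds here are illustrative, not formal criteria from this analysis (72 posts per day is a commonly cited benchmark for suspicious activity, and the 90/99 percent retweet cutoffs are my own reading of the cases above):

```python
def classify_account(total_posts: int, retweets: int, days_observed: int) -> str:
    """Rough triage of an account from scan figures (illustrative thresholds)."""
    rate = total_posts / max(days_observed, 1)
    retweet_share = retweets / max(total_posts, 1)
    if retweet_share >= 0.99 and rate > 72:
        return "likely bot"      # near-total retweets at inhuman volume
    if retweet_share >= 0.90 and rate > 72:
        return "likely cyborg"   # automated bulk, with some authored posts
    return "likely human"

# @3XT1's two-week figures: 4,525 posts, about 98 percent retweets
print(classify_account(4_525, int(4_525 * 0.98), 14))  # likely cyborg
```

By this rule, @DRicebowl (over 99 percent retweets at 160 tweets per day) lands on the bot side, while @3XT1 and @ResidentofFL, with their small authored remainders, land on the cyborg side.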
The final spikes
Traffic saw one more major spike before it began to decline. This was around 13:00 UTC (09:00 EDT).
Three tweets drove the majority of the traffic. In chronological order, they were a post from Blow at 12:59 UTC saying that his article was trending; a post from CNN commentator Keith Boykin at 13:04 praising Blow’s article; and a post from online gambling account @every1bets at 13:12 implicitly criticizing the article.
This time around, Blow’s tweet appears to have spread more organically. Most of the accounts that retweeted it early on show a relatively low rate of posting and a high proportion of authored tweets and replies, generally signs of human behavior.
Some do show bot-like tendencies, notably a retweet share above 90 percent of their posts, but their rate of posting is low, and they do not appear to have much content in common with each other, making identification as a bot or botnet ambiguous.
Much the same can be said of the amplifiers of Boykin, who has 81,800 followers. A few of his earliest retweeters have some botlike characteristics, with over 90 percent of their posts being retweets, all anti-Trump. However, the majority of the accounts that retweeted him have human behavior patterns.
In the case of these two posts, the bulk of activity appears to have been organic, and consistent with the time of day (around 9:00 EDT) and the size of their followings. Automated accounts may have played some role, but do not appear to have greatly distorted the traffic.
The outstanding account is @every1bets. The vast majority of its posts concern gambling (not surprisingly); despite this, its political tweet was retweeted 186 times in a little over six minutes, or approximately once every two seconds, causing the highest spike in traffic.
All the earliest accounts to retweet @every1bets show remarkably similar behavior patterns. All post in a mix of English, French, and Arabic, with some also adding Chinese and Spanish. All their content appears to be exclusively retweets, largely of a commercial nature, and they repeatedly share the same content.
This is classic botnet behavior — more specifically, the behavior of a commercial, international botnet. Such behavior highlights an online economy, not specific to any country, that can monetize trending conversations on social media, as @DFRLab has reported.
The surprising feature is that, on this occasion, the bots turned to a political subject, rather than the usual commercial ones; this may suggest that the amplification of the @every1bets tweet was paid for.
The Twitter traffic around the phrase “Trump is his own worst enemy” is a case study in the use and behavior of bots. It illustrates both the different types of bots in circulation, and the way in which they can distort online debate.
The traffic began with a number of newsbots, whose only purpose appears to be to share various news sources. These are generally neutral bots; their effect in this case was to create a small early spike in traffic, without having any impact on subsequent traffic, which dropped back to almost zero.
The traffic began to grow organically some hours later, but was distorted by the intervention of the “October bots,” a network of political influencers amplifying anti-Trump messages. Less than an hour after these bots engaged, the phrase was trending — probably due, in part, to their intervention.
Watson’s response to the trending message triggered another spike, boosted by a number of accounts with cyborg tendencies. Unlike the “October group,” these appear to have acted independently, their preferred mode of operation being rapid retweeting rather than sharing stories online. Ironically, their intervention helped keep the phrase they attacked trending.
Finally, the @every1bets group showcased yet another different sort of network. These accounts clearly belong in a single group, but do not appear to have a political focus; rather, they seem to have a commercial focus.
What is most striking is the sheer number of opposing bot groups involved in this one incident. The NYT story was amplified, in turn, by newsbots, anti-Trump bots, pro-Trump cyborgs, and a group of commercial bots — all in the space of a few hours and a few thousand tweets. Their interventions repeatedly distorted the Twitter conversation, as the spikes in the timeline demonstrate.
Thus the key lesson of the “Trump is his own worst enemy” incident is the extent to which automated accounts are shaping traffic — and, potentially, trends — in the U.S. political debate online.
Ben Nimmo is Senior Fellow for Information Defense at the Atlantic Council’s Digital Forensic Research Lab (@DFRLab).
Follow along for more in-depth analysis from our #DigitalSherlocks.