On August 24, an ostensibly Russian Twitter account named “Lizynia Zikur” (handle @kirstenkellog_), with just 74 followers, posted an angry tweet attacking U.S. news website ProPublica as an “alt-left #HateGroup and #FakeNews site.” Within hours, the post had been retweeted over 23,000 times. A second account followed up with a similar attack the following day.
Analysis shows that both attacking tweets were retweeted massively because they were amplified by a large, and probably rented, network of automated “bot” fake accounts, origin unknown.
The account which posted the original tweet was followed by accounts which started out posting in Russian; it may well have been of Russian-language origin itself. The account which posted the second tweet posed as a Russian speaker, but seems to have used Google Translate to do so.
The major amplification was conducted by botnets whose primary purpose appears commercial. Their origin cannot and should not be attributed to any one group without further evidence.
English tweet, Russian account
The article which likely triggered the attack was one in which ProPublica assessed Russian and alt-right activity after the Charlottesville riots. Headlined “Pro-Russian bots take up the right-wing cause after Charlottesville,” the article drew, in part, on @DFRLab’s research.
ProPublica therefore shared a screenshot of the tweet with @DFRLab.
The first question is the identity of the “Lizynia” account, and whether it is likely to originate from the Russian or English speaking world. Since the account reacted to an article criticizing both the alt-right and Russia, either is theoretically possible.
The tone of its (English-language) tweet was characteristic of the alt-right; the language of the account itself is Russian, and tagged to the town of Bagrationovsk, in Russia’s Kaliningrad region, near Poland and Lithuania.
There is no mechanism which would allow us to verify the location or identity. A reverse search of the profile picture did not return other results; a Google search for the name only returned copies of the same tweet.
One slight clue lies in the phrase “ProPublica is alt-left #HateGroup.” This is a non-native rendition: it omits “an”, which is a characteristic failing of Russian speakers, but also of other language groups. However, such phrasing is easily faked; it is too slender a thread to hang a conclusion on.
One thing which can be said is that the “Lizynia” user is singularly reticent. As of August 24, 2017, the account had posted twelve tweets in its almost three-year existence; only one showed up on the profile, indicating that the others had already been deleted. This is, in itself, curious; it also reduces the amount of evidence available.
Meet the “B” team bots
More can be deduced about Lizynia’s followers. Most appear to be automated “bots” in a small network, pre-programmed to amplify online messages; and this network does appear to originate in the Russian-speaking world.
@DFRLab viewed the account’s follower page before it was shut down, revealing a curious pattern: with one exception (an account which followed “Lizynia” after its final tweet), every single one of the 76 followers had a surname beginning with A, B or C, mostly B.
The follower page has been archived, preserving some of the names: Marly Brideaux, Demi Bangs, Ona Buesnel, Jos Blofeld, Cilla Backshill, Juhana Blowin, Justinas Blunsom, Julijana Bloy, Hyacinth Bigby, Manel Breeton, Katlyn Boich, Eleanore Batch, Cedar Augie, Silas Cafe, Kelleigh Bollum, Jacinda Blackley and Kelebek Bollon.
These accounts have a number of features in common. They were created in 2014; they claim to be located in the United Kingdom; they only posted a few dozen times, but followed hundreds of accounts. Their handles do not match their screen names.
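These shared traits amount to a simple checklist for flagging accounts of this type. The sketch below is purely illustrative, not @DFRLab’s actual method; the profile fields, thresholds, and the sample account’s details are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    screen_name: str
    handle: str
    created_year: int
    tweet_count: int
    following_count: int
    location: str

def bot_signals(acc: Account) -> list[str]:
    """Return the suspicious traits an account exhibits.

    Mirrors the traits noted above: 2014 creation, claimed UK location,
    very few tweets, hundreds of follows, and a handle bearing no
    resemblance to the screen name. Thresholds are illustrative.
    """
    signals = []
    if acc.created_year == 2014:
        signals.append("created in 2014")
    if "United Kingdom" in acc.location or acc.location == "UK":
        signals.append("claims UK location")
    if acc.tweet_count < 50:
        signals.append("only a few dozen posts")
    if acc.following_count >= 300:
        signals.append("follows hundreds of accounts")
    # Crude mismatch check: no word of the screen name appears in the handle
    name_parts = [p.lower() for p in acc.screen_name.split()]
    if not any(p in acc.handle.lower() for p in name_parts):
        signals.append("handle does not match screen name")
    return signals

# A profile shaped like those in the "B" group (numbers and handle invented)
suspect = Account("Marly Brideaux", "qx81ln4ve", 2014, 31, 412, "United Kingdom")
print(bot_signals(suspect))
```

No single trait is conclusive on its own; it is the combination of all five on dozens of accounts that points to automation.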
The image from “Marly Brideaux” matches that of a Russian post shared on Pikabu.ru, the Russian equivalent of Reddit, in 2014. The use of a photo located elsewhere online is a frequent sign of a fake account:
Another of the group, “Kelebek Bollon,” appeared to post a phenomenal 1,610 tweets in four hours as this article was being written:
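Posting rate alone is a strong indicator here. Extrapolated to a full day, “Kelebek Bollon’s” burst vastly exceeds any plausible human pace (benchmarks used by bot researchers commonly treat even 72 tweets per day as suspicious). A minimal sketch of the arithmetic:

```python
def tweets_per_day(tweets: int, hours: float) -> float:
    """Extrapolate an observed posting burst to a daily rate."""
    return tweets / hours * 24

# "Kelebek Bollon": 1,610 tweets observed in four hours
rate = tweets_per_day(1610, 4)
print(round(rate))  # 9660 tweets per day at this pace
```

At over 400 tweets per hour, sustained for hours, no human operator is plausible.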
More importantly, these accounts post very similar content, heavy on emojis; many of their tweets share posts from a series of Twitter accounts beginning with the letters “@Con”, most of which are already blocked:
Their posts sometimes share identical wording, but with different emojis:
Taken together, these factors indicate that the accounts — all of which, it will be remembered, followed “Lizynia” — are part of a small and relatively lazy botnet, automating posts to amplify other, English-language accounts.
However, some of these apparent bots appear to have had a prior career. “Justinas Blunsom”, for example, was also created in 2014 (albeit late in the year), tweets rarely, follows hundreds of accounts, and claims to be based in the UK, in this case in Inverness, Scotland.
While its most recent tweets follow the pattern outlined above, there is a distinct gap in its activity. Since January 3, 2017, its posts have amplified the “@Conc” group. Until January 26, 2015, however, it posted in Russian:
Rather than emojis, it posted text-only comments on Ukrainian and Russian celebrities and politics, such as the following:
The first post appears to share, word for word, a headline from Ukrainian website 112.ua on Ukraine’s general election of 2014. The second appears to share, again word for word, a headline from Russian blog ermakinfo.wordpress.com. Other posts concerned the “South Stream” gas pipeline, the U.S. military commitment to Poland, Ukrainian celebrity Dasha Astafieva, and Russian-Lithuanian star Kristina Orbakaite. All appeared to reproduce news headlines from a range of Russian-language sources.
The same applies to “Julijana Bloy”, whose interval between Russian tweeting and English tweeting lasted from April 2014 to January 2017:
Each account follows around 1,000 others, and they, too, are predominantly Russian-language:
This is not conclusive, but does suggest that the “Lizynia” account is supported from, and may have originated from, the Russian-speaking world.
However, the main amplification of “Lizynia’s” tweet came from a different source. That amplification was astonishingly high: “Lizynia” had just 74 followers at the time of posting, and was not following a single account. Racking up over 23,000 retweets in a few hours is phenomenally unlikely to happen organically.
@DFRLab therefore began looking at the accounts which retweeted “Lizynia’s” post.
In this case, a rapid look was sufficient to establish that the tweet was being boosted by bots. Three of the accounts which retweeted it most recently were attractive blondes named “Shelly Wilson”, “Bernadette White”, and “Julia James.” “Shelly’s” Twitter handle at least resembled its name (@ShellyW38433328), albeit with a random string of numbers appended, but both “Bernadette” and “Julia” had handles made up of random alphanumeric strings (@KDpdX3QORYWWt5b and @4yML2iZDKEdpPJ0 respectively).
This is a classic sign of a large-scale botnet, in which the naming of fake accounts is automated by a random generator.
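Such machine-generated handles can themselves be flagged heuristically. The sketch below is an illustrative rule of thumb, not a rigorous classifier: it looks for the two patterns seen in these accounts, a name with a long trailing digit run, or a 15-character jumble that alternates rapidly between upper case, lower case, and digits.

```python
import re

def looks_autogenerated(handle: str) -> bool:
    """Heuristic: does a Twitter handle look machine-generated?

    Flags two patterns seen in the retweeting accounts:
    - a plausible name followed by a long digit run (@ShellyW38433328)
    - a 15-character mixed-case alphanumeric jumble (@KDpdX3QORYWWt5b)
    The thresholds are illustrative assumptions.
    """
    if re.search(r"\d{6,}$", handle):  # long trailing run of digits
        return True
    if len(handle) == 15:  # Twitter's maximum handle length
        # Count switches between lower-case, upper-case, and digit classes
        classes = ["d" if c.isdigit() else ("u" if c.isupper() else "l")
                   for c in handle]
        switches = sum(1 for a, b in zip(classes, classes[1:]) if a != b)
        return switches >= 6  # real names rarely alternate this much
    return False

for h in ["ShellyW38433328", "KDpdX3QORYWWt5b", "4yML2iZDKEdpPJ0", "JuliaDavisNews"]:
    print(h, looks_autogenerated(h))
```

A genuine handle such as @JuliaDavisNews passes the test; all three of the suspect handles fail it.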
However, the damning factor was that this was not a case of three accounts portraying three beautiful blondes: it was a case of three accounts portraying the same beautiful blonde, with one of the images flipped.
All three were created in June and July 2017. All three posted almost exclusively retweets. All three featured, as their pinned tweet, salacious images of brunettes, with an invitation to a Google-shortened URL.
The only reasonable conclusion is that these are all fake accounts, set up to amplify other users’ tweets.
The same can be said for many of the other accounts in the series. Compare, for example, the profiles of “Denise Miller” (@DeniseM57732396) and “Ella Russell” (@ak1PrNTEK01VfMA):
Note the identical picture (with a slightly different shade), the creation dates five days apart, and the pinned tweet.
Compare also the profiles of “Wendy Dowd” (@JWPCWUfivsNbeGI), “Abigail Mackenzie” (@Nmng8dUr20mbN9g) and “Audrey Russell” (@QnNFNCLNf4A6FKZ):
This shows exactly the same technique of creation: an image which is retouched or flipped, the use of plausible screen names with alphanumeric handles, and a creation a few days apart. The very high likelihood is that all these fake accounts were created by the same person or group as part of a single network.
Not all the amplifiers of “Lizynia Zikur’s” tweet had female images. Male amplifiers included “Richard Campbell” (@hiiziqont):
As of August 24, this account had posted just 45 tweets. Of those, 44 were retweets; the only exception was its first post, a message of devotion to its boyfriend, whom it called “babe”:
This tweet is word for word the same as one posted by (female) user @kaciehcl85 in 2016:
Moreover, “Richard’s” image has been used for at least two other Twitter accounts, set up at the end of 2014 and the beginning of 2015:
Again, every indication is that “Richard” is a bot.
The account “Jabari Washington” (@noje1990) also had a male avatar; it also only posted one original tweet, its first, all the rest being retweets; and that tweet also matched the wording of another tweet posted in 2014. Again, this appears to be a bot.
Despite their different apparent characters, many of the accounts in the network posted the same content, such as this:
This is clearly a botnet — a network of fake accounts set up to appear like real human users and retweet others’ content.
Same net, different tweet
The sheer number of retweets which “Lizynia” received gives an indication of the size of this botnet. Another indication comes from a separate tweet which was drawn to @DFRLab’s attention by Twitter user Julia Davis (@JuliaDavisNews), an expert in Russian disinformation and propaganda.
Julia Davis has a little over 20,000 followers on Twitter; her posts tend to gather retweets in the tens or low hundreds. One tweet which was retweeted over 7,000 times in a few hours therefore caught her attention:
The accounts which retweeted her bear an uncanny resemblance to those which retweeted “Lizynia”:
These bots swarmed the retweets, as these screenshots indicate:
Just as before, the accounts were created in the space of a few days, the handles were alphanumeric, and the images were flipped versions of one another. These can only realistically be the product of the same bot factory; and given the number of retweets involved, it is a botnet which, in all probability, numbers thousands of accounts.
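The “same order” symptom can be quantified: take the accounts that retweeted two unrelated tweets, and measure what fraction of account pairs appear in the same relative order in both lists. Organic retweeting yields roughly chance agreement; scripted accounts firing in sequence yield a score near 1.0. This is an illustrative sketch with invented account names; real retweeter lists would come from the Twitter API.

```python
def ordering_similarity(seq_a: list[str], seq_b: list[str]) -> float:
    """Fraction of account pairs appearing in the same relative order
    in both retweeter lists. 1.0 means identical ordering, a strong
    automation signal; random organic ordering scores around 0.5.
    """
    common = [x for x in seq_a if x in seq_b]
    pos_b = {x: i for i, x in enumerate(seq_b)}
    pairs = same = 0
    for i in range(len(common)):
        for j in range(i + 1, len(common)):
            pairs += 1
            if pos_b[common[i]] < pos_b[common[j]]:
                same += 1
    return same / pairs if pairs else 0.0

# Hypothetical retweeter lists for two unrelated tweets
tweet1 = ["bot_a", "bot_b", "bot_c", "bot_d", "human_x"]
tweet2 = ["bot_a", "bot_b", "bot_c", "bot_d", "human_y"]
print(ordering_similarity(tweet1, tweet2))  # 1.0: same bots, same order
```

Two genuinely independent audiences would almost never retweet in lockstep; a shared controller firing the same account list does so every time.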
Bots for hire?
However, it does not appear to be a primarily political botnet. The “Lizynia” post attacked an article which had implicated pro-Russian accounts in the Charlottesville response; Julia Davis’ tweet exposed Russian military attacks on Ukraine, which is hardly a report pro-Russian bots would be likely to amplify (unless they were operated by a simple algorithm which featured, for example, the command to retweet posts mentioning both #Russia and #Ukraine).
These bots make almost no original posts. “Wendy Dowd” made just four, all cryptic, including the first:
“Abigail Mackenzie” appears to have made just two, of which this was the first:
Other retweets from the network covered a wide range of issues, as these screenshots from “Amelia Gibson’s” timeline demonstrate:
They also spanned (or spammed) a wide range of languages, as “Abigail Mackenzie’s” timeline shows:
The likelihood is that this is a commercial network, a set of “bots for hire”, either bought by an unknown user to amplify posts of interest to them, or programmed to generate random, multi-lingual posts, or both.
However, there is no organic indication of its origin. The accounts are recent creations; their linguistic mix is so broad that it is not possible to draw a conclusion about the origin. The original posts are few, cryptic, and uninformative. Their behavior appears consistent with a retweets-for-hire or follows-for-hire network, and its geographical origin cannot be deduced from the available open-source information.
The next round
ProPublica reported the Twitter attack on them, and the “Lizynia” account was suspended. Some hours later, however, another account, again ostensibly Russian, renewed the attack:
This account is older than “Lizynia,” having been created in March 2012, but it is just as idle, having posted only six tweets in its career, all of them on August 25:
This account targeted ProPublica and its staff with a series of aggressive tweets in a mixture of (apparent) Russian and English:
The account is tagged to Kaliningrad; the phrase “South will rise again”, another reference to the American Civil War, has the same blindness to the grammatical article as has been seen elsewhere. However, one significant factor, which includes graphic language, suggests that this is not a Russian account, but an account masquerading as a Russian.
The post “У меня 1 миллион ботов” is grammatical Russian for “I have 1 million bots.” However, the post immediately above it — “удар мой член” — is gibberish in Russian, combining the noun for a strike or blow, удар, with the masculine nominative for “my member”.
When the phrase is pasted into Google Translate, however, it emerges as “blow my dick”, with no indication that the word “blow” is not a verb, but a noun. And when the search is reversed, and the term “blow my dick” is entered into Google Translate, it comes out as “удар мой член”.
It therefore appears probable that the person behind the “Victor Thawnzgauk” account is an English-speaker using Google Translate to pose as a Russian, not a genuine Russian.
As with the “Lizynia” tweet, this was grossly over-amplified, by over 12,000 accounts. While not as obviously belonging to the same network as the earlier group, they demonstrated the same behavior, posting in multiple languages on a wide selection of themes. They were also new (many created in late June), and retweeted and liked the attacking tweet in almost exactly the same order, a classic symptom of automation:
Every indication is that this is another botnet, being switched on to harass ProPublica. There has been a general alignment of narratives between the alt-right and pro-Kremlin commentators, meaning that a Russian identification cannot be ruled out; however, there is no reliable evidence which would confirm such an identification.
The tweet which attacked ProPublica may well have originated from the Russian-language world. The account which posted it, now suspended, was presented in Russian, and followed by a network of accounts which began their lives posting in Russian. (It should be noted that this is not the same as originating in Russia: their early tweets concerned events across the former USSR, not just in Russia).
While such features can be counterfeited, it is possible that this part of the Twitter attack came from Russian-speakers.
The “Victor” account, however, appears to be masquerading as Russian, via Google Translate. This may, of course, involve a degree of double-bluff, but on the available evidence it would be unwise to term this account “Russian”; “pseudo-Russian” is more accurate.
The tweets from both “Lizynia” and “Victor” were amplified by a major bot effort. These networks are newer, larger, more active, and less subtle than the small group of accounts which followed “Lizynia”. While their bot nature is clear, their origin is not. It is reasonable to suspect that the “Lizynia” account came from the Russian-speaking world, but the botnets which amplified it and “Victor” appear to be commercial in origin, not political, and their geographical origin cannot be established.
Ben Nimmo is Senior Fellow for Information Defense at the Atlantic Council’s Digital Forensic Research Lab (@DFRLab).