#FamiliesBelongTogether Robotic Attack this Week

[Image: screen capture of a Twitter search earlier this week]

This week bots and cyborgs resurfaced.

These are fully or partly robotic social accounts pretending to be fully human.

Robotic accounts driving disinformation reliably return in times of division, distress, or decision, and they resurfaced this week just as they did in Virginia's and Alabama's recent elections, where I did bot-detection work for each of the Democratic campaigns.

It is as predictable as clockwork. I watch for this behavior regularly, though this research was not done for any client.

Immigration Crisis Begins

In recent weeks we faced a new national trauma, induced by the Trump Administration's new immigration policies. The policy generated a flood of images and audio of children forcibly removed from their parents, imprisoned, and worse. Facing a fierce backlash, Trump backtracked a step, from his original child-separation policy to a new policy of indefinite family imprisonment. All of this inspired a rising counter-wave of protest against Trump's actions.

I kept a close eye out for bots and cyborgs who might try to manipulate and take advantage of this new national crisis.

This week they did. They had a new target.

An organic and authentically human wave of activity coalesced around the main hashtag, #FamiliesBelongTogether. It grew into a key part of the organizing "brand" resisting Trump's new immigration and imprisonment policies. National gatherings with over 300,000 RSVPs were planned using this hashtag. New t-shirts, posters, and social graphics were created at a breakneck pace. The hashtag became the unifying theme.

Hashtags are now critical to the spread of information on Twitter and other social networks, driving shares, trends, views, and clicks.

You may have heard of "hashtag hijacking": a hashtag's creators lose control of the discussion, and the conversation is suddenly dominated by the exact opposite of the message they intended. I expected to see hijacking attempts on #FamiliesBelongTogether, but instead saw a new technique in action.

This time it wasn’t a hijack, it was a decoy attack.

Imagine instead how one hostile sailboat racer crosses in front of another boat and steals its wind. That is what these attacks were like.

Users created hashtags that "optically" looked almost identical to the true hashtag. The near-doppelganger hashtag was #FamilesBelongTogether. Look closely: the new hashtag omits the second "i." Easy to miss. They also used other misspelled variants of protest hashtags (#KeepFamilesTogether, etc.), but this was the central one.
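
To make the trick concrete, here is a minimal sketch of how a scanner might flag near-doppelganger hashtags with a classic edit-distance check. The threshold and sample tags are illustrative assumptions on my part, not a description of any specific production tool.

```python
# Sketch: flag hashtags within a small edit distance of a real campaign
# hashtag -- close enough to fool the eye, far enough to split its reach.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def find_decoys(true_tag: str, observed_tags: list, max_dist: int = 2) -> list:
    """Return observed hashtags that nearly, but not exactly, match the real one."""
    target = true_tag.lower()
    return [tag for tag in observed_tags
            if 0 < edit_distance(tag.lower(), target) <= max_dist]

# Illustrative usage with the tags discussed in this story:
observed = ["#FamiliesBelongTogether", "#FamilesBelongTogether", "#Resist"]
print(find_decoys("#FamiliesBelongTogether", observed))
# -> ['#FamilesBelongTogether']
```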

Why do this?

The robotic accounts' goal is to diffuse attention and pull it away from the true hashtag: to siphon off its social reach and lower its ranking in search and recommendation engines, which are driven by algorithms watching user interaction. To steal and lessen its social power.

While many actual human beings misspelled the hashtag by accident, our machine scans of Twitter accounts found clear evidence of an artificial, manufactured push taking advantage of this and amplifying the decoy hashtag dramatically.

It began last week with scattered anecdotal warnings from users like Scott Dworkin, and the effect spread quickly. Soon it was impossible to miss.

[Image: Twitter autofill search results last week, showing the decoy hashtag as the second most recommended result]

Search results were recommending the decoy hashtag #FamilesBelongTogether, often ahead of the true hashtag.

At its peak this past week, I found 22,000 tweets a day using the wrong hashtag.

This was no accident. This was manufactured. One of our scans showed that the decoy-hashtag tweets from just the last 72 hours had a potential reach of well over 8.9 million people.
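
For readers curious how a number like that is derived: "potential reach" is typically estimated by summing the follower counts of the unique accounts that posted the tag. A minimal sketch, assuming Twitter API v1.1-style tweet dictionaries:

```python
# Sketch: estimate the potential reach of a hashtag as the summed follower
# counts of the unique accounts posting it (a common, rough metric).
# Field names assume Twitter API v1.1-style tweet dictionaries.

def potential_reach(tweets: list) -> int:
    followers_by_user = {}
    for tweet in tweets:
        user = tweet["user"]
        # Count each account once, at its current follower count.
        followers_by_user[user["id"]] = user["followers_count"]
    return sum(followers_by_user.values())

# e.g. potential_reach(decoy_tweets_last_72h) exceeded 8.9 million in our scans
```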

Our Data: Our findings here are quantified using multiple different tool-sets, some internal and some third party, but the data all correlates and tells one story.

That said, I see this as early data. As I continue to scan and analyze activity surrounding this strategy, I welcome additional research and review from others in the bot detection and disinformation community. I will update this story as more data comes in.

Here is what I have found thus far:

Who Were the Top Users?

I looked at the accounts using the #FamilesBelongTogether decoy tag in two different ways, and found that the data from the two techniques corroborated each other.

Those Most Often Posting This Decoy Hashtag:

Example of Misspelled Decoy Hashtag

During the same 36-hour window, I identified the top 10 users who most often posted messages using the decoy hashtag. All ten posted, at minimum, a robotic average of 20 hours a day, 7 days a week.

Half of these accounts posted an average of 23 hours per day, 7 days a week, and three posted 24 hours a day, 7 days a week with no sleep.
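
Activity-hours figures like these fall straight out of tweet timestamps: count how many distinct hours of each day an account posted. A minimal sketch of that calculation (the input format is an assumption on my part):

```python
# Sketch: how many distinct hours per day is an account active?
# Humans sleep; an account active 23-24 hours a day, every day, is
# almost certainly not fully human.

from collections import defaultdict
from datetime import datetime

def active_hours_per_day(timestamps: list) -> float:
    if not timestamps:
        return 0.0
    hours_by_day = defaultdict(set)
    for ts in timestamps:
        hours_by_day[ts.date()].add(ts.hour)  # hour-of-day buckets per date
    return sum(len(hours) for hours in hours_by_day.values()) / len(hours_by_day)

# Flag rule used above: an average of >= 20 active hours/day, 7 days a week.
```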

These users score high on a number of bot-detection tool-sets, such as Indiana University's Botometer. Some, like NancyGo08054661, had the typical robotic long alphanumeric names, likely created en masse via a bot creator's script.
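
For those who want to replicate the scoring step, Botometer ships a Python client. Here is a rough sketch of checking one flagged account; credentials are elided, and the exact shape of the response varies by Botometer version:

```python
# Sketch: score a suspect account with Indiana University's Botometer,
# via the `botometer` Python package. Requires Twitter app credentials
# and a RapidAPI key (elided here as "...").

import botometer

twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...",
                          **twitter_app_auth)

result = bom.check_account("@NancyGo08054661")
print(result["display_scores"])  # bot-likeness scores; exact shape varies by version
```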

Below is the table showing the top #FamilesBelongTogether decoy hashtag posters:

[Image: table of the top 10 Twitter accounts using the decoy hashtag most often in 24 hours]

Hyper-Tweeters Using this False Hashtag:

Next I looked at the Twitter accounts using this decoy hashtag in another way: I scanned the most recent 4,000 users of the #FamilesBelongTogether decoy hashtag during this same window and analyzed their most recent 5,000 tweets.

I then sorted this list by the highest, most robotic "tweets per day" average over this window, and looked at the top 25 of these users.

By definition all were hyper-tweeters, but even so, we were surprised at the scale. The "least active" of the top 25 still posted at a highly robotic 361 tweets per day. The most active posted 936 tweets per day.
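
A "tweets per day" average of this kind can be computed from each account's recent tweet timestamps. Here is a minimal sketch of that ranking; the recent_tweets input structure is my own illustrative assumption:

```python
# Sketch: rank accounts by average tweets per day, computed over each
# account's most recent tweets. `recent_tweets` maps a screen name to
# that account's pulled tweet timestamps (an assumed input format).

from datetime import datetime

def tweets_per_day(timestamps: list) -> float:
    if len(timestamps) < 2:
        return float(len(timestamps))
    span_days = (max(timestamps) - min(timestamps)).total_seconds() / 86400
    return len(timestamps) / max(span_days, 1e-9)  # guard against zero span

def top_hyper_tweeters(recent_tweets: dict, n: int = 25) -> list:
    rates = {user: tweets_per_day(ts) for user, ts in recent_tweets.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:n]

# In our scan, the 25th-ranked account still averaged 361 tweets/day.
```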

Needless to say, humans don't post at these rates. These posters are at least cyborg accounts, if not full bots.

Of the top 25 hyper-tweeters of this decoy hashtag, only 3 wrote original posts. The other 22 posted retweets of what appear to be mostly human posts: some from verified government accounts, some from legitimate activist accounts. But ALL of the retweeted content used the misspelled hashtag, seemingly by accident.

This could enable the robotic accounts to amplify the decoy hashtag using actual human posts, turning those who mistakenly misspelled the hashtag into fodder for the attack.

If so, it allowed the robotic accounts to spread the decoy hashtag using more reputable accounts as unwitting sources. This decoy attack technique made the content look more "authentic" and more likely to be retweeted, spreading the decoy hashtag further and diluting the real hashtag's social reach.
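
Separating retweets from original posts is straightforward if you have the raw tweet objects. A minimal sketch, assuming Twitter API v1.1 objects, where retweets carry a retweeted_status field:

```python
# Sketch: what fraction of an account's output is retweets? In Twitter
# API v1.1 objects, a retweet carries a `retweeted_status` field.

def retweet_ratio(tweets: list) -> float:
    if not tweets:
        return 0.0
    retweets = sum(1 for t in tweets if "retweeted_status" in t)
    return retweets / len(tweets)

# 22 of the top 25 decoy-hashtag accounts behaved as near-pure retweet
# engines (ratio close to 1.0), amplifying humans' accidental typos.
```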

Here is the data on the decoy hashtag hyper-tweeters:

[Image: table of the top hyper-tweeters of the decoy hashtag]

Impact During the March

The effect on the day of the marches was hard to miss. Ben Wikler of MoveOn noticed and asked:

[Embedded tweet from Ben Wikler]

And with good cause. Here were the results from that moment, during the exact time marches across the country were occurring:

[Image: Twitter search results during the marches]

So who is behind these bots and cyborgs?

More research is needed, and I am digging into it. However, there are some hints from third-party sources as well.

The Hamilton 68 Dashboard is a tool that tracks a sample of Twitter accounts that have shown strong pro-Russian, pro-Kremlin views and have spread disinformation in the past. This past week it showed a version of the decoy hashtag as the third highest on its list of top trending hashtags promoted by pro-Kremlin accounts.

Another bot-tracking resource is BotSentinel.com, a project by programmer Chris Bouzy, who maintains his own sample of known bots and bad actors.

Over these last weeks, BotSentinel has also identified an upswell of over 407 highly likely robotic accounts, known to spread disinformation, using a version of the decoy hashtag.

What to do next:

First: a great deal more study should go into this decoy hashtag technique. It is a relatively new robotic tactic, riffing on the older trick of fake hashtags, and it merits much more analysis by the bot and disinfo community.

Second: users and social media managers need to become more aware of this type of decoy attack, and be more careful and precise about which hashtags they use and which posts they retweet.

Many of the most automated accounts "fueled" their attack using tweets from legitimate accounts that simply made a typo in the hashtag. Don't give them fodder; don't retweet without checking carefully.

Third: be aware, monitor, and know that this new type of robotic attack is now in their repertoire. Be ready. Upcoming events will surely induce new attacks as we face additional crises, advocacy battles over controversial issues, and, most importantly, the upcoming midterms.

All social media practitioners need to monitor for, and get early warning of, any decoy hashtag attack, along with other bot attacks.
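
As a starting point, an early-warning check can be as simple as periodically comparing the hashtags in a fresh sample of tweets against your campaign's official tags, reusing the find_decoys sketch from earlier in this story. The fetch_recent_sample and alert names below are placeholders for whatever collection and notification pipeline you use:

```python
# Sketch of an early-warning loop: every few minutes, pull a fresh tweet
# sample and alert on near-doppelganger hashtags. `fetch_recent_sample`
# and `alert` are placeholders for your own collection and notification
# pipeline; `find_decoys` is the edit-distance helper sketched earlier.

import re
import time

CAMPAIGN_TAGS = ["#FamiliesBelongTogether", "#KeepFamiliesTogether"]

def extract_hashtags(text: str) -> list:
    return re.findall(r"#\w+", text)

def monitor(fetch_recent_sample, alert, interval_s: int = 300) -> None:
    while True:
        observed = {tag for tweet in fetch_recent_sample()
                    for tag in extract_hashtags(tweet["text"])}
        for true_tag in CAMPAIGN_TAGS:
            for decoy in find_decoys(true_tag, sorted(observed)):
                alert(f"Possible decoy hashtag: {decoy} (vs {true_tag})")
        time.sleep(interval_s)
```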

[Image: Larry Brilliant]

Larry Brilliant is one of the world's leading scientists focused on how pandemics spread, which in many ways mimics how disinformation can spread out of control and infect our knowledge systems.

We would all be wise to listen to Larry Brilliant's sage advice about how to stem pandemics, and apply it to social media attacks. He said the most important factor is: "Early Detection and Early Response."