Should You Be Worried About Twitter Bots?

CLX Forum · Aug 9, 2019

In the months before the 2016 U.S. presidential election, there was widespread concern in the media that Twitter bots and botnets were influencing voters. Here in Canada, the 2018 Ontario provincial election saw evidence of Twitter bots supporting Doug Ford, and now, with a federal election looming, they’ve hit the news once again: reports surfaced in July that the trending hashtag #Trudeaumustgo was driven by a Twitter bot.

What is a social media bot, anyway? Social media bots are most commonly seen on Twitter, although they also appear on other platforms such as Facebook and Instagram. They operate more or less autonomously and are set up to pass as legitimate, real users. In addition to following users and sending direct messages (DMs), they can tweet content and retweet anything posted by a specific set of users, or anything that features a specific hashtag.
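To make the mechanics concrete, here is a minimal sketch of what that kind of automation can look like. It assumes the Python Tweepy library (version 4.x) and placeholder credentials; the hashtag and behaviour shown are purely illustrative, not taken from any actual bot.

```python
# Minimal illustrative sketch, assuming Tweepy 4.x and placeholder credentials.
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholder values
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
api = tweepy.API(auth)

# Retweet recent posts carrying a target hashtag and follow their authors,
# the kind of amplification behaviour described above.
for tweet in tweepy.Cursor(api.search_tweets, q="#SomeHashtag").items(20):
    try:
        api.retweet(tweet.id)
        api.create_friendship(screen_name=tweet.user.screen_name)
    except tweepy.TweepyException:
        pass  # already retweeted, protected account, rate limit, etc.
```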

What are they used for? Twitter bots are often set up in order to steer an online discussion in a particular direction or to influence other users of the platform to change their opinions about an issue.

How can you tell it’s a bot? Although some bots are harder to detect than others, there are some common characteristics that many Twitter bots share (a rough scoring sketch follows the list):

· The account was created recently

· The account username contains numbers, suggesting automatic name generation

· The account is remarkably active and tweets frequently throughout the day

· The account does a lot of retweeting, and rarely tweets original content

· There is often no biography associated with the account, or the bio is very short

· If there is a user photo, the person may be wearing sunglasses, a clue that the photo may have been generated by a neural network
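None of these signals is conclusive on its own, but taken together they can be combined into a simple score. The sketch below is illustrative only: it assumes the account metadata has already been fetched from the Twitter API, and the field names and thresholds are arbitrary examples rather than established cut-offs.

```python
from datetime import datetime, timezone

def bot_likelihood_score(account):
    """Rough 0-5 score based on the signals listed above.
    `account` is assumed to be a dict of metadata already fetched from
    the Twitter API; field names and thresholds are illustrative."""
    score = 0
    age_days = max((datetime.now(timezone.utc) - account["created_at"]).days, 1)
    if age_days < 90:                                     # created recently
        score += 1
    if any(ch.isdigit() for ch in account["username"]):   # numbers in the handle
        score += 1
    if account["tweet_count"] / age_days > 50:            # very high tweet volume
        score += 1
    if account["retweet_ratio"] > 0.9:                    # almost all retweets
        score += 1
    if len(account.get("bio", "")) < 10:                  # missing or very short bio
        score += 1
    return score
```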

What is Twitter doing about them? Recently, Twitter has been more responsive to reports of bot accounts. In July 2019, CBC News reported that a bot network involved in a campaign directed at Toronto’s Sidewalk Labs was quickly shut down: once Twitter was notified about the accounts, it suspended them. Twitter has also cracked down on fake accounts: more than 70 million accounts were suspended over a two-month period in 2018, and the company has recently implemented more stringent rules regarding automation.

How automated are they? It’s important to note that these rules don’t ban automation. For example, developers are explicitly allowed to create solutions that generate auto-replies to users who engage with content, or that automatically respond to users with DMs. This can make it hard to identify bot accounts. Many Twitter accounts that appear to be bots — if only because of the sheer volume of posts that they generate — may in fact belong to real people who are using legitimate applications. And there’s always the human factor: in 2018, The Guardian reported that many Russian bots are “unlikely to be automated at all,” and that they are actually accounts run by individuals who are paid to “sow discord in Western politics.”
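For contrast, here is roughly what one of those permitted auto-replies might look like. This is a sketch only, assuming Tweepy 4.x, placeholder credentials, and a hypothetical customer-support account; it simply sends a DM to anyone who mentions the account.

```python
import tweepy

# Hypothetical support account auto-replying to mentions with a DM,
# the kind of automation Twitter's rules explicitly allow.
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholder values
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
api = tweepy.API(auth)

for mention in api.mentions_timeline(count=10):
    api.send_direct_message(
        recipient_id=mention.user.id,
        text="Thanks for reaching out! A member of our team will follow up shortly.",
    )
```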

Twitter bots for good? Although their malicious incarnations dominate the news, Twitter bots aren’t always bad, and they can even be a force for good. The Twitter account LA QuakeBot, for example, uses data from the USGS to tweet about earthquakes in real time. And in Canada, the YK Climate Watch Twitter bot posts each day’s temperature and compares it to the historical average, in order to highlight the impact of global warming on Arctic regions. A Canadian journalist has also set up a Twitter bot that uploads PDF files of “already filed-and-processed Access to Information and Privacy requests that the federal government has not publicly released,” making it easier for journalists and researchers to gain access to the information.
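As a rough illustration of how a benign bot like QuakeBot can work, the sketch below reads the USGS public earthquake feed and formats status updates. The feed URL is real, but the magnitude threshold and the final posting step are assumptions added for illustration.

```python
import requests

# Public USGS GeoJSON feed of earthquakes detected in the past hour.
USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def recent_quake_updates(min_magnitude=3.0):
    """Yield human-readable status lines for recent quakes above a threshold.
    Posting them to Twitter would be a separate, authenticated step."""
    features = requests.get(USGS_FEED, timeout=10).json()["features"]
    for quake in features:
        props = quake["properties"]
        if props.get("mag") and props["mag"] >= min_magnitude:
            yield f"Magnitude {props['mag']} earthquake near {props['place']}"

for update in recent_quake_updates():
    print(update)  # a real bot would call the Twitter API here instead
```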

Will we see more bots in the future? Twitter accounts used to power bots are easily purchased on the internet (not just the dark web) for as little as $45. “Vintage” accounts created 5–10 years ago cost more, as do accounts that have a verified phone number associated with them, but they’re still affordable.

Ironically, as more and more legitimate news and information is spread through social media, bots gain even more power. They gain traction by entering conversations that are quoted by the media and other legitimate users. And with advancements in technologies such as neural networks and AI, there’s little doubt that bad actors will seek to exploit them, making social bots even harder to detect in the future.

Check out the CLX Forum blog and follow the CLX Forum on LinkedIn and Facebook to keep up to date with the latest happenings in the world of cyber security.

Interested in becoming a contributor? If you’ve got a topic that you feel is important to your peers, we want to hear from you! Get involved today by visiting: https://www.clxforum.org/get-involved/

The Cybersecurity Leadership Exchange Forum (CLX Forum) is a thought leadership community created by Symantec.