Making Headlines: How Bots Boost Disinformation

Hannah Kruglikov
Foundation for a Human Internet
5 min read · Jun 23, 2020
Bots are programmed to post autonomously and to mimic human behavior online.

There has been a lot of buzz around the concept of disinformation lately. All over social media and beyond, it is discussed only at the surface level. But disinformation, in its many forms, involves much more than meets the eye.

In this series, we will demystify disinformation: the problem, the anatomy, and the solution.

Lies and false information have been around longer than any of us can remember — so why is it that disinformation has only recently become such a huge problem? The answer is bots.

The term “bots” in and of itself does not imply anything nefarious. A bot is simply an application created to perform an automated task: there are chatbots, informational bots, game bots, you name it. The bots we are interested in here are social media bots.

Social media bots are bots created to interact on social media with a specific, programmed purpose, often to influence opinions or conversations online by driving engagement around certain hashtags, keywords, or links.

These bots are programmed to post autonomously and to mimic human behavior online, and can easily (and cheaply) be purchased in large networks called botnets and programmed for a purpose of the owner’s choosing.

With a large enough network of bots, the owner, or botnet herder, can effectively shape conversations, create trending topics, and manipulate public opinion in their favor.

In this visualization of the spread of the #SB277 hashtag about a California vaccination law, dots are Twitter accounts posting using that hashtag, and lines between them show retweeting of hashtagged posts. Larger dots are accounts that are retweeted more. Red dots are likely bots; blue ones are likely humans. Source: Onur Varol

To give you an idea of the scale of the problem, Twitter removed over 70 million accounts in May and June of 2018 in an effort to prevent the spread of false and “spammy” content, and Facebook reportedly removed 3.2 billion fake accounts between April and September of 2019 in an effort to do the same.

This kind of manipulation was on display in the 2016 United States presidential campaigns, during which 1 in 5 election-related tweets came from bots, according to a study from the University of Southern California. A study by researchers from the University of California at Berkeley and Swansea University found that this bot involvement may have accounted for 3.23 percent of the votes for Trump in the 2016 election, a margin which was “possibly large enough to affect the [outcome]”.

The term “trolls” refers to real people who push hateful and often false information on social media platforms in order to derail or manipulate conversations. Trolls can engage in the same sorts of online activity as bots, but with some notable differences, because trolls are human.

Source: https://tech.msu.edu/news/2019/10/beware-of-bots-and-trolls/

Trolls can be — but are not necessarily — paid to post in favor of a certain person or cause and promote a certain point of view in conversations online, sometimes in organized groups called “troll farms”.

In some cases, this payment and directive even comes from a government, as in the case of China’s wumao. Because they are real people, trolls are more expensive than bots. But they can also manipulate conversations and people to greater effect than an automated program (even an advanced one) can, and they can produce higher-quality content and maintain online presences that look more convincingly like those of real people (because, well, they are).

While bots do post their own content online, a large part of a bot’s role in spreading disinformation is not to create the content itself, but to create buzz, engagement, and seeming credibility around dangerous or extreme content put out by real people.

These sources are sometimes trolls themselves, but they can also be extremists (political or otherwise), conspiracy theorists, or anyone else who holds and expresses these sorts of views online.

Extreme views (sometimes false ones) have been held and voiced by real people since long before the internet. What bots do is give these voices unprecedented reach.

Reach on social media is influenced heavily by algorithms, the enigmatic forces that boost some content to the top of our feeds while other content is buried. Twitter’s algorithm, for example, determines which topics are trending by looking not only at the amount of engagement but also at how quickly that engagement is building: trends are marked by sharp spikes in engagement rather than by growth over a longer period of time.
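Twitter’s actual trending algorithm is proprietary, but the “spike beats slow growth” idea can be illustrated with a toy sketch. In the Python snippet below, every name and threshold (is_trending, spike_factor, min_volume, the hourly numbers) is made up for illustration; it is not how any real platform works, just a minimal model of velocity-based ranking.

```python
def is_trending(hourly_engagements, spike_factor=3.0, min_volume=500):
    """Toy spike detector: a hashtag 'trends' when the latest hour's
    engagement jumps well above its recent baseline, not merely when
    the totals are large. (Illustrative only; real platform algorithms
    are proprietary and far more sophisticated.)"""
    if len(hourly_engagements) < 2:
        return False
    *baseline, latest = hourly_engagements
    baseline_avg = sum(baseline) / len(baseline)
    # Require both a sharp relative jump and a minimum absolute volume.
    return latest >= min_volume and latest >= spike_factor * max(baseline_avg, 1)

# A hashtag with steady, heavy use vs. one with a sudden spike:
steady  = [900, 950, 1000, 1050, 1100]   # lots of engagement, slow growth
spiking = [40, 55, 60, 50, 2000]         # sharp spike in the latest hour

print(is_trending(steady))   # False: big numbers, but no spike
print(is_trending(spiking))  # True: the spike is what gets boosted
```

The point of the toy model is simply that a modest account can out-“trend” a much larger one if its engagement arrives all at once, which is exactly the pattern a botnet is built to manufacture.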

Say a fringe account with extreme views posts a tweet stating its (unfounded) belief about a politician. Under normal circumstances, the tweet would be seen only by a small audience of connections. But if a carefully programmed network of bots latches onto it, immediately liking, sharing, and replying, the platform’s algorithm suddenly sees it as high-quality emerging content and boosts it as a trending topic, where people like you and me see it and read those engagements as proof of credibility.
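Continuing the toy sketch above (and reusing its hypothetical is_trending helper), the snippet below simulates that scenario: the same fringe tweet with and without a few hundred bots piling on in a single hour. The numbers are invented for illustration only.

```python
import random

def simulate_engagement(organic_per_hour, bots=0, bot_actions_each=3, hours=5):
    """Toy model of hourly engagement on a fringe tweet. If bots are
    present, they pile on in the final hour, each liking/retweeting/
    replying a few times. Purely illustrative numbers."""
    history = [random.randint(*organic_per_hour) for _ in range(hours - 1)]
    final_hour = random.randint(*organic_per_hour) + bots * bot_actions_each
    return history + [final_hour]

random.seed(42)
organic   = simulate_engagement((30, 60))            # the tweet on its own
amplified = simulate_engagement((30, 60), bots=400)  # same tweet + a 400-bot botnet

# Uses is_trending from the sketch above.
print(is_trending(organic))    # False: it never leaves its small audience
print(is_trending(amplified))  # True: the manufactured spike gets boosted
```

In this rough model, a few hundred cheap automated accounts are enough to turn an obscure post into something the ranking logic treats as an emerging trend, which is the core of how bots launder fringe content into mainstream feeds.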

So, what now?

We know how disinformation spreads and how powerful it can be. All good information, but what do we do about it? Check back for our next installment, where we’ll cover all the ways you can combat disinformation from the comfort of your own phone.

If you like what you’re reading, be sure to applaud this story (did you know that you can hold down the applaud button and it’ll keep adding claps? It’s addictive!) and follow our channel!

What’s humanID?

humanID is a new anonymous online identity that blocks bots and social media manipulation. If you care about privacy and protecting free speech, consider supporting humanID at www.human-id.org, and follow us on Twitter & LinkedIn.

