Shady Business: The Dangers of Shadowbanning

Hannah Kruglikov
Foundation for a Human Internet
Jul 8, 2020

Recently, a member of the humanID team, browsing in an incognito window, opened a tweet to which humanID had replied and, much to his surprise, did not see our reply. He scrolled to the bottom, hit the “Show more replies” button, and there it was.

A reply from humanID’s Twitter account was hidden in the “More replies” section. Screenshot by author, from https://twitter.com/Joseph_Marks_/status/1276205347720564737.

What’s going on here?

While the general public (and increasingly, large companies) seems to agree that hate speech and dangerous mis/disinformation should be moderated and removed by social media platforms, any manipulation of content beyond that is generally frowned upon. Thus, social media platforms cannot simply delete or block users whose content they don’t like—the public wants moderation, not censorship.

So, if a company wants to keep (what it views as) unsavory content off its platform without inciting anger over censorship, what is it to do? Luckily for these companies, the tech world has an answer to this quandary: shadowbanning.

The term “shadowbanning” might sound very cryptic and mysterious, but almost anyone who has used social media has likely seen it in action. Shadowbanning refers to all methods by which online platforms bury specific content without blocking it outright, and exists on all major social media platforms in some form.

Twitter’s “More replies” section is a prime example: the content is still there, but it has been purposely hidden, greatly limiting its reach and visibility. Some Instagram users have noticed that their hashtagged posts were not appearing under searches for that hashtag, and Facebook’s algorithm has been shown to systematically limit the reach of certain content, using methods which it patented in 2019.
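To make the mechanism concrete, here is a minimal sketch in Python of how a reply-ranking step could quietly bury content from flagged accounts. Everything in it is hypothetical — the Reply class, the shadowbanned_accounts set, the engagement score, and the rank_replies function are invented for illustration, and none of it reflects how Twitter, Instagram, or Facebook actually implement their ranking.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    text: str
    engagement_score: float  # hypothetical quality/engagement signal

# Hypothetical set of accounts the platform has quietly flagged as "spammy".
shadowbanned_accounts = {"suspicious_account_1", "suspicious_account_2"}

def rank_replies(replies: list[Reply]) -> tuple[list[Reply], list[Reply]]:
    """Split replies into the visible thread and a hidden "More replies" bucket.

    Replies from flagged accounts are never deleted; they are simply sorted
    into the hidden bucket, so their authors see nothing unusual while most
    readers never scroll far enough to find them.
    """
    visible, hidden = [], []
    for reply in sorted(replies, key=lambda r: r.engagement_score, reverse=True):
        if reply.author in shadowbanned_accounts:
            hidden.append(reply)  # silently demoted, not removed
        else:
            visible.append(reply)
    return visible, hidden
```

The point of the sketch is that nothing is deleted: the reply still exists and its author sees it as usual, but it is quietly sorted into a bucket that most readers will never open.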

Source: Dado Ruvic/Reuters

Why is it a problem?

Social media platforms have clear terms of service which lay out what behavior is and isn’t allowed on the platform. If users break these terms, they are liable to have the offending post(s) deleted, or even to be banned from the platform altogether. In this standard content moderation process, users know what is and isn’t allowed, and if their post is removed or they are banned, they are made aware of it and given a reason.

When content moderation occurs outside of these stated guidelines, however, transparency—and with it, accountability—is lost. That is the problem.

Shadowbanning is aptly named: it is moderation that happens silently and in secret. Because content is not actually removed and users are not actually banned, platforms are able to blacklist accounts and effectively silence certain voices without those users ever finding out.

Currently, shadowbanning is generally employed to limit the reach of accounts that are behaving in a “spammy” manner (we believe this is what got us buried in the “More replies” section), or that are otherwise acting in a way the platform does not like but which does not technically violate its terms of service. You could call this moderation, since its intention is generally to foster more civil discussion by reducing the influence of users who appear to be detracting from other users’ experience.

No matter what you call it, though, this practice gives platforms the power to decide at will what is and isn’t acceptable content, and to effectively block out content they deem unacceptable—censorship by definition.

What next?

While the social media team at humanID is upset that our content was buried, we are even more concerned about the potential harm shadowbanning poses to the integrity of our information ecosystem. Content could be systematically hidden in ways that skew conversations and debates in the direction of the platform’s choosing, or that bury criticism of executives, politicians, or governments with which the platform is affiliated, among many other possibilities. If censorship is to be fully understood and tackled, shadowbanning must also have its uncomfortable turn in the spotlight.

If you like what you’re reading, be sure to applaud this story (did you know that you can hold down the applaud button and it’ll keep adding claps–it’s addictive!) and follow our channel!

What’s humanID?

humanID is a new anonymous online identity that blocks bots and social media manipulation. If you care about privacy and protecting free speech, consider supporting humanID at www.human-id.org, and follow us on Twitter & LinkedIn.
