Reddit and the Issues With Online Anonymity

Neel Patel
SI 410: Ethics and Information Technology
Feb 22, 2022
Reddit wallpaper. Image from Upvoted.

Have you ever hesitated to do something online because you didn’t want anyone (a specific person, the company providing the online service, etc.) to find out that it was you doing it? Perhaps searching for an odd piece of information on Google? Or liking a vulgar post on Instagram? That sort of hesitation is rarely prompted on Reddit, the forum-based social media platform that describes itself as the “front page of the Internet.” Unlike Google, Instagram, and countless other online services, Reddit allows account creation without supplying any personally identifiable information. That’s right: no tedious phone or email validation is necessary. All you need to become a Reddit user (Redditor) is a username and password; and if you’re not sure about a username, a pretty strange yet funny list of suggestions is given to you! As a result of this policy, Redditors can be anonymous to both their peers and Reddit itself. More specifically, a Redditor can be pseudonymous, meaning they’re anonymous until they’ve revealed enough information to expose their true identity. Of course, Reddit, the government, and your Internet service provider will all know your device’s IP address, but that’s a piece of information used to identify computers, not people. All in all, the ability to be incognito enables Redditors to exercise freedom in a manner that may be unavailable to them on other services. This freedom, however, comes at a steep cost: online anonymity on Reddit affords a degree of flexibility that fosters malicious and deceitful behavior.

Reddit’s Content Policy

To understand how exactly Reddit’s anonymous nature tolerates spiteful actions, it’s helpful to evaluate the platform’s Content Policy, which includes eight rules and a non-exhaustive list of enforcement mechanisms. Here, I’ll briefly summarize the rules and reproduce the list of enforcement mechanisms. Note that in the context of Reddit, a community (subreddit) is a specific forum and is often indicated by a preceding “r/” (e.g., r/wholesomememes).

Rules

  1. Be humane: don’t harass, bully, or threaten other users
  2. Follow community rules and don’t manipulate content
  3. Don’t invade the privacy of another user
  4. Don’t post or suggest sexual content involving minors
  5. Don’t impersonate another entity (person, group of people, organization, etc.)
  6. Label content and communities appropriately (e.g., graphic)
  7. Don’t post illegal content
  8. Don’t crash Reddit or interrupt its use

Enforcement Mechanisms

  • “Asking you nicely to knock it off”
  • “Asking you less nicely”
  • “Temporary or permanent suspension of accounts”
  • “Removal of privileges from, or adding restrictions to, accounts”
  • “Adding restrictions to Reddit communities, such as adding NSFW tags or Quarantining”
  • “Removal of content”
  • “Banning of Reddit communities”

On the surface, the combination of these rules and enforcement mechanisms seems great for Redditors. As long as you’re not being rude or creepy, you can more or less be your true self! Also, if a rule is broken, there are several enforcement mechanisms in place to handle the situation. Digging deeper, however, several problems emerge with Reddit’s Content Policy.

Firstly, rules can be broken at a tremendous scale. As of January 2021, Reddit has over 50 million daily active users, 100,000 active communities, and 13 billion posts and comments. At this magnitude, it’s virtually impossible for Reddit administrators to oversee every piece of content on their platform. This, of course, is an issue on anonymous and non-anonymous services alike, but it is inherently escalated on anonymous services, where users don’t personally know each other and therefore may not feel compelled to report issues. Notably, each Reddit community has a set of moderators. But even then, a community’s users can vastly outnumber its moderators.
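To make that imbalance concrete, here is a back-of-the-envelope sketch. The platform-wide totals come from Reddit’s published January 2021 figures cited above; the per-community membership and moderator counts are purely hypothetical illustrations, not Reddit data:

```python
# Reddit's published totals (January 2021)
daily_active_users = 50_000_000
active_communities = 100_000

# On average, each community corresponds to hundreds of daily users
avg_users_per_community = daily_active_users / active_communities

# Hypothetical large community: 1,000,000 members, 20 volunteer moderators
members, moderators = 1_000_000, 20
members_per_moderator = members / moderators

print(f"Average daily users per community: {avg_users_per_community:,.0f}")
print(f"Members per moderator (hypothetical community): {members_per_moderator:,.0f}")
```

Even under these rough assumptions, each volunteer moderator would be responsible for tens of thousands of members, which is why rule-breaking content can slip through.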

Moreover, each of the enforcement mechanisms laid out in Reddit’s Content Policy is reactive rather than proactive. They function more as punishments, or consequences, than as true enforcement mechanisms. Significant harm can occur before a situation is brought under control.

Lastly, anonymity may cloud the judgment of users, administrators, and moderators. It can be easy to forget that Redditors are people, not robots, when no one is aware of anyone’s identity. Accordingly, efforts to discipline or call out rulebreakers may lag or never materialize.

Issues with Online Anonymity

Online platforms such as Reddit that offer anonymity often do so with the intent of expanding free speech, protecting privacy, and providing safety. The issues with online anonymity don’t stem from these intentions. I mean, who doesn’t want their voice heard? Rather, problems arise because online anonymity makes people feel shielded, as if they’re untouchable and can’t face consequences for their actions. A report by the Pew Research Center shows that online anonymity incites uncivil and manipulative behaviors. In turn, hate speech, misinformation, troll posts, and a variety of other forms of spiteful or disingenuous information are introduced. A similar sentiment is held by Kathleen A. Wallace in the opening remarks of Online Anonymity:

Anonymity can also be brought about in a variety of ways and there are many purposes, both positive and negative, that anonymity could serve, such as, on the positive side, promoting free expression and exchange of ideas, or protecting someone from undesirable publicity or, on the negative, hate speech with no accountability, fraud or other criminal activity.

Wallace adds that the purposes of anonymity (for both those seeking it and those offering it) can be grouped into three non-mutually exclusive categories: furthering action, protecting people (or organizations) from actions by others, and preserving processes. It’s tempting to see each of these categories as leading only to positive outcomes, but harmful results can stem from each. What if hate crimes are being furthered? What if a cyberbully is being protected? What if a racist process is being preserved?

Two of the more prominent issues associated with online anonymity appearing on Reddit are hate speech and misinformation. Both are complicated by the fact that they take several forms, which can make it challenging for Reddit administrators or moderators to recognize when an act of hatred or deception is taking place. An article from the Social Science Research Council identifies three levels of hate speech: early warning signs, violence and incitement, and dehumanization and demonization. The image below, from a First Draft article on fake news by Claire Wardle, shows different types of misinformation. Disinformation, as explained by Wardle, is a subset of misinformation involving “the deliberate creation and sharing of information known to be false.”

Different types of misinformation (and disinformation), as determined by Wardle. Image from First Draft.

Controversies on Reddit Stemming from Online Anonymity

Throughout its history, Reddit has housed numerous controversies. In large part, these controversies are consequences of affordances offered by online anonymity.

r/fatpeoplehate

At its peak, r/fatpeoplehate (FPH) had over 150,000 followers and ostensibly revolved around encouraging healthy bodies. That in and of itself isn’t terrible; eating healthily and exercising are commonly associated with preventing physical and mental health issues. FPH, however, didn’t exactly maintain a friendly environment in advocating its message. Instead of giving advice or holding open discussions on how to lead a healthy life, the FPH community relied on harassing and ridiculing fat people. Under the veil of online anonymity, Redditors posted images of overweight or obese individuals and proceeded to fat shame them. Indeed, FPH truly reflected its name, as it cultivated a hateful and toxic atmosphere for fat people. The community was particularly critical of the fat acceptance and Health at Every Size movements, which aimed to promote body diversity and remove the negative connotations surrounding obesity and fatness. It’s not entirely surprising, then, that FPH’s popularity grew significantly when Tess Holliday, an overweight model and spokesperson for the fat acceptance movement, landed on the cover of People magazine in 2015. In the same year, Reddit banned FPH and four smaller subreddits, as they stood in direct opposition to its new anti-harassment policy. The banning led to an outcry among Redditors, with many petitioning to have then-CEO Ellen Pao fired. Overall, FPH and the fat shaming it engendered illustrate Wallace’s point that online anonymity can serve to further actions.

Example posts from FPH. Image from Reddit (via Wayback Machine).

The “Chimpire”

Though societies are in constant flux, one element has remained pervasive throughout humanity’s shared history: racism. It’s little wonder, then, that racism found a place to fester on Reddit. From 2013 to 2015, a massive network of subreddits dedicated to racism against black individuals flourished: the “Chimpire.” Spanning over 40 subreddits, the Chimpire upheld anti-black sentiment and promoted racist propaganda. The names of the subreddits that comprised the Chimpire are telling of the network’s nature. They included the following:

  • r/greatapes and r/apewrangling (dehumanization of blacks by comparing them to apes)
  • r/trayvonmartin and r/ferguson (references to publicized shootings of black teenagers)
  • r/detoilet and r/chicongo (racist stereotypes about cities associated with large black populations)
  • r/niggers and r/watchniggersdie (direct embeddings of the N-word)

The Southern Poverty Law Center found that the Chimpire’s content varied tremendously, from discussions of achieving a societal state free of blacks to violent videos of black men dying. It’s easy to get away with supporting or posting such content when you’re anonymous; imagine, by contrast, the backlash you could receive for displaying hatred toward a particular race on, say, Facebook, where people know who you are. The Chimpire perfectly exemplifies how online anonymity can protect individuals from actions, as argued by Wallace.

The same anti-harassment policy that led to FPH’s banning later resulted in the banning of several of the Chimpire’s subreddits and eventually the network’s demise.

2016 United States Presidential Election Misinformation

Perhaps one of the most polarizing events in recent American history, the 2016 United States (US) presidential election emerged as a hot topic on Reddit in the year leading up to its occurrence. There’s nothing wrong with political debate; in fact, it’s one of the great liberties of living in a free nation. Nevertheless, when discussions are held anonymously, it becomes quite difficult to separate fact from fiction, as users lack awareness of who or what is reliable. This problem manifested on Reddit in discourse about the election. According to a study by researchers at the University of Massachusetts, Reddit saw a substantial number of links to websites spreading misinformation about the election. The share of such links was dominated by subreddits supporting the Republican party (e.g., r/the_donald), which won the election. The study suggests that the misinformation efforts may have been coordinated, connecting them directly to a concern recognized by Wardle:

When messaging is [coordinated] and consistent, it easily fools our brains, already exhausted and increasingly reliant on heuristics (simple psychological shortcuts) due to the overwhelming amount of information flashing before our eyes every day. When we see multiple messages about the same topic, our brains use that as a [shortcut] to credibility.

Graph showing the increase in Reddit posts linking to pages with false or misleading information during the buildup to the 2016 US presidential election. Image from arXiv.

COVID-19 Misinformation

Do you remember shopping at a grocery store with empty shelves? Or reading reports of people hoarding toilet paper? I’m referring to the onset of the COVID-19 pandemic, when paranoia was widespread. To a considerable degree, misinformation on Reddit contributed to this paranoia, and its consequences were likely amplified by online anonymity, as Redditors could not readily verify whether they were communicating with someone who had a credible epidemiological background. The two main subreddits used for disseminating COVID-19 information were r/coronavirus and r/china_flu. Reddit announced in February 2020 that r/coronavirus would be the platform’s official forum for COVID-19 discussions. An unintended side effect was that r/china_flu became heavily filled with conspiracy theories about COVID-19, particularly those suggesting that the Chinese government constructed the virus in a lab. Over time, COVID-19 misinformation spread to numerous subreddits, as indicated by an analysis carried out by the COVID-19 Healthcare Coalition (C19HCC). The analysis linked misinformation occurrences to bot activity, which is significant given that Wardle identified bots as a mechanism for executing “sophisticated disinformation campaigns.”

Tableau dashboard showing subreddits containing suspicious COVID-19 information (r/coronavirus and r/china_flu have relatively low suspiciousness scores due to the sheer number of posts and comments they receive). Image from C19HCC.
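The caveat in that caption points at a normalization issue: a raw count of flagged posts will always be dominated by high-traffic subreddits. A minimal sketch of rate-based scoring makes the distinction clear (the subreddit figures and the scoring rule below are made up for illustration and are not taken from the C19HCC analysis):

```python
# Hypothetical counts: (flagged_posts, total_posts) per subreddit.
subreddits = {
    "r/coronavirus": (900, 1_000_000),        # huge volume, few flags per post
    "r/small_conspiracy_sub": (400, 2_000),   # small volume, many flags per post
}

def suspiciousness(flagged: int, total: int) -> float:
    """Fraction of a subreddit's posts that were flagged as suspicious."""
    return flagged / total

for name, (flagged, total) in subreddits.items():
    print(f"{name}: {suspiciousness(flagged, total):.4f}")
```

Under a per-post rate like this, the small subreddit scores far higher despite having fewer flagged posts in absolute terms, matching the caption’s point that r/coronavirus and r/china_flu look less suspicious once their sheer volume is accounted for.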

Conclusion

All told, online anonymity enables hateful and deceptive actions to thrive on Reddit. FPH and the Chimpire, through fat shaming and racism respectively, represent instances of online anonymity furthering hate speech. Misinformation on Reddit about the 2016 US presidential election and COVID-19 illustrates online anonymity leading to deceit. Ultimately, Reddit shows that when the freedom provided by online anonymity is left unchecked, issues arise.

References

Abad-Santos, A. (2015, June 11). Why Reddit’s ban on Fat People Hate is ripping it apart. Vox. Retrieved March 6, 2022, from https://www.vox.com/2015/6/11/8767035/fatpeoplehate-reddit-ban

Bahador, B. (2020, November 17). Classifying and identifying the intensity of hate speech. Social Science Research Council. Retrieved February 17, 2022, from https://items.ssrc.org/disinformation-democracy-and-conflict-prevention/classifying-and-identifying-the-intensity-of-hate-speech/

Dewey, C. (2015, June 12). Censorship, fat-shaming and the ‘Reddit revolt’: How Reddit became the Alamo of the Internet’s ongoing culture war. Washington Post. Retrieved March 6, 2022, from https://www.washingtonpost.com/news/the-intersect/wp/2015/06/12/censorship-fat-shaming-and-the-reddit-revolt-how-reddit-became-the-alamo-of-the-internets-ongoing-culture-war/

Hankes, K. (2015, March 10). Black hole. Intelligence Report. Retrieved March 7, 2022, from https://www.splcenter.org/fighting-hate/intelligence-report/2015/black-hole

Hern, A. (2015, December 30). How Reddit took on its own users – and won. The Guardian. Retrieved March 7, 2022, from https://www.theguardian.com/technology/2015/dec/30/reddit-ellen-pao

McEwan, B. (2017, July 10). CNN-Reddit saga exposes tension between the internet, anonymity and power. The Conversation. Retrieved February 15, 2022, from https://theconversation.com/cnn-reddit-saga-exposes-tension-between-the-internet-anonymity-and-power-80662

MITRE. (2020, April 14). Dis/Mis information related to COVID-19 on Reddit. COVID-19 Healthcare Coalition. Retrieved March 9, 2022, from https://c19hcc.org/resources/detecting-misinformation/

Newsround. (2021, February 26). Social media: Should people be allowed to be anonymous online? BBC. Retrieved February 17, 2022, from https://www.bbc.co.uk/newsround/56114122

Nithyanand, R., Schaffner, B., & Gill, P. (2017). Online political discourse in the Trump era. arXiv. https://arxiv.org/abs/1711.05303

Rainie, L., Anderson, J., & Albright, J. (2017, March). The future of free speech, trolls, anonymity and fake news online. Pew Research Center. https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/

Reddit. (n.d.-a). Homepage: Reddit by the numbers. Reddit. Retrieved February 15, 2022, from https://www.redditinc.com/

Reddit. (n.d.-b). Reddit content policy. Reddit. Retrieved February 15, 2022, from https://www.redditinc.com/policies/content-policy

Strain, D. (2020, July 2). As the coronavirus spread, 2 social media communities drifted apart. CU Boulder Today. Retrieved March 9, 2022, from https://www.colorado.edu/today/2020/07/02/coronavirus-spread-2-social-media-communities-drifted-apart

Wallace, K. A. (2008). Online Anonymity. The Handbook of Information and Computer Ethics, 165–184. https://doi.org/10.1002/9780470281819.ch7

Wardle, C. (2017, February 16). Fake news. It’s complicated. First Draft. Retrieved February 15, 2022, from https://firstdraftnews.org/articles/fake-news-complicated/
