Quarantined! Examining the Effects of a Community-Wide Moderation Intervention on Reddit
This blog post summarizes a paper evaluating the effects of quarantining, a community-wide moderation intervention that Reddit uses to label offensive communities as inappropriate. The paper was published in ACM Transactions on Computer-Human Interaction (TOCHI) and was invited for presentation at the 25th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).
Over the past few years, many scholars and activists have persistently called for social media platforms to remove content that promotes coded racism, misogyny, and conspiracy theories. At the same time, there is growing recognition that platforms need to better facilitate critical discussions of sensitive social issues like race, masculinity, and immigration. In light of the hotly contested arguments about when and how to intervene, platforms have begun to consider softer alternatives to outright bans. In this paper, we examine the influence of one such approach, called quarantining, deployed by the popular social media platform Reddit.
When Reddit quarantines a subreddit, visitors are shown a splash page that requires them to explicitly opt in to viewing its content (see Fig. 1 above). Additionally, the quarantined subreddit and its posts stop appearing in Reddit's indexing and search results. In this paper, we present a case study that examines the effects of quarantining two influential Reddit communities: r/TheRedPill (TRP) and r/The_Donald (TD).
Quarantining fundamentally differs from banning, which has proven effective in reducing hate-based behavior. Banning shuts a space down permanently, forcing its members to leave. A quarantined space, in contrast, though isolated from the rest of the site, remains available for member participation, albeit with some added design friction. We chose to focus on the immediate effects of the quarantine on the “treated” subreddits, TRP and TD, given that these forums remained accessible to users. We ask the following research questions:
RQ1: How were the participation levels within TRP and TD affected by the quarantine?
RQ2: To what extent was the influx of new users to TRP and TD affected by the quarantine?
RQ3: How was the use of misogynistic and racist language within TRP and TD, respectively, affected by the quarantine?
Methods. We answer our research questions by examining observational data from Reddit through a temporal analysis of TRP and TD’s subreddit timelines. These timelines included all comments and submissions made in the quarantined subreddits from six months before to six months after they were quarantined. Working with over 85M posts and comments from Reddit, we chose metrics that include posting volume and the frequency of words drawn from hate speech lexicons. We then used causal inference methods to examine variations in levels of interaction and of racism/misogyny, accounting for ongoing temporal trends within TRP and TD as well as Reddit-wide trends. Figure 2 above provides an overview of the research pipeline employed in this paper.
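To make this style of analysis concrete, here is a minimal, hedged sketch of the two ingredients described above: a segmented (interrupted time series) regression of a weekly activity metric around the quarantine date, and a simple lexicon-based rate of flagged words. It is written in Python with pandas and statsmodels; the toy data, column names, and placeholder lexicon are our illustrative assumptions, not the paper's released code or exact specification.

```python
# Hedged sketch (not the authors' released code): segmented regression around the
# quarantine week, plus a lexicon-based language metric. All data below is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# --- Toy weekly series: 26 weeks before and 26 weeks after the quarantine ---
rng = np.random.default_rng(0)
weeks = np.arange(-26, 26)                       # week 0 = quarantine week
post = (weeks >= 0).astype(int)                  # indicator: after the intervention
volume = 1000 + 2 * weeks - 300 * post + rng.normal(0, 40, weeks.size)
df = pd.DataFrame({"week": weeks, "post": post, "volume": volume})

# Segmented regression: pre-existing trend ('week'), level change at the quarantine
# ('post'), and change in slope afterwards ('weeks_since').
df["weeks_since"] = np.where(df["post"] == 1, df["week"], 0)
model = smf.ols("volume ~ week + post + weeks_since", data=df).fit()
print(model.params)                              # 'post' captures the immediate level change

# --- Lexicon-based language metric: share of tokens that appear in a hate lexicon ---
LEXICON = {"exampleslur1", "exampleslur2"}       # placeholder; real analyses use curated lexicons

def lexicon_rate(comments):
    """Fraction of whitespace tokens that appear in the lexicon."""
    tokens = [t.lower().strip(".,!?") for c in comments for t in c.split()]
    return sum(t in LEXICON for t in tokens) / max(len(tokens), 1)

print(lexicon_rate(["an innocuous comment", "exampleslur1 appears here"]))
```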
Findings. We found that the quarantine severely disrupted the influx of new users to both subreddits: the rate of new users joining TRP and TD dropped drastically, by over 79.5% and 58%, respectively. We also found that already-assimilated users within both subreddits changed their posting volume: activity levels of TRP users decreased substantially (by 52.4%, a statistically significant change relative to the corresponding control subreddits), while activity levels of TD users increased slightly (by 3.8%, a change that was not statistically significant relative to the corresponding control subreddits). Despite these changes in posting activity, we found little change in levels of misogynistic (TRP) or racist (TD) language.
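As a purely illustrative aid, the relative changes reported above can be read as pre/post percentage changes in a metric, compared against the change observed in a matched control subreddit. The numbers below are hypothetical and are not the paper's data.

```python
# Hypothetical numbers for illustration only; not taken from the paper's dataset.
def pct_change(pre: float, post: float) -> float:
    """Percentage change from the pre-quarantine level to the post-quarantine level."""
    return 100.0 * (post - pre) / pre

treated = pct_change(pre=2000, post=410)    # e.g., new users per month in a treated subreddit
control = pct_change(pre=1800, post=1750)   # the same metric in a matched control subreddit
print(f"treated: {treated:+.1f}%  control: {control:+.1f}%  gap: {treated - control:+.1f} pts")
```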
Implications. For HCI, our work demonstrates the efficacy of introducing simple design friction to counteract antisocial behavior in online communities. For online moderation, we provide a computational framework that internet platforms can adopt to evaluate the effectiveness of moderation interventions. Given the philosophical debates around whether platforms should be allowed to ban certain types of speech, quarantining offers a softer alternative to banning. Our findings show one way that platforms can use low-cost design solutions like quarantining to strike a compromise between containing antisocial activities and preserving freedom of speech.
For more details about our motivations, methods, findings, and design implications, please check out our full paper, published in TOCHI in 2022. For questions and comments about the work, please email Shagun Jhaver at shagun.jhaver [at] rutgers [dot] edu.
Citation:
Eshwar Chandrasekharan, Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2022. Quarantined! Examining the Effects of a Community-Wide Moderation Intervention on Reddit. ACM Trans. Comput.-Hum. Interact. 29, 4, Article 29 (August 2022), 26 pages. https://doi.org/10.1145/3490499
Note: Shagun and Eshwar, the first two authors, contributed equally to this research.