Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit

Shagun Jhaver
Published in ACM CSCW
Aug 13, 2019


This blog post summarizes a paper that investigates how removal explanations affect future user activity on the social media site Reddit. The paper will be presented at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) in Austin, Texas, where it received a Best Paper Award.

Social media platforms usually make content moderation decisions without explaining to end-users why those decisions were made. Prior research suggests that this secrecy often frustrates users, who may come to suspect that the platforms are biased in some way. Would it help platforms to instead be transparent about their processes? Would community outcomes improve if platforms engaged with users and explained the reasoning behind their moderation decisions?

In this paper, we contribute one of the first studies that explores the effects of transparency in moderation decisions on user behavior. Our research focuses on one important aspect of transparency in content moderation: the explanations users receive about why their submissions were removed. Using a sample of 32 million Reddit posts, we characterize the removal explanations provided to Redditors and link them to measures of subsequent user behavior, including future post submissions and future post removals.

Figure 1: An example explanation message provided through a comment to a removed submission.

On Reddit, moderators can provide removal explanations in a variety of ways: They can comment on the removed post with a message that describes the reason for removal (Figure 1). Alternatively, they can flair the removed post (Figure 2), or send a private message to the submitter. Moderators can either choose to compose the removal explanation themselves, or they can configure automated tools to provide such explanations when the submission violates a community guideline. Our analyses investigate how these different modes and sources of removal explanations affect user behaviors differently.

Figure 2: An example explanation message provided by flairing the removed submission. Username has been scrubbed to preserve the anonymity of the submitter.
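
For readers unfamiliar with Reddit's moderation tooling, here is a minimal sketch of what these three explanation modes look like programmatically, using the third-party PRAW library. The credentials, subreddit, post ID, and message text are illustrative placeholders, not artifacts from the study:

```python
import praw

# Illustrative credentials; not from the paper.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="mod_account",
    password="PASSWORD",
    user_agent="removal-explanation-demo",
)

submission = reddit.submission(id="abc123")  # hypothetical post ID

# Remove the post, then explain the removal.
submission.mod.remove()

# Mode 1: explanation as a comment on the removed post (cf. Figure 1).
comment = submission.reply(
    body="Your post was removed because it violates Rule 2 (no self-promotion). "
         "Please review the sidebar before resubmitting."
)
comment.mod.distinguish(how="yes", sticky=True)  # mark as an official mod comment

# Mode 2: explanation as a flair on the removed post (cf. Figure 2).
submission.mod.flair(text="Removed: Rule 2")

# Mode 3: explanation as a private message to the submitter.
submission.author.message(
    subject="Your post was removed",
    message="Your post broke Rule 2; see the community guidelines.",
)
```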

Applying topic modeling techniques to a corpus of 22K removal explanations, we found that explanations not only provide information about why submissions are removed, but also reveal the mechanics of how moderation decisions are made and attempt to mitigate the frustration resulting from content removals.
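
As a rough illustration of the technique (not the paper's exact pipeline), here is a minimal LDA-style topic modeling sketch over a toy corpus; the example explanations and the topic count are arbitrary placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus; the study analyzed ~22K removal explanations.
explanations = [
    "Your post was removed for violating rule 2, no self promotion",
    "Removed: please use the weekly question thread for simple questions",
    "This was removed automatically; message the mods if this is a mistake",
    # ...
]

# Bag-of-words representation of the explanation messages.
vectorizer = CountVectorizer(stop_words="english", min_df=1)
doc_term = vectorizer.fit_transform(explanations)

# Fit an LDA model; the number of topics is a tuning choice.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Print the top words per topic to interpret what each topic captures
# (e.g., the rule violated, how the decision was made, softening language).
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```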

Figure 3: Flowchart depicting the data preparation. We collected the posting history for a sample of <user, subreddit> pairs between March and October 2018. Next, we split this posting history for each pair and aggregated posts to create the H_past and H_future datasets.

To analyze the effects of past removal explanations on future behaviors, we collected the posting history of 4.7 million <user, subreddit> pairs between March and October 2018 (Figure 3). Building logistic regression models on these data, we made the following observations (a simplified modeling sketch follows the list):

[O1] A high past removal rate for a user is associated with (a) lower odds of that user posting in the future, and (b) higher odds of that user experiencing a post removal in the future.

[O2] When moderated users are provided with explanations, (a) their odds of posting in the future decrease, and (b) their odds of experiencing a post removal in the future also decrease.

[O3] Having a higher fraction of explanations offered through comments, rather than through flairs, is associated with (a) higher odds of users posting in the future, and (b) lower odds of users experiencing a post removal in the future.

[O4] Explanations provided by human moderators did not have a significant advantage over explanations provided by bots for reducing future post removals.
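
To make the modeling concrete, here is a simplified sketch of one such logistic regression on entirely synthetic data; the variable names and coefficients are illustrative stand-ins for the paper's measures, not its actual model specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000  # synthetic <user, subreddit> pairs; the study analyzed 4.7M

# Entirely synthetic covariates, loosely mirroring the paper's setup.
df = pd.DataFrame({
    "past_removal_rate": rng.uniform(0, 1, n),  # share of past posts removed
    "got_explanation": rng.integers(0, 2, n),   # received a removal explanation?
})

# Synthetic outcome: does the user post again in H_future?
logit_p = 0.5 - 1.2 * df["past_removal_rate"] - 0.3 * df["got_explanation"]
df["posted_future"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression of future posting on past removals and explanations.
model = smf.logit("posted_future ~ past_removal_rate + got_explanation", data=df).fit()

# Exponentiated coefficients are odds ratios; values below 1 correspond to
# lower odds of posting in the future (cf. [O1a] and [O2a]).
print(np.exp(model.params))
```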

Our calculations suggest that if explanations were provided for 100% of removals on Reddit, the odds of future post removals would decrease by 20.8%. Offering explanations could therefore meaningfully reduce the moderators' workload. We also found that only a small proportion (0.6%) of the Reddit communities in our data chose to provide removal reason messages. Explanations are thus an underutilized moderation mechanism, and site managers should encourage moderators to offer them for content removals. Providing explanations may also communicate to users that the moderation team is committed to transparency and to being fair in its removals.
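
As a back-of-envelope illustration of how a figure like this is read off a logistic regression, the percent change in odds follows directly from the fitted odds ratio; the coefficient below is chosen only to reproduce the reported 20.8% and is not taken from the paper:

```python
import math

# Hypothetical fitted coefficient for the "received an explanation"
# predictor in a logistic regression of future post removal.
beta = math.log(1 - 0.208)  # chosen so the odds ratio comes out to 0.792

odds_ratio = math.exp(beta)
pct_change = (odds_ratio - 1) * 100
print(f"odds ratio = {odds_ratio:.3f}, change in odds = {pct_change:+.1f}%")
# -> odds ratio = 0.792, change in odds = -20.8%
```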

Although our regression analyses establish the effectiveness of explanation comments over explanation flairs ([O3]), our data show that flairs are used far more often than comments to provide explanations (87% versus 11%). This may be because flairs are much shorter, and therefore easier for moderators to provide, than comments. Even so, our findings suggest that it may be worthwhile for Reddit moderators to take the time to write out explanations for content removals as comments rather than tagging the post with a short flair. At a broader level, these results indicate that conducting amiable, individualized correspondence with moderated users about their removed posts may be an effective way for content moderators to nurture potential contributors.

We also note that [O4] suggests an opportunity to deploy automated tools more widely for providing explanations. We expect that the field of explainable AI can offer valuable insights for improving the quality of the explanations these tools provide.

In summary, our results suggest that taking an educational, rather than a punitive, approach to content moderation can improve community outcomes. For more details about our methods, findings, and design implications, please check out our full paper that will be published in Proceedings of the ACM on Human-Computer Interaction (CSCW) 2019. For questions and comments about the work, please drop an email to Shagun Jhaver at sjhaver3 [at] gatech [dot] edu. Citation:

Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2019. Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit. In Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW, Article 150 (November 2019). ACM, New York, NY. 27 pages. https://doi.org/10.1145/3359252
