Do Redditors Support Community Rules?

In r/ChangeMyView the answer is yes. But also no.

Vinay Koshy
ACM CSCW
5 min read · Sep 26, 2023

--

One of the first recorded sets of rules, Hammurabi’s Code (image source: ctj71081 on Flickr)

This blog post summarizes the paper Measuring User-Moderator Alignment on r/ChangeMyView. The paper will be presented at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing, a top venue for social computing scholarship, and published in the journal Proceedings of the ACM (PACM). The paper can be viewed here.

Many social media platforms, like Reddit, Discord, and Twitch, rely on a community-based content moderation model. Rather than enforcing a single set of rules platform-wide, individual communities within the platform are allowed to have their own bespoke moderation policies. Typically these rule sets are created and enforced by small teams of volunteer content moderators. In theory, this is great: communities can adopt the rules best suited to their individual goals and can enforce those rules in a context-sensitive manner. However, moderators operate with relatively little oversight; neither community members nor platforms themselves have much say in how rules are enforced. Can communities resolve disagreements over moderation policy?

Given the fluidity of online community memberships, it's possible that users who disagree with a community's rules simply move to other groups. On the other hand, network effects could make this difficult: it's hard to move to another community if nobody else is doing it. Although CSCW researchers have argued for a number of potential interventions and tools to help improve rule alignment within communities (e.g. removal notifications for transparency, jury systems, formal voting systems for creating new rules, etc.), we don't really understand the exact degree or nature of misalignments between users and moderators. This makes it hard to know which communities would benefit most from which interventions.

To help fill this gap in the literature, we launched a study in collaboration with the subreddit r/ChangeMyView. Our primary goal was to measure rule alignment between users and moderators. To solidify the notion of “alignment,” we divided it into two primary axes: a “policy-practice” axis, and an “awareness-support” axis. This yielded four measures of alignment:

  1. Policy-awareness: Do users know what the rules are?
  2. Practice-awareness: Do users know how moderators apply those rules to particular cases?
  3. Policy-support: Do users agree with the rules?
  4. Practice-support: Do users agree with how moderators apply those rules?

One can imagine a community in which some of these aspects of alignment are high while others are low. For example, users might broadly support a community rule banning the posting of misinformation (high policy-support), but strongly disagree over whether particular posts contain misinformation or not (low practice-support).

To operationalize these measures, we distributed a two-part survey to participants in r/ChangeMyView (r/CMV). In the first part of the survey, users audited a series of moderation cases previously handled by r/CMV moderators. For each case, users predicted how r/CMV moderators would handle the case and stated their preferred case outcome. Comparing user responses against the real-life decisions made by moderators allowed us to measure practice-awareness and practice-support. In the second part of the survey, users were asked questions about community rules directly. First, they were asked to identify r/CMV rules from a list containing both actual r/CMV rules and a set of decoys (policy-awareness). Then they were shown a list of the community's actual rules and asked to rate their support for each one on a 5-point Likert scale (policy-support).
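As a rough illustration of how the two direct policy measures could be computed from survey responses, here is a minimal sketch. The rule names, response sets, and ratings below are invented for illustration; the paper's actual survey instrument and data differ.

```python
# Hypothetical sketch of the two policy-alignment measures described above.

def selection_rate(selections, rules):
    """Fraction of (respondent, rule) pairs where the rule was selected."""
    total = len(selections) * len(rules)
    picked = sum(rule in chosen for chosen in selections for rule in rules)
    return picked / total

# Each (invented) respondent picks the rules they believe are real r/CMV rules
# from a mixed list of real rules (R*) and decoys (D*).
real_rules = ["R1", "R2", "R3"]
decoy_rules = ["D1", "D2", "D3"]
responses = [
    {"R1", "R2", "D1"},
    {"R1", "R2", "R3"},
    {"R2", "D2"},
]

# Policy-awareness: real rules should be selected far more often than decoys.
policy_awareness_real = selection_rate(responses, real_rules)
policy_awareness_decoy = selection_rate(responses, decoy_rules)

# Policy-support: share of 5-point Likert ratings at 4 or above for one rule.
likert_ratings = [5, 4, 2, 5, 4]
policy_support = sum(r >= 4 for r in likert_ratings) / len(likert_ratings)
```

The gap between the real-rule and decoy-rule selection rates is what distinguishes genuine rule awareness from respondents simply endorsing anything plausible-sounding.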

Fig 1: An example of how we recreated past comments for survey-takers to re-adjudicate. The post title and body appear at the top; the comment under review is highlighted in yellow and nested below the comment to which it replies.

In general, our findings suggest relatively high policy alignment and low practice alignment. For 4 of the 5 r/CMV rules, at least 70% of respondents rated their support for the rule at 4 or higher, suggesting high policy support. Participants also identified the real subreddit rules from the list at significantly higher rates than the decoy rules (82% vs. 30% selection rate), suggesting some level of rule awareness.

Fig 2: On the left: distribution of Likert ratings for each of the r/CMV rules, which skew clearly toward positive ratings (rule 4 most strongly, rule 3 least). On the right: proportion of times each rule was selected in our rule-awareness task, with real r/CMV rules in magenta and decoy rules in grey. Real rules were selected at almost twice the rate of decoys.

For our practice-alignment measures, we used a hierarchical Bayesian model to estimate the correlation between the proportion of users supporting or predicting removal and the actual decision made. These correlations were generally low to moderate for both measures, ranging from .15 for rule 1 (95% CI: [.02, .27]) to .45 for rule 4 (95% CI: [.31, .59]). In general, users supported and predicted comment removals at a much lower rate than their actual occurrence. Notably, these results hold up even after adjusting for potential survey response bias, suggesting a degree of robustness in our findings.
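The paper's hierarchical Bayesian model is beyond a short sketch, but a much simpler stand-in conveys the core idea: correlate the per-comment share of respondents favoring removal with the moderators' binary decision, and bootstrap a rough interval. All numbers below are invented for illustration and do not come from the study.

```python
# Simplified stand-in for the paper's hierarchical Bayesian estimate:
# a plain Pearson correlation with a percentile-bootstrap interval.
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Per-comment share of respondents who supported removal, and the
# moderators' actual decision (1 = removed, 0 = kept). Invented data.
support_share = [0.1, 0.3, 0.2, 0.7, 0.6, 0.9, 0.4, 0.8]
mod_removed = [0, 0, 0, 1, 0, 1, 1, 1]

r = pearson(support_share, mod_removed)

# Percentile bootstrap over comments for a rough 95% interval.
rng = random.Random(0)
n = len(support_share)
boot = []
for _ in range(2000):
    idx = [rng.randrange(n) for _ in range(n)]
    ys = [mod_removed[i] for i in idx]
    if len(set(ys)) < 2:  # degenerate resample: correlation undefined
        continue
    boot.append(pearson([support_share[i] for i in idx], ys))
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
```

A hierarchical Bayesian model improves on this sketch by pooling information across comments and rules and by propagating the uncertainty in each per-comment proportion, which matters when only a handful of respondents labeled each case.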

Fig 3: Estimated 95% CIs for the correlation between user-supplied labels and ground-truth moderator decisions, by rule. Awareness and support intervals are similar for each rule, with widths of roughly .25 centered on values ranging from low (.15 for rule 1) to moderate (.45 for rule 4). In general these correlations suggest practice support and awareness are limited.

Although our study was constrained to a single subreddit, we believe there are two key takeaways for online communities more generally. First, we found policy support to be high but practice support to be low. A possible explanation is that because a community's rules are relatively transparent on Reddit (usually displayed in the community's sidebar), it's easier for users to opt into communities where they support the existing policies. In contrast, there aren't many ways for users to see how moderators apply the rules, making it harder for users to identify communities where they support the specific manner in which rules are enforced. Platforms may benefit from providing greater transparency into the specifics of community rule enforcement.

Second, our study serves as a valuable proof of concept for how alignment can be measured. At present, moderators are not provided with built-in tools for polling community opinion to guide their moderation approach. Rather than having platforms dictate interventions for improving rule alignment, we argue that communities should be empowered to conduct their own internal polling and adopt suitable interventions themselves.

Interested in checking our work? Exploring other hypotheses? Data and code are available under the publications tab here: https://vinyoshy2.github.io/
Please contact vkoshy2@illinois.edu with any questions!
