Quick, Community-Specific Learning: How Distinctive Toxicity Norms are Maintained in Political Subreddits
--
Original paper published at ICWSM 2020
Main findings:
Studying political communities on Reddit with distinctive and stable toxicity norms, we find that most of the norm conformity among newcomers occurs through pre-entry learning: newcomers adjust to a community’s toxicity norms as early as their very first comment. In other words, newcomers on average seem to know enough about a community’s toxicity norms before they engage with it, and they adjust their behavior accordingly. This adjustment is not permanent, however: newcomers revert to their usual behavior in the other subreddits they participate in.
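To make the pre-entry learning contrast concrete, here is a minimal sketch of the kind of comparison involved. This is a hypothetical illustration, not the paper’s actual analysis code; it assumes a pandas DataFrame of comments already scored for toxicity, with illustrative column names (user, subreddit, created_utc, toxicity):

```python
# Hypothetical sketch: per-user pre-entry adjustment in one community.
# Assumes `comments` has columns: user, subreddit, created_utc, toxicity.
import pandas as pd

def pre_entry_adjustment(comments: pd.DataFrame, community: str) -> pd.Series:
    """First-comment toxicity in `community` minus the same user's
    average toxicity everywhere else."""
    elsewhere = comments[comments["subreddit"] != community]
    baseline = elsewhere.groupby("user")["toxicity"].mean()

    in_comm = comments[comments["subreddit"] == community].sort_values("created_utc")
    first_comment = in_comm.groupby("user")["toxicity"].first()

    # Index alignment drops users who never commented elsewhere.
    return (first_comment - baseline).dropna()
```

If, for the average newcomer, the first comment already sits near the community’s typical toxicity rather than near their own baseline, that is the signature of pre-entry learning. A real analysis would restrict to genuine newcomers and control for time, which this toy contrast ignores.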
Interesting because:
People have always adjusted their behavior to the prevailing norms: how you hold a conversation in an office, even with the same set of people, is going to be very different from your conversations at a bar or a company retreat. The surprising finding in this research is just how quickly users, on average, appear to adjust to toxicity norms: the largest adjustment happens as they make their very first comment in the community.
Implications:
Since much of the norm conformity among newcomers occurs before they join the community, we recommend that communities invest in making their norms more visible to prospective newcomers: posting explicit guidelines, highlighting exemplars, and providing a public trace of moderator actions.
Considering that users adjust to toxicity norms quickly and don’t carry the toxicity norms of one community into another, this presents an interesting picture of average user behavior on Reddit: on average, a user’s toxic behavior in one community is not indicative of their behavior outside that community. This supports the finding of Chandrasekharan et al. (2017) that after Reddit banned hate subreddits in 2015, members of those subreddits did not engage in similar hate speech in the other subreddits they subsequently participated in.
Details:
- Identified political subreddits on Reddit.
- Identified a subset of political communities with stable and distinct toxicity levels, using the Perspective API toxicity classifier to identify toxic comments (see the sketch after this list).
- Quantified the strength of different norm-conformity processes: self-selection (some effect), pre-entry learning (large effect), selective retention (small effect), post-entry learning (no real effect).
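For reference, scoring a single comment with the Perspective API looks roughly like the sketch below. The key placeholder and the 0.5 cutoff are illustrative assumptions, not the settings used in the paper:

```python
# Minimal sketch: scoring one comment with the Perspective API.
# YOUR_API_KEY and the 0.5 cutoff are placeholders, not the paper's settings.
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY probability (0 to 1) for one comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

score = toxicity_score("example comment text", "YOUR_API_KEY")
print("toxic" if score > 0.5 else "non-toxic", round(score, 3))
```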
How I began this work:
When I started this project, I was generally curious about this hypothetical scenario:
What if we randomly assigned users to “nicer” political subreddits: would that get them to have better political discussions in other subreddits? This slowly morphed into a question about norms, conformity, and spillover effects. Based on this work, the answer to my original question is no: users don’t appear to improve how they hold political discussions simply by participating in a nicer subreddit; rather, they quickly and temporarily adjust to the community’s norms. (The idea is also bad in many other ways: more work for the moderators of the nicer subreddits, a worse experience for their members, and so on.)
Why do you think newcomers conform so quickly to these norms?
One possible explanation for high pre-entry learning is that toxicity norms are relatively easy to grasp by observation. Users can probably tell how much a community tolerates toxicity from a glance at the community’s rules and at past comments made by existing users. Unlike more complicated norms, such as the norm of “suspended disbelief” in the r/NoSleep subreddit, which “requires all commenters to act as if stories are factual” (Kiene, Monroy-Hernandez, and Hill 2016), toxicity norms likely take less time and effort to absorb, leading to high pre-entry learning.
Another reasonable explanation is the role of lurking in learning a community’s norms (Preece, Nonnecke, and Andrews 2004). Users may lurk and learn the norms over a long period before actually joining the conversation. So while it appears that they match the community’s norms immediately, this adjustment could be the result of extended lurking.
How does this result square with some subreddit moderators choosing to pre-emptively ban users who participate in other subreddits?
The short answer is that I don’t know. Here are my thoughts: we use the Perspective toxicity classifier, which detects only explicitly toxic comments. If the issue is that many users from certain communities brigade through behavior such as concern trolling, which a toxicity classifier won’t flag, then the results from this study don’t apply. Moderation decisions are complex and not just about explicit toxicity.
One strand of criticism is that such communities are not open to opposing opinions and are hence a “circlejerk”. I don’t believe that homogeneous communities are inherently bad; such interactions have important functions in a deliberative democracy, as this excerpt from Mansbridge et al. (2012) argues:
“Activist interactions in social movement enclaves are often highly partisan, closed to opposing ideas, and disrespectful of opponents. Yet the intensity of interaction and even the exclusion of opposing ideas in such enclaves create the fertile, protected hothouses sometimes necessary to generate counter-hegemonic ideas. These ideas then may play powerful roles in the broader deliberative system, substantively improving an eventual democratic decision.”
Meta comments:
This paper was rejected twice at other conferences and went through a round of revision before being accepted at ICWSM 2020. I think the reviewer comments were super useful in shaping the paper. While the main analysis remained pretty much the same, the major changes since the very first version were related to framing, connecting to previous work, validating the Perspective API on Reddit, and handling moderator-removed comments. This final version of the paper is much more well-rounded than the previous ones, and I’m thankful for the solid feedback I received.