The Importance of Volunteer Community Moderators

Joseph Seering
5 min read · Dec 12, 2018


I recently published “Moderator Engagement and Community Development in the Age of Algorithms” in New Media & Society (author’s copy available here) with my co-authors Tony Wang, Jina Yoon, and Geoff Kaufman. This project took nearly two and a half years from start to finish and is very close to my academic heart, so I’m writing this post to provide some context for the work. In brief, we present the results of 56 interviews with volunteer community moderators across three platforms about how community moderation works.

There are three major points that we wanted to make with this article:

1. The model of commercial content moderation used by companies like Facebook and Twitter is not the only way moderation is being done, and it is not the only way moderation can be done in the future.

Much recent academic writing and popular press coverage has focused on the power that platforms have over speech and on the strategies they have developed for enforcing rules. This is important to discuss; as noted by Gillespie in his Custodians of the Internet and Klonick in her “The New Governors”, a very small number of people currently have the power to impact speech on a massive scale, and they are not required to explain how they make those choices. “Effective” top-down moderation of a massive platform like Twitter also happens to be an impossible task at present, and it doesn’t seem to be getting any easier.

Despite this public focus, communities on a number of platforms like Reddit, Twitch, Discord, and even Facebook Groups have quietly been moderating themselves, largely without much intervention from above. Using some back-of-the-envelope math, we estimate that there are somewhere between 1 and 2 billion users in (mostly) self-governed communities on the major Western platforms, including those listed above, and many more users are impacted by what these communities produce (e.g., Wikipedia). Rather than being edged out by “big governance”, these spaces are actually growing very quickly.

2. Moderator engagement is extremely important to the development of communities.

The core of this paper is a model, built from our interviews, of how content moderation happens in these moderator-driven online communities. We divided the model into three processes in which moderators (and sometimes non-moderator community members) engage:

  • Being and becoming a moderator
  • Moderation tasks, actions, and responses
  • Rules and community development

The bulk of the paper is dedicated to explaining these three processes.

One major takeaway is that discussions between moderators (and sometimes community members) about what types of behavior are acceptable drive community evolution. As communities grow, users with new values, backgrounds, and expectations arrive and often behave in ways that moderators didn’t anticipate. Figuring out how to deal with this growing diversity of behaviors is often what makes or breaks a community. It’s also a clear indicator of how far along a community is in its development: if moderators have stopped debating what’s acceptable and what’s not, the community has probably begun to stagnate.

3. Moderators don’t always want algorithmic help.

We’ve all heard variations of the idea that AI will replace humans in moderation tasks in the near future, or that humans will only be involved until algorithms get good enough. Beyond just the objection to this raised above — that without human discussion about decisions, communities cease to develop — we found that many moderators actually didn’t want more tools to automate all of their tasks. Most moderators were okay with automated tools that caught the most obvious content (e.g. malicious links, extremely overt racism), but they wanted to reserve the difficult decisions for themselves. It’s important to them that they have the power to make these context-specific decisions that commercial content moderation struggles with. When bots, filters, and algorithms sweep content under the rug and away from human community moderators’ eyes, these moderators lose the opportunity to make tough decisions in situations that aren’t so black and white.
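To make that division of labor concrete, here is a minimal sketch (in Python, not from the paper or any real platform’s API) of the split many moderators described: automation removes only the unambiguous cases, and everything context-dependent is surfaced in a queue for human moderators. The domain list, patterns, and labels below are all illustrative assumptions.

```python
# Hypothetical triage bot: only the clearest violations are auto-removed;
# everything else is deferred to human moderators. The domain list, patterns,
# and labels are made-up placeholders, not from the paper.
import re

BLOCKED_DOMAINS = {"malware.example", "phish.example"}                  # known-bad link hosts
OVERT_ABUSE = [re.compile(r"\b(?:overt_slur_1|overt_slur_2)\b", re.I)]  # unambiguous terms only

def triage(message: str) -> str:
    """Return 'remove' for clear-cut violations; otherwise defer to a human queue."""
    if any(domain in message for domain in BLOCKED_DOMAINS):
        return "remove"                      # obvious malicious link
    if any(pattern.search(message) for pattern in OVERT_ABUSE):
        return "remove"                      # extremely overt abuse
    return "queue_for_human_review"          # context-dependent judgment stays with moderators

if __name__ == "__main__":
    print(triage("free skins at http://phish.example/claim"))  # -> remove
    print(triage("that call was pretty questionable, mods"))   # -> queue_for_human_review
```

The point of the sketch is the default branch: the bot surfaces content rather than deciding about it, which mirrors what the moderators we interviewed said they wanted from their tools.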

I debated (perhaps a bit too much) with my co-authors and reviewers about the title of this paper, but in the end we decided to keep the “in the Age of Algorithms” piece. It’s intended to be a bit provocative; given the public conversation, one might expect this paper to conclude that algorithmic moderation has stifled users’ ability to self-govern, or that algorithms are replacing human self-governance in these spaces. The reality is that the vast majority of the moderators we interviewed believe that platform admins have no idea their community exists and have no interest in interfering as long as the community doesn’t cause too much trouble. Aside from flexible tools given to moderators (or ones they build themselves), these spaces are mostly untouched by algorithmic governance.

Broadly, the point we hoped to make is that it’s important to explore, both through research and public discourse, how users moderate and self-govern. This is an integral part of the online moderation ecosystem that hasn’t yet received the attention it deserves, and we hope to provide a guide and some new insights to help spur this conversation.

***

There are a few issues we would have liked to discuss in more depth but couldn’t fit within the word count. First among these are the labor issues involved in moderation, both for commercial content moderators and for volunteer moderators. Sarah Roberts has written and spoken extensively about the former, and J. Nathan Matias and Kat Lo about the latter. With regard to volunteer community moderators, the major question is what obligation platforms have to them; platforms essentially profit from the work of volunteer moderators but do not explicitly compensate them. This deserves further discussion, but as a starting point we note in the paper that while most community moderators are aware of this issue, and some wish they were at least recognized for their work, all felt that they derived personal value from helping communities they care about grow and develop.

The second issue that deserves more discussion is the interplay between the development of platform culture and how communities self-moderate. This topic deserves papers (or books) of its own beyond what we cover here, but fortunately there has already been good work in this space. Adrienne Massanari’s book on Reddit is noteworthy in this regard.

