Emotional Labor and Diversity in Community Management

Jeremy Preacher
16 min read · Mar 21, 2016


At GDC this year, I gave a talk at the Community Management Summit on emotional labor and diversity. It went well, and I wanted to put it out there for those folks who couldn’t afford to go, or who are community managers but not in the games field. Warning: since this was a GDC talk, I use game metaphors to explain some of the concepts — I think they’re reasonably transparent, but I’ve been playing video games since I was five, so I have no perspective on the matter. This is a lightly rewritten and expanded version of the talk — the thing itself should be available in the GDC Vault when they open that up, and if I can make it widely available, I will. (Very late edit: you can find the slides here!)

A bit about me — I’ve been doing community management, in one form or another, since 2001. I’ve worked for folk-rock bands, game developers and publishers, and for the last five years I’ve been at a website called Metafilter, which some of you may have heard of.

For some context, and because it is my most recent (and because it’s not in the game industry, my longest) experience, let me tell you a bit about Metafilter. It is a general-topics discussion site with around 15,000 active monthly users and over 200,000 total signups. We have a community blog (Metafilter proper), a Q+A site (AskMe), and a newish media discussion section (Fanfare), although, ironically, we don’t have a video game discussion section — that’s handled at an offshoot site, MeFightClub, run by a Metafilter member and moderated in a similar spirit. We have 24/7 moderation, with four full-time and two part-time moderators. While Metafilter is a pure community site — there’s no product it’s built around, nothing we sell — the structure and demographics parallel game communities pretty closely. It’s just *all* community management.

Since this is a diversity talk, I need to offer a disclaimer: Metafilter is not a paragon of diversity. We don’t have a ton of demographic information, as we don’t require anyone to give it, but we have a couple of surveys done by a PhD candidate (PDF link) in 2010 and 2014, which give us some data. It’s clear that Metafilter is mostly American, Caucasian, and middle to upper-middle class, with a strong interest in tech — very similar to a typical video game community. We do have a pretty solid LGBT population — all four of those letters, plus a few more from the extended acronym. So why am I giving a talk on diversity? It’s a good question!

One of the advantages we have over most game communities, especially most corporate-run ones, is that sex, politics, and religion — and race, social theory, relationships, all of the really sticky subjects — are not just allowed, they’re our bread and butter. So the dynamics that affect different populations in our community don’t have to remain subtext — they’re text. We discuss them explicitly with our members, and we change our moderation practices based on those conversations. In 2010, our survey data put us at 63% male, 35% female, 2% declined to state. In 2014, we were at 51% male, 43% female, 4% other (a new category), and 2% declined to state. This is a pretty big change, and I’m willing to credit it to some concrete lessons we’ve learned about how, within a population that is majority… majority, to hear more voices than just the loudest and most confident, and how to encourage all our members to feel comfortable speaking up.

The framework for the techniques I’m going to discuss today is the concept of emotional labor. It’s made a few laps around the internet recently, notably at the Toast. Emotional labor in general is the work done to keep relationships and groups running smoothly — keeping track of other people’s moods, preferences, needs, and schedules. If that sounds familiar, it’s because it’s at least half of successful community management.

When you’re looking at your community, think of emotional labor as a stacking debuff. It’s not a bad thing, especially for the people benefiting from it, but it is work, and doing it has an energy cost. There are some debuffs that everyone gets — being new, for example. You have to put out a bunch of effort to learn the customs, the jargon, the personalities. Or your community in general may push back against certain kinds of interactions — role-playing, or noob questions, or class-balance complaints. Those debuffs will land on anyone who wants to participate that way. And then there are some that affect certain kinds of people. I’m sure you’ve all encountered communities where women had to deal with sexualized comments or put-downs, places where anti-gay slurs were typical, that sort of thing.

Today I’m going to spend a lot more time talking about the former kind of debuff than the latter. Because the thing is, these debuffs stack. And one of the things we’ve learned at Metafilter is that targeting the visible, specific burdens isn’t enough. (And they’re the easy part. You don’t need me to tell you to ban “gay” as a synonym for “stupid”.) What makes a bigger difference is managing the subtle, large-scale patterns that turn the specific challenges that everyone with a marginalized identity learns to deal with into an insurmountable barrier to participation in your community.

The thing with emotional labor is that it’s generally invisible — both to the people benefiting from the work, and to the people doing it. People who are good at it tend to do it unconsciously — it’s one of the things we’re talking about when we say a community manager has “good instincts”. What I’m going to do today is make that unconscious process visible, and show you a couple of examples of how you can improve your community by looking at situations in terms of emotional labor — who’s doing it, who’s really not doing it, and who should be doing it. This will hopefully give you a new strategy for dealing with sticky situations, and a new angle from which to examine your larger policies. The result, I hope, will be a friendlier community for everyone, not just the folks who play on easy mode.

So! On to the examples.

First, let’s talk about trolls. The classic definition of the troll is someone who says deliberately outrageous things to provoke a response, right? That kind of troll we can all handle. They’re annoying, but usually not that subtle. The trickier case is the person who’s always saying outrageous things *sincerely*. And they’re always baffled when they get a negative response. Most everyone can think of one or two of these.

This person tends to get into a cycle of hostility. They say their piece, get a negative response, which makes them react defensively, which gets an even worse response, and so on until they storm off, are banned, or, worst of all, just repeat the cycle. So what do you do with them? There are a couple of possibilities. One solution, other than banning (which I trust you all know how to do), is usually referred to as “don’t feed the troll”. It’s where you come down on the people reacting to your problem child. This might be framed as “avoiding negativity” or “stopping a pile-on”. The end goal is to have the problem child say their outrageous things with zero response. This… works, sort of.

The problem with not feeding the troll is that, first of all, it’s a lot harder to control fifteen thousand users than one. That’s just logistics. But when you look at it in terms of emotional labor, the disadvantage becomes clearer. What you’re asking is that everyone else take a debuff whenever your Problem Child opens his mouth. They have to recognize the bait, remember that this is “that guy”, and control their reaction. At the same time, you’re letting the problem child operate with only his natural disadvantages. This isn’t fair in a general sense, but to hell with fairness. What it does, in the end, is increase the burden for the good actors in your community. Which is obviously not ideal. So instead, let’s talk about solution #2, the emotional-labor-savvy way to handle it.

First we’ll reframe the problem: the real issue is not Problem Child’s opinions — he can have whatever opinions he wants. The issue is that he’s doing zero emotional labor — he’s not thinking about his audience or his effect on people at all. (Possibly, he’s just really bad at modeling other people’s responses — the outcome is the same whether he lacks the will or lacks the skill.) But to be a good community member, he *needs* to consider his audience. He needs to turn off the debuff aura. So what we can do is give him a framework he can use to succeed, both on our terms and on his.

First, have him identify — or identify for him — his audience. Does he actually want a dialog with your community? Or does he just want to register his thoughts with the devs? If the latter, point him at a private channel — a suggestion box, an email, Twitter DMs, anywhere he won’t get into debates. In my experience, this suffices for about a quarter of the cases. (We often have people who cannot stay out of arguments on Metafilter who are perfectly behaved users on AskMe, which prohibits debate between answerers.)

Second, provide guidelines that eliminate the worst of the tone problems (the “don’t be an asshole” rule works great until you encounter someone who doesn’t know how to pull that off). Yes, this takes resources on your part, and you may not have them. (In which case, see above re: private channel or banhammer.) But if you have some general tools, you can hand them out as a second-warning step and they’ll help in at least some cases. Things to try:

  • Lists of problem or flat-out forbidden words: this goes beyond slurs to things like “always” or “never”, your local equivalent of “noob” or other disparagement, and the categories the Problem Child likes to lump people into. The ones we see a lot at Metafilter aren’t gaming-related; they’re things like “hipster”, “SJW”, and “men’s rights activist,” all of which make people feel dismissed or mischaracterized, two debuffs that tend to prevent people from responding civilly. (There’s a sketch of what a first-pass word filter might look like right after this list.)
  • Another thing I’ve had good luck with is “I” statements: this sounds hippy-dippy, but “I want this change” is a much less arguable statement than “The game needs this change”.
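To make the word-list idea concrete, here is a minimal sketch of what a first-pass filter might look like. This is hypothetical code, not anything Metafilter actually runs; the term list and the review step are stand-ins for whatever your own tooling uses.

```python
import re

# Hypothetical problem-term list. Yours should come from watching your
# own community's recurring flashpoints, not from this example.
PROBLEM_TERMS = ["hipster", "SJW", "always", "never", "noob"]

# One case-insensitive pattern with word boundaries, so "never" matches
# but "nevertheless" does not.
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in PROBLEM_TERMS) + r")\b",
    re.IGNORECASE,
)

def problem_terms_in(post_text):
    """Return the problem terms found in a post, for moderator review."""
    return sorted({match.lower() for match in PATTERN.findall(post_text)})

# Triage, not auto-ban: a human still decides whether the usage
# actually dismisses or mischaracterizes anyone.
post = "Typical SJW take. The devs NEVER listen to real players."
hits = problem_terms_in(post)
if hits:
    print("Flag for review, matched terms:", hits)  # ['never', 'sjw']
```

The reason this flags for review instead of blocking outright is that words like “always” are only a problem in context. The list buys your moderators attention; it doesn’t replace their judgment.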

A moment for a mini-rant: when you make a blanket statement about a broad group of people, the only way it will consistently go well is if the people you’re talking to aren’t members of that group, or they all agree with you completely. To continue the metaphor: you get nothing if the people you’re talking to aren’t members of the group, a “Right on!” buff that makes everyone feel more strongly tied to you and the group if they agree, or an “identity challenge” debuff if they’re members of the group but don’t agree. Years ago, I made a joking comment to a group of CMs about how we all drink too much. This is certainly true in some cases! But a few folks took it very badly indeed, which confused me at the time. Now, having seen that dynamic play out over and over and over again in the subsequent decade, I get it. When you make that kind of statement, some people are going to read it as “if you don’t agree with me, you must not be a real…” community manager, or whatever. Which is gonna start a fight. And the worst part is, it’s often a fight with someone who would agree 100% with a more limited version of your statement.

Point is, “I” statements. Use them. Make your community members use them. They work.

One more thing that works well: If you have two people who reinforce each other’s bad behavior, forbid them to interact. You have to enforce it on both sides, or you end up with ten times the tantrums you started with, but it’s a simple way to solve a recurring problem.
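Here’s a sketch of what enforcing that might look like in software, assuming a hypothetical forum backend where you can intercept replies. The detail that matters is symmetry: the ban has to cut both ways by construction, not by moderator vigilance.

```python
# Hypothetical interaction-ban store for a forum backend. Keeping each
# pair as a frozenset makes the ban symmetric automatically, so neither
# user can bait the other.
interaction_bans = set()

def ban_interaction(user_a, user_b):
    interaction_bans.add(frozenset((user_a, user_b)))

def may_reply(author, replying_to):
    return frozenset((author, replying_to)) not in interaction_bans

ban_interaction("problem_child_1", "problem_child_2")
assert not may_reply("problem_child_1", "problem_child_2")
assert not may_reply("problem_child_2", "problem_child_1")  # both directions
assert may_reply("problem_child_1", "helpful_regular")
```

If your platform can’t intercept replies, the same rule works as a human policy; the frozenset is just the “enforce it on both sides” requirement made mechanical.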

These are just examples, of course, and yours should be specific to the actual user and situation. They take resources, as I said, but if you do some of the analysis ahead of time and figure out the patterns common to your players, you can build some more or less standard responses that your team can use with a little customization. The end goal is for the instigator of fights to be responsible for them, so everyone else can participate without getting sucked into conversations you’d prefer didn’t exist.

The next example is a complex one: the tone argument. If you’re on the internet, you’ve probably at least heard it referenced. The basic idea is that someone makes a passionate statement, and the response they get is “You’re obviously upset, so I don’t need to take you seriously.”

A related angle is “that’s not a problem for *me*, so it’s not a problem”. They’re the same thing, really, although this one is subtler than a straight-up “U mad, bro?” Other variations include “Lern 2 play” or “play a real class,” or anything else that suggests that the complaint is invalid for reasons other than its merits. There are some differences between these in theory, but in practice they all amount to a debuff on the original poster, so I’m lumping them together. These patterns ensure that the people who are most invested often get the least respect in conversation. These debuffs deter new voices, suggestions, or feedback — fundamentally, all of these patterns reinforce the status quo, including who gets to talk and who gets listened to.

Because this is a very common set of patterns, and very much has diversity implications, I’m going to spend some time on it. One of the things I want to address first is using conversation spaces to generate user feedback data. Folks who largely rely on surveys, direct feedback, and in-game user data will be less affected, but it’s still worth thinking about, because in that case, *you* are often the conversation partner who can follow these patterns unconsciously. But I’m mostly talking about reading through your forums and generating lists of suggestions or priorities for the dev team. I’ve done this at every game company I’ve worked at, with different tools, and it can help you see trends free of the emotional content on your forums. But you have to be getting your data from a place that doesn’t bias the feedback in the first place. If your forums display any of these patterns, you’ll be getting misleading data, because it will primarily be from your core demographic, whatever that happens to be, and following only that feedback will tend to narrow that core further. And I think very few of you are tasked with making sure your userbase grows smaller over time. Let’s be honest, that’s a big part of industry conversations about diversity — we all want to be good people, but we also want to sell more boxes.
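As a concrete, heavily simplified illustration of that forums-to-feedback process, here is a toy sketch. The posts and topic keywords are invented and the matching is deliberately naive; the part worth keeping is that it ranks by distinct voices rather than raw volume, so a handful of loud posters can’t dominate the priority list.

```python
from collections import Counter

# Invented forum posts as (author, text) pairs. Real data would come
# from your forum's database or API.
posts = [
    ("alice", "Please fix the bot problem, it's ruining the economy."),
    ("bob", "Bots everywhere. Also class balance is a mess."),
    ("carol", "Class balance needs work before new content."),
    ("alice", "Bots again today. Third time this week."),
]

# Naive keyword buckets; real tooling would match far more carefully.
TOPICS = {
    "botting": ("bot", "bots"),
    "class balance": ("class balance",),
}

def topics_in(text):
    lowered = text.lower()
    return {topic for topic, keywords in TOPICS.items()
            if any(keyword in lowered for keyword in keywords)}

mentions = Counter()                          # raw mention volume
voices = {topic: set() for topic in TOPICS}   # distinct people per topic

for author, text in posts:
    for topic in topics_in(text):
        mentions[topic] += 1
        voices[topic].add(author)

for topic, people in sorted(voices.items(), key=lambda kv: -len(kv[1])):
    print(f"{topic}: {len(people)} users, {mentions[topic]} mentions")
```

Counting distinct voices is one cheap way to notice when all your feedback is coming from the same narrow slice of your userbase, which is exactly the bias problem described above.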

So, solutions. The obvious solution, which I won’t belabor, is “criticize the idea, not the person”. This includes remarks about apparent mood, investment, skill, knowledge, etc. Not complicated to implement, but you have to enforce it consistently. If you have a moderator whose judgement you trust reading most of your forums in realtime, then you’re good. But that’s a luxury a lot of us don’t have, and I’ve found that this is a hard concept to train your users to flag for your moderators. Metafilter relies on a user-driven moderation model — there aren’t enough of us to moderate proactively, so we spend a lot of effort encouraging our users to bring problems to our attention. This particular dynamic, however, tends to suck people into the debate before they realize they should probably just flag it. If you can pull that bit of user education off, I’ll come to your talk next year. If not, let’s talk about a simpler and more clear-cut solution.

That would be the idea of negative comments vs. negative space. Instead of leaving negative comments, which act as that stacking debuff, you train your users (via simple moderation) that if an idea or a comment doesn’t apply to them, or they think it has no value, they should leave negative space. They just move on. They skip it. That way a comment thread remains about the thesis, not about the virtue or lack thereof of the poster, or whether they represent everyone. (This of course leaves room for constructive criticism — which should always be welcome.) Remember — your users aren’t in a democracy, and they don’t all have to vote on everything.

The reverse of the tone argument is worth a brief investigation, too. That would be the “I’m justifiably pissed and therefore I can say whatever I want, however I want.” Y’all may not see this as much as I do — it comes up a *lot* in social-justice situations, where a marginalized person is venting, and claims that any criticism at all is inherently a “tone argument”.

Here’s the thing: we don’t, or shouldn’t, moderate on-topic content. An idea, a suggestion, a reported experience — within broad guidelines, they should all be fair game. We do, and must, moderate tone. How you make your statement is what we, as moderators, are looking at. Uncivil discourse is as big a problem for the community as reflexive dismissiveness. And public company assets are not a venue for venting. That doesn’t mean you can write these people off as bad actors, however tempting that may be. They’re usually shouting because they feel like they can’t be heard otherwise. Their debuff stack has gotten too big and it’s triggered their Enrage timer.

They might be reacting to something outside of your community, and there’s not much you can do there other than make sure they understand the rules of discourse. But what’s setting them off might be something that is your responsibility. Maybe they feel like if they talk in a normal tone, they get shouted down or ignored. That is our problem. Implementing some of the solutions we just talked about can prevent people from getting to that point. Or it might be they have a valid grievance that, for whatever reason, you can’t fix.

The first game I worked on was Lineage 2, back in ’05 maybe? And we had a botting problem. A bad botting problem, and one we couldn’t quickly fix. The team was negotiating with all the departments involved to come up with a solution, but it was slow. By the time I came on, the game was 18 months old, and the players had been complaining about this since day 1. They were cranky. And increasingly uninterested in being civil about it.

I couldn’t fix the problem. But my job wasn’t to fix it; it was to make the players feel heard, and to prevent the issue from souring the whole community. What we ended up doing — and I can’t take credit for the idea, it probably wasn’t mine — was giving them a dedicated space to complain in. This removed the “Somebody’s Yelling!” debuff from the rest of our forums, was a visible sign of our interest in the issue, and incidentally made the scale of the problem much easier to convey up the chain. The problem didn’t end until we started banning bots, but the dedicated space drastically reduced the totally justified but still unacceptable rage on the forums.

The point of all this is that it’s your job, when confronted by an angry user, to do the emotional labor to figure out where they’re coming from. The cause may not be one you can solve, in which case all you can do is communicate the expectations for participation in your community. But it may be a systemic problem in your community that prevents people from feeling heard, and that you’d be well served to fix.

A brief aside on the subject of dedicated complaint spaces, which I couldn’t quite fit into the original talk but I discussed briefly at the event by popular request. Metafilter’s moderation system relies on MetaTalk, an open section where a user can post a request, a complaint, or a suggestion about how we run Metafilter at any time. It is our #1 most valuable tool, and where we develop most of our insights and practices. We’ll publicly discuss any moderation decision up to the limits of user privacy, hash out design changes, talk about large-scale patterns and small-scale problems, at any time. It is a GIANT RESOURCE SUCK, don’t get me wrong — we lost a couple of staff a few years ago and we had to implement a queue so we didn’t have high-touch threads stacking up, but we still discuss anything privately and nearly anything publicly. This is not a solution that’s cost-effective if your team is already overstretched, but it’s a technique that has a lot of benefits.

One of the primary advantages is transparency — anyone can look up the whole history of our policy changes. Since our site is moderated on a human-judgment, case-by-case basis, this is very helpful — it’s not like we have a hard list of rules to point people at. Another is user investment — we can get people on board with a new policy without having to do much of anything other than participate in a discussion among our users. And a third is what I mentioned earlier — we get to talk openly about the power dynamics, demographic weirdnesses, and large-scale patterns in our community, and making them visible makes it possible for us to change them. I can’t tell you that something like this is right for your community — it requires a major staffing commitment. But it’s a concept that’s worth thinking about, and might be useful even in a limited implementation around times of change. Don’t be afraid to take criticism, listen to it, and change your mind in front of your community — they’ll respect you for it, and you’ll do a better job.

So, hopefully by this point you have an idea of what emotional labor is and how to use it to analyze situations. You may be asking “how does this bring different types of people into the community?” Here’s the thing — one of the privileges — and I use that word deliberately — of being the “default” type of person, whether that’s gender, race, class, or even playstyle — is that you have to do less emotional and mental work just to exist in a given space. Your speech habits go unremarked, your assumptions are unchallenged, and your experiences are given the most weight. That means everyone who is not the default has to do a whole bunch of work just to participate. When you think about your community in terms of the emotional labor it takes to join, to participate, and to be accepted, you can start to see where those burdens fall, and how you can mitigate them.

I said I wasn’t going to talk much about specific debuffs, mostly because I’m not an authority on any that don’t directly affect me. So ask different types of folks what their experience is like in your community. If you can’t find any of these people to ask, then it’s likely you have a broader problem, and should think about some of these large-scale patterns and solutions first.

And let me be clear — I’m not talking about marketing. Acquisition may or may not be part of your responsibilities, but I’m talking about retention, and more than that, participation. We’ve all got a long tail of lurkers — people who love your game, but don’t talk in chat, don’t engage with your social media, don’t post on your forums. What all of these strategies do, what thinking about the emotional labor cost of participation adds up to, is make space for your lurkers to join in. You already have female players. You have gay players, you have trans and nonbinary players. You have furries. It’s not enough to invite them to talk — you have to remove the obstacle course on the way to the microphone, too.

Thanks for reading, everyone! I hope this sparks some discussion and gives folks new ideas for managing their communities, gaming or otherwise.
