It looks like Reddit’s solution to their trolling problem is going to be all stick and no carrot. “More powerful tools for mods” are on their way, meant to back up some new language in the terms of service. If you’re wondering how this is different from every other failed attempt to rein in the internet of hate, I’m right there with you. Moderation isn’t the right shovel to move this mountain of bullshit.
Notice I’m not arguing “Freedom of Speech” blah blah blah. I’m all for censoring racist, sexist, hateful speech. The marketplace of ideas doesn’t benefit from it. The electorate isn’t improved by it. They’re just toxic memes.
No, my objection is that top-down censorship isn’t an effective means of policing a content community. It’s slow to take effect, inconsistently applied, and can actually increase unwanted behavior in the short term. (But don’t take my word for it.)
If we really want more open, welcoming, and pro-social online communities, we have to understand the psychology of antisocial behavior.
Social psychologist Bernard Guerin spent a decade studying the underlying causes of antisocial behavior in groups. It all boils down to an equation that should look real familiar to anyone who uses the internet…
anonymity + group identity = antisocial behavior
In his experiments, group identification acted as an amplifier for individual behavior. When put into groups, people tended to act more antisocially than anonymous individuals acting alone. The reason has a lot to do with in-group favoritism, our very tribal tendency to treat “our people” better than “those people.”
In the 1970s, social psychologists found they could induce this dickitry with basically nothing at all. People sorted into groups based on eye color would do it. So would people put into groups based on randomly assigned, and totally meaningless, test scores. Tribalism seems to be hardwired into our brains.
The internet makes things worse by putting a nice, faceless buffer between ourselves and other people. We may be posting in public, but it’s all too easy to imagine we’re just talking to our peers. Our confidantes. Our fellow hate-mongering shitbags.
Anyway, the combination of the two is toxic, but there’s good news. Group identification isn’t inherently bad. When combined with accountability to members of the out-group, it amplifies pro-social behavior, too.
In one experiment, participants allocated “money” (meaningless tokens) to other participants. They never met these other participants and had no reason to believe that biased allocations would be reciprocated (because they never received tokens as part of the experiment). Yet, even when their group assignments were arbitrary, people still allocated more tokens to their in-group members.
They shorted people for essentially no reason. Hell is other people.
However, simply telling participants that their decisions would be recorded and reviewed by members of the out-group all but eliminated biased behavior. Sometimes, it really is that easy.
The Player Behavior team at League of Legends has been doing some really groundbreaking work with player tribunals and automated language analysis that has dramatically reduced racist, sexist, and homophobic content in their games. Theirs would be a great example for Reddit to follow.
But what if empathy was baked right into the way we use social software?
What if, when you visited a subreddit, the first thing you saw was a visualization of the words that most characterize its content? In Dataclysm, Christian Rudder — co-founder of OkCupid — performs just this kind of analysis on thousands of dating profiles. (The results are interesting. For example, liking MST3K seems to be prototypically white and male.)
A feature like this would hold contributors publicly accountable for their language. Even if that wasn’t enough to moderate their behavior, it would still help visitors assess a subreddit’s character better than an “NSFW” warning.
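Rudder’s actual methods are more sophisticated, but the core idea — surface the words a community uses far more often than everyone else — can be sketched with a simple smoothed frequency ratio. This is a minimal illustration, not a proposal for Reddit’s implementation; all names here are hypothetical:

```python
from collections import Counter

def characteristic_words(community_text, baseline_text, top_n=5, min_count=2):
    """Rank words by how much more often they appear in a community's
    comments than in a baseline corpus (smoothed frequency ratio).
    A real system would tokenize properly and strip stopwords."""
    comm = Counter(community_text.lower().split())
    base = Counter(baseline_text.lower().split())
    comm_total = sum(comm.values())
    base_total = sum(base.values())
    scores = {
        # +1 smoothing so words absent from the baseline don't divide by zero
        word: (count / comm_total) / ((base[word] + 1) / (base_total + 1))
        for word, count in comm.items() if count >= min_count
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Feed it a subreddit’s recent comments against a sitewide sample, render the top words as a word cloud on the subreddit’s front door, and the community’s character is on public display before anyone clicks “join.”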
What if members of unassociated subreddits were periodically invited to audit each other? (Two subreddits’ degree of association could be calculated with a cluster analysis. People do it all the time for Twitter.) They wouldn’t be asked to crash each other’s parties, but read through a random sample of comments and provide qualitative feedback. Just knowing that this is a thing that happens would create an environment of out-group accountability.
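Full cluster analysis aside, one cheap building block for “unassociated” is overlap between commenter populations: communities that share almost no users are good audit pairs. A hypothetical sketch, assuming we can list each subreddit’s recent commenters:

```python
def commenter_overlap(users_a, users_b):
    """Jaccard similarity of two communities' commenter sets:
    |A ∩ B| / |A ∪ B|. Near zero means the groups barely interact."""
    a, b = set(users_a), set(users_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def least_associated(target_users, candidates):
    """Pick the candidate community whose commenters overlap least
    with `target_users`. `candidates` maps community name -> usernames."""
    return min(candidates, key=lambda name: commenter_overlap(target_users, candidates[name]))
```

Pairing a subreddit with its least-associated peer for periodic audits is exactly the out-group accountability Guerin’s work points to.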
My Advice to Reddit
It’s not that online communities are inherently unruly. It’s that we design them in ways that promote bad behavior by combining anonymity with group identification.
If we want our communities to be more open, welcoming, and productive places, we need to start…
- Tracking unwanted behaviors.
- Making those metrics publicly visible.
- Providing victims with better channels of communication.
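The first two steps could be as simple as a public ledger of flagged incidents per community. A minimal sketch (all names hypothetical, not any existing Reddit feature):

```python
from collections import defaultdict

class BehaviorLedger:
    """Tally moderator-flagged behaviors per community and expose
    them as publicly visible metrics."""

    def __init__(self):
        self._counts = defaultdict(lambda: defaultdict(int))

    def record(self, community, behavior):
        """Track one flagged incident (e.g. 'harassment', 'slur')."""
        self._counts[community][behavior] += 1

    def public_report(self, community):
        """The numbers anyone can see on the community's front page."""
        return dict(self._counts[community])
```

Making those counts visible to visitors, not just admins, is what turns a moderation log into out-group accountability.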
Top-down moderation isn’t the answer. Neither is eliminating anonymity. If we can make users more accountable to members of their out-groups, our hardwired tribalism could actually drive more pro-social behavior than we’ve ever seen before.
We tried not feeding the trolls. We tried slapping them across the face. Let’s try tapping them on the shoulder and reminding them they’re human.
(For more details and citations on the experiments mentioned above, see my article on Social Behavior Design.)