Bad comments are a system failure
So why can’t you fix them like any other bug?
Internet comments are awful. Recently, sites like Popular Science, Bloomberg Business, Reuters, Mic, The Week, re/code, The Verge, and now The Daily Dot have been giving up on hosting local comments altogether. They blame trolls, spambots, and the shift in engagement to platforms like Facebook, Twitter, and Reddit, and you can almost hear the sighs of relief in the articles announcing these decisions.
This “What can you do? People are awful, amirite?” attitude toward comment sections is fatalistic and misguided. If you don’t want comments on your website, that’s fine: don’t have them. But don’t act like comments are some intractable problem that can’t realistically be addressed by mortals. They’re not. There are only a few reasons most internet comment sections are terrible, and there are real-world solutions to each of them. Be honest: you could fix this, but your priorities are elsewhere.
MetaFilter, a site I helped run for a decade, has maintained a community based on conversation for over fifteen years. It’s nothing but comments. It’s mostly not awful. The Daily Dot’s article claims that “no one has quite figured out how to thread that needle” between having a vibrant online community and supporting all voices, yet goes on to say that “commenting systems take thoughtful moderation and constant development,” strongly implying they’ve decided to have neither.
MetaFilter is a community of Internet People, people who spend a lot of time online. Everyone who spends a lot of time online is online for a reason. I am online a lot because I live in a rural area, keep late hours, and want people to socialize with when my town is asleep. And I like making jokes with other nerds who understand and appreciate them. Some people work swing shifts or are otherwise time-shifted. Some are expatriates living where no one speaks their language, are caring for family members at home, only like to interact when they can multitask, have a disability or social anxiety that makes online communication a better option, or are just better at communicating through text than face to face. Understanding your community of heavy online users is part of learning how to manage them and help them be their best selves.
No Reset Button
As to why online conversations go badly: they’re often full of specific kinds of people. Very smart people; very anxious people; very frustrated people; very verbose people; people with high IQs and low empathy, or sometimes the reverse. And there’s no reset button on their conversations. Even your corner bar closes every night to kick the drunks out and mop up. The assholes, the unruly, and the people publicly self-medicating their social problems with alcohol all have to go somewhere else for a while. This doesn’t happen in most comment sections. What starts as a small bit of grar can turn into a huge raging shitfest if left to fester unattended for a few days.
Threads that close, moderators who redirect entrenched disagreements, timeouts for users who can’t get with the program: all of these can help a community reset and get back on track. These are time-tested strategies that work, but they require human attention and are difficult to automate. That means resources, usually money.
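To make the mechanics concrete, here is a minimal sketch in Python of what a reset button could look like: a posting window that closes threads on a schedule, and member timeouts that expire on their own. Every name in it (POSTING_WINDOW, Member, Thread, try_post) is hypothetical, not drawn from MetaFilter or any other real system.

```python
# A sketch of "reset button" mechanics: threads that close on a
# schedule and members who can be given a cooling-off timeout.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical policy knob: how long a thread stays open for comments.
POSTING_WINDOW = timedelta(days=30)

@dataclass
class Member:
    name: str
    timeout_until: datetime | None = None  # set by a moderator; expires on its own

    def can_post(self, now: datetime) -> bool:
        return self.timeout_until is None or now >= self.timeout_until

@dataclass
class Thread:
    opened_at: datetime
    closed_by_mod: bool = False  # a moderator can also close a thread early

    def is_open(self, now: datetime) -> bool:
        return not self.closed_by_mod and now < self.opened_at + POSTING_WINDOW

def try_post(member: Member, thread: Thread) -> bool:
    """Accept a comment only if the thread is open and the member isn't in timeout."""
    now = datetime.now(timezone.utc)
    return thread.is_open(now) and member.can_post(now)
```

The point of the sketch is that the automatable part is trivial; deciding who gets a timeout and when a thread needs closing early is the part that takes paid human attention.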
Context Collapse
Conflicting conversational contexts lead to a constant restatement of terms and values, and endless nitpicking about meaning versus use, in that “I didn’t call you an asshole, I just said that assholes talk like that…” way. For years this has been discussed in more academic circles as “context collapse.” You have an identity and a set of ideas about the world that exists and is understood in one social context. You want to bring it to another place without having to do a five-minute introduction about who you are and what you value every time you say anything. Other people don’t share the same preset understandings and may read more into what you are saying than you think you put there. Your jokes fall flat, or cause offense. Conversation devolves into side discussions and arguments about first principles and word definitions. People start citing the dictionary and Wikipedia and angrily talking past each other.
The community has to make decisions about its values. Are “101” discussions like What Is Feminism or What Is Racism tolerated, encouraged, or outright disallowed? Is your community a safe space with mechanisms like trigger warnings and spoiler alerts, or not? Are those expectations explicit, and enforced by someone who is contactable and respected? Some initial work at creating practical, enforceable ground rules can keep every contentious discussion from turning into a first-principles slugfest.
The Lie of the Self-Moderating Community
This is the dream. Build it and they will come. They will not destroy it because it’s theirs, it’s the internet and it’s super democratic, right? But why should this work online if it doesn’t work offline? The alleged democratic nature of the online world is even more of a myth than it is in the offline world. Online, someone pays the website bills and has the passwords to the back end, someone registered the domain, someone makes money off the site, someone has the admin tools. Those people have more power than the other people in the community, including the power to decide to do nothing with those powers. The decisions they make set the tone of a community more than anything else: more than the user interface, more than the slick design, more than the user population itself.
Non-admin users can certainly build up social capital and power over time. They can become trusted users that other community members look up to and emulate. They can become power users who flag problematic content and communicate about site issues with an admin team. However, they can’t make site-wide decisions and set policy without having the keys to the store; they can’t speak for the site owners, or shouldn’t. Giving volunteer users admin-like powers without compensating them somehow is potentially exploitative for any site that makes money.
Moderating is about sharing as much of the top-down decision-making as possible while keeping the community from eating itself. Off-the-shelf moderation technology is rarely up to the job. Custom tools built for a community’s specific needs encourage a feeling of ownership, of buy-in, of specialness. Staff who run the site should be compensated decently to do that, and they should be given the right tools to be effective, tools that can grow as the community grows.
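As a sketch of what one such custom tool might be, here is a minimal flag queue in the same spirit as the flagging workflow described above: members flag comments, and comments that gather enough distinct flags surface for a moderator. FlagQueue and FLAG_THRESHOLD are hypothetical names, not any real site’s implementation.

```python
# A sketch of a flag queue: members flag comments, flags accumulate,
# and a moderator works through whatever crosses a threshold.
from collections import defaultdict

# Hypothetical policy knob: distinct flaggers needed to surface a comment.
FLAG_THRESHOLD = 3

class FlagQueue:
    def __init__(self) -> None:
        self._flags: dict[int, set[str]] = defaultdict(set)  # comment_id -> flaggers

    def flag(self, comment_id: int, flagger: str) -> None:
        """Record a flag; a member can flag a given comment only once."""
        self._flags[comment_id].add(flagger)

    def needs_review(self) -> list[int]:
        """Comments with enough distinct flags to warrant moderator attention."""
        return [cid for cid, flaggers in self._flags.items()
                if len(flaggers) >= FLAG_THRESHOLD]

    def resolve(self, comment_id: int) -> None:
        """A moderator has handled this comment; clear its flags."""
        self._flags.pop(comment_id, None)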
Too often, moderators are seen as disposable front-line interference runners with the unenviable job of communicating policies and decisions they had no hand in making. Meanwhile, developers and site owners design features for the bottom line and for the community they wish they had, not the community they do have. The language of these policies and decisions is always about making the site better for the community, when really it’s about making it better for the investors, the owners, or the advertisers.
Money paid to website writers is often low, but writers can supposedly trade on the exposure to land more lucrative writing jobs. The same is not true for moderators: if they do their job well, their work is almost invisible, and getting paid for an invisible product is not a feature of the gig economy. It makes sense that with finite financial and human resources, money goes further when spent on visible things that help you make more money. Human-mediated interactions (support, moderation, abuse investigation) don’t scale and don’t make money. So we see sites cutting back on those things, but instead of saying “these things don’t make us money,” they say “these things are very hard to do.”
Creating desirable communities is often a matter of a few more people being given real decision-making power. They should also have secure positions from which to communicate those decisions as they interact with their communities. When it’s done right, good moderation is knowing your commenters well enough that you can head off problems before they turn into multi-day public disasters.
Allowing your community members to be harassed and stalked and driven off is unconscionable. Allowing your comments section to turn into the worst parts of the internet is bad for business, or should be. Abdicating the responsibility of good community management makes all of us Internet People look bad, and it is a completely avoidable failure mode that site owners should work harder to get right.
Also inspired by @anildash and his article “If your website’s full of assholes, it’s your fault.”