A proven innovation could benefit Facebook’s users, and its shareholders too.

Pete Forsyth
Wiki Strategies
10 min read · Sep 10, 2018


Concern about social media and the quality of news is running high, with many commentators focusing on bias and factual accuracy (often summarized as “fake news”). If efforts to regulate sites like Facebook are successful, they could affect the bottom line; so it would behoove Facebook to regulate itself, if possible, in any way that might stave off external action.

Facebook has tried many things, but it has ignored something obvious: an approach that peer-reviewed studies have identified as promising since at least 2004, the same year Facebook was founded.

Instead of making itself the sole moderator of problematic posts and content, Facebook should offer its billions of users a role in content moderation. This could substantially reduce the load on Facebook staff, and could allow its community to take care of itself more effectively, improving the user experience with far less need for editorial oversight. Slashdot, once a massively popular site, proved prior to Facebook’s launch that distributing comment moderation among the site’s users could be an effective strategy, with substantial benefits to both end users and site operators. Facebook would do well to allocate a tiny fraction of its fortune to designing a distributed comment moderation system of its own.

Distributed moderation in earlier days

“Nerds” in the late 1990s and early 2000s, when most of the Internet was still a one-way flow of information for most of its users, had a web site that didn’t merely keep them informed, but let them talk through the ideas, questions, observations, or jokes that the (usually abbreviated and linked) news items would prompt. Slashdot, “the first social news site that gained widespread attention,” presented itself as “News for Nerds. Stuff that Matters.” It’s still around, but in those early days it was a behemoth. Overwhelming a web site with a popular link became known as “slashdotting.” There was a time when more than 5% of all traffic to sites like CNET, Wired, and Gizmodo originated from Slashdot posts.

Slashdot featured epic comment threads. It was easy to comment, and its readers were Internet savvy almost by definition. Slashdot posts would have hundreds, even thousands, of comments. According to the site’s Hall of Fame, there were at least 10 stories with more than 3,200 comments.

But amazingly (by today’s diminished standards, at least), a reader could get a feel for a thread of thousands of messages in just a few minutes of skimming. Don’t believe me? Try this thread about what kept people from ditching Windows in 2002. (The Slashdot community was famously disposed toward free and open source software, like GNU/Linux.) The full thread had 3,212 messages, but the link will show you only the 24 most highly-rated responses, and abbreviated versions of another 35. The rest are not censored; if you want to see them, they’re easy to access through the various “…hidden comments” links.

As a reader, your time was valued; a rough cut of the 59 “best” answers out of 3,212 is a huge time-saver, and makes it practical to get a feel for what others are saying about the story. You could adjust the filters to your liking, to see more or fewer comments by default. If you were the subject of a story, it was even better: if some nutcase seized on an unimportant detail and spun up a bunch of inaccurate paranoia around it, there was a reasonable chance their commentary would be de-emphasized by moderators who could see through the fear, uncertainty, and doubt.
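To make the mechanics concrete, here is a minimal Python sketch of score-based filtering along those lines. It is not Slashdot’s actual code; the score range, the thresholds, and names like `render_thread`, `full_at`, and `abbreviate_at` are illustrative assumptions.

```python
# Illustrative sketch: comments carry a community-assigned score, and the
# reader's own thresholds decide what is shown in full, abbreviated, or
# collapsed behind a "hidden comments" link.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int  # Slashdot scores ran roughly from -1 to +5

def render_thread(comments, full_at=4, abbreviate_at=1):
    """Split a thread into full, abbreviated, and hidden groups by score."""
    full, abbreviated, hidden = [], [], []
    for c in comments:
        if c.score >= full_at:
            full.append(c)          # shown in full by default
        elif c.score >= abbreviate_at:
            abbreviated.append(c)   # shown as a one-line summary
        else:
            hidden.append(c)        # reachable via a "hidden comments" link
    return full, abbreviated, hidden

# A skimmer raises the thresholds; a completist lowers them to see everything.
thread = [
    Comment("alice", "Detailed analysis of driver support...", 5),
    Comment("bob", "Me too!", 1),
    Comment("mallory", "Off-topic rant.", -1),
]
full, abbreviated, hidden = render_thread(thread)
print(len(full), len(abbreviated), len(hidden))  # -> 1 1 1
```

The point is that moderation data feeds straight back into each reader’s view; nothing is deleted, it is simply ranked.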

At first blush, you might think “oh, I see; Facebook should moderate comments.” But they’re already doing that. In the Slashdot model, the site’s staff did not do the bulk of the moderating; the task was primarily handled by the site’s more active participants. To replicate Slashdot’s brand of success, Facebook would need to substantially modify the way their site handles posts and comments.

Going meta

Distributed moderation, of course, can invite all sorts of weird biases into the mix. To fend off the chaos and “counter unfair moderation,” Slashdot implemented what’s known as “meta-moderation.” The software gave moderators the ability to assess one another’s moderation decisions. Moderators’ decisions needed to withstand the scrutiny of their peers. I’ll skip the details here, because the proof is in the pudding; browsing some of the archived threads should be enough to demonstrate that the highly-rated comments are vastly more useful than the average comment.
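For readers who want a feel for the mechanics anyway, here is a minimal Python sketch of the general idea, built on my own assumptions rather than Slashdot’s published design: names like `sample_for_review` and `influence`, and the fairness-weighting rule, are hypothetical, just one plausible way to make moderators answerable to their peers.

```python
# Illustrative sketch of meta-moderation: peers review a random sample of
# recent moderation decisions, mark each one fair or unfair, and a moderator's
# future influence is weighted by their fairness record.
import random
from collections import defaultdict

# Recent moderation decisions: (moderator, comment_id, label applied)
recent_moderations = [
    ("alice", 101, "insightful"),
    ("bob",   102, "offtopic"),
    ("alice", 103, "funny"),
]

fairness_votes = defaultdict(list)  # moderator -> [True, False, ...]

def sample_for_review(k=2):
    """Pick a few recent decisions to show to a meta-moderator."""
    return random.sample(recent_moderations, k)

def record_verdict(moderator, was_fair):
    """Store one peer judgment on a moderator's decision."""
    fairness_votes[moderator].append(was_fair)

def influence(moderator):
    """Weight a moderator's future votes by how often peers judged them fair."""
    votes = fairness_votes[moderator]
    return sum(votes) / len(votes) if votes else 1.0

# Example: a meta-moderator reviews a sample and marks each decision fair or not.
for moderator, comment_id, label in sample_for_review():
    record_verdict(moderator, was_fair=(label != "offtopic"))  # stand-in judgment
print({m: influence(m) for m in fairness_votes})
```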

Some Internet projects did study Slashdot-style moderation

For some reason, it seems that none of the major Internet platforms of 2018 (Facebook, Twitter, YouTube, etc.) have ever experimented with meta-moderation.

From my own experience, I can affirm that some projects intending to support useful online discussion did, in fact, consider meta-moderation. In its early stages, the question-and-answer web site quora.com took a look at it; so did a project of the Sloan Foundation in the early days of the commentary tool hypothes.is.

If Facebook ever did consider a distributed moderation system, it’s not readily apparent. Antonio García Martínez, a former Facebook product manager, recently tweeted that he hadn’t thought about it at length, and expressed initial skepticism that it could work.

There are a few reasons why Facebook might be initially reluctant to explore distributed moderation:

  • Empowering people outside the company is always unsettling, especially when there’s a potential to impact the brand’s reputation;
  • Like all big tech companies, Facebook tends to prefer employing technical, rather than social, interventions;
  • Distributed moderation would require Facebook to put data to use on behalf of its users, and Facebook generally seeks to tightly control how its data is exposed;
  • Slashdot’s approach would require substantial modification to fit Facebook’s huge variety of venues for discussion.

Those are all reasonable considerations. But with an increasing threat of external regulation, Facebook should consider anything that could mitigate the problems its critics identify.

Subject of academic study

If you’ve used a site with distributed moderation, and a meta-moderation layer to keep the mods accountable, you probably have an intuitive sense of how well it can work. But in case you haven’t, research studies going back to 2004 have underscored its benefits.

According to researchers Cliff Lampe and Paul Resnick, Slashdot demonstrated that a distributed moderation system could help to “quickly and consistently separate high and low quality comments in an online conversation.” They also found that “final scores for [Slashdot] comments [were] reasonably dispersed and the community generally [agreed] that moderations [were] fair.” (2004)

Lampe and Resnick did acknowledge shortcomings in the meta-moderation system implemented by Slashdot, and stated that “important challenges remain for designers of such systems.” (2004) Software design is what Facebook does; it’s not hard to imagine that the Internet giant, with annual revenue in excess of $40 billion, could find ways to address design issues.

The appearance of distributed moderation…but no substance

In the same year that Lampe and Resnick published “Slash(dot) and burn” (2004), Facebook launched. Even in the site’s earliest days, then, the benefits of distributed meta-moderation had already been established.

Facebook, in the form it’s evolved into, shares some of the superficial traits of Slashdot’s meta-moderation system. Where Slashdot offered moderators options like “insightful,” “funny,” and “redundant,” Facebook offers options like “like,” “love,” “funny,” and “angry.” The user clicking one of those options might feel as though they are playing the role of moderator; but beneath the surface, in Facebook’s case, there is no substance. At least, nothing to benefit the site’s users; the data generated is, of course, heavily used by Facebook to determine what ads are shown to whom.

In recent years, Facebook has offered a now-familiar bar of “emoticons,” permitting its users to express how a given post or comment makes them feel. Clicking the button puts data into the system; but it’s only Facebook, and its approved data consumers, who get anything significant back out.

When Slashdot asked moderators whether a comment was insightful, funny, or off-topic, that information was immediately put to work to benefit the site’s users. By default, readers would see the highest-rated comments in full, a single abbreviated line for those with medium ratings, and a link to click through to everything else. Those settings were easy to change, for users preferring more or less in the default view, or within a particular post. Take a look at the controls available on any Slashdot post.

Where Facebook’s approach falls short

Facebook’s approach to evaluating and monitoring comments falls short in several ways:

  1. It’s all-or-nothing. With Slashdot, if a post was deemed “off topic” by several moderators, it would get a low ranking, but it wouldn’t disappear altogether. A discerning reader, highly interested in the topic at hand and anything even remotely related, might actually want to see that comment; and with enough persistence, they would find it. But Facebook’s moderation, whether by Facebook staff or the owner of a page, permits only a “one size fits all” choice: to delete or not to delete.
  2. Facebook staff must drink from the firehose. When the users have no ability to moderate content themselves, the only “appeal” is to the page owner or to Facebook staff. Cases that might be easily resolved by de-emphasizing an annoying post either don’t get dealt with, or they get reported. Staff moderators have to process all the reports; but if users could handle the more straightforward cases, the load on Facebook staff would be reduced, permitting them to put their attention on the cases that really need it.
  3. Too much involvement could subject Facebook to tough regulation as a media company. There is spirited debate over whether companies like Facebook should be regarded as media companies or technology platforms. This is no mere word game; media companies are inherently subject to more invasive regulation. Every time Facebook staff face a tricky moderation decision, that decision could be deemed an “editorial” decision, moving the needle toward the dreaded “media company” designation.

Facebook must learn from the past

Facebook is facing substantial challenges. In the United States, Congress took another round of testimony last week from tech executives, and is evaluating regulatory options. Tim Wu, known for coining the term “net neutrality,” recently argued in favor of competitors to Facebook, perhaps sponsored by the Wikimedia Foundation; he now says the time has come for Facebook to be broken up by the government. In the same article, antitrust expert Hal Singer paints a stark picture of Facebook’s massive influence over innovative competitors: “Facebook sits down with someone and says, ‘We could steal the functionality and bring it into the mothership, or you could sell to us at this distressed price.’” Singer’s prescription involves changing Facebook’s structure, interface, network management, and dispute adjudication process. Meanwhile in Europe, the current push for a new Copyright Directive would alter the conditions in which Facebook operates.

None of these initiatives would be comfortable for Facebook. The company has recently undertaken a project to rank the trustworthiness of its users; but its criteria for making such complex evaluations are not shared publicly. Maybe this will help them in the short run, but in a sense they’re kicking the can down the road; this is yet another algorithm outside the realm of public scrutiny and informed trust.

If Facebook has an option that could reduce the concerns driving the talk of regulation, it should embrace it. According to Lampe and Resnick, “the judgments of other people … are often the best indicator of which messages are worth attending to.” Facebook should explore an option that lets them tap an underutilized resource: the human judgment in its massive network. The specific implementation I suggest was proven by Slashdot; the principle of empowering end users also drove Wikipedia’s success.

Allowing users to play a role in moderating content would help Facebook combat the spread of “fake news” on its site, and simultaneously demonstrate good faith by dedicating part of its substantial trove of data to the benefit of its users. As Cliff Lampe, the researcher quoted above, recently tweeted: “I’ve been amazed, watching social media these past 20 years, that lessons from Slashdot moderation were not more widely reviewed and adopted. Many social sites stole their feed, I wish more had stolen meta-moderation.”

All platforms that feature broad discussion stand to benefit from the lessons of Slashdot’s distributed moderation system. To implement such a system will be challenging and uncomfortable; but big tech companies engage with challenging software design questions routinely, and are surely up to the task. If Facebook and the other big social media companies don’t try distributed moderation, a new project just might; and if a new company finds a way to serve its users better, Facebook could become the next Friendster.

This story was originally published at https://www.linkedin.com on September 10, 2018, as the last of a three-part series.



Pete Forsyth
Wiki Strategies

Wikipedia expert, consultant, and trainer. Designed and taught a six-week online Wikipedia course. Principal, Wiki Strategies. http://wikistrategies.net/pete-forsyth