Facebook can’t moderate in secret anymore.

Tarleton Gillespie
Data & Society: Points
May 24, 2017

--

The leaked Facebook documents offer a glimpse of the banality of evil and the unresolvable challenges of content moderation, and they make a strong case for moderation to come out into the open.

I’ve been trying to write a comment about the leaked Facebook documents published by The Guardian this week. The documents, part of the training materials that Facebook provides to independently contracted moderators, instruct them on what they should remove and what should stay, across a wide variety of categories. But it’s hard not to feel disheartened by them, in too many ways.

They’re hard to read, peppered with examples that almost certainly came from actual material Facebook moderators have had to consider. Evil is banal, and the evil available on Facebook is banal on a much, much wider scale.

But to be honest, there is no way that these guidelines, spread out plainly in front of us, could look good. It’s fine to say something lofty in the abstract, as Facebook’s Community Standards do. I have quibbles with that document as well, but at least it sounds like a series of defensible principles. When it’s time to get into the details, it’s simply not going to sound nearly as noble. The content and behavior Facebook moderators have to consider (and, let’s remember, what users often demand they address) are ugly, and varied, and ambiguous, and meant to evade judgment while still having impact. There’s no pretty way — maybe no way, period — to conduct this kind of content moderation. It requires making some unpleasant judgments, and some hard-to-defend distinctions. Policing public expression and social behavior at this scale requires weighing competing, irreconcilable values: freedom of speech vs. protection from harm, avoiding offense vs. raising awareness, hiding the obscene vs. displaying the newsworthy. Sometimes value to the individual comes at a cost to the public; sometimes value to the public comes at a cost to the individual. Content moderation is an unresolvable and thankless task.

At the same time, Facebook makes a number of really questionable decisions, decisions I would make differently. In a time when misogyny, sexual violence, and hatred are so clearly on the rise in our society, it is disheartening, shocking, that Facebook could be so cavalier about phrases like “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” — which gets a thumbs-up in these documents. Facebook is not only a prominent venue for such violence and misogyny, one it too often chooses to overlook. It and other social media platforms are arguably responsible for having called it into being in a new and pernicious form, and for having given it room to breathe over the last decade.

I find myself tripping on my own impulse here: I might set the policy differently, but who am I to set such policies? But then again, who are they to do so either? The real revelation, though, is no single detail within these documents. It is the fact that they had to be leaked. (Just like the content moderation guidelines leaked to Süddeutsche Zeitung in 2016, or those leaked to Gawker in 2012.) These are secret documents, designed not for consideration by users or regulators, but to instruct and manage the 3,000+ independently contracted clickworkers who do the actual moderation work. These criteria, while perhaps crafted with input from experts, have not been made public to users, nor have they benefited from public deliberation or even reaction. A single company — in fact, a small team within that single company — has anointed itself the arbiter of what is healthy, fair, harmful, obscene, risky, racist, artistic, intentional, and lewd.

What is clear from the pages published by The Guardian is that we have spent a decade building up one version of this process, one way of dealing with the fact that social media invite all forms of participation, noble and reprehensible. That one particular way — users post, some users flag, Facebook sets the rules, clickworkers remove, critics push back, Facebook adds exceptions — has simply been built out further and further, as new harms emerge and new features are added.

This is not innovation. It is using a tool conceived ten years ago to handle more and more, to less and less benefit.

The already unwieldy apparatus of content moderation just keeps getting more built out and intricate, weighed down with ad hoc distinctions and odd exceptions that somehow must stand in for a coherent, public value system. The glimpse of this apparatus that these documents offer suggests that it is time for a more substantive, more difficult reconsideration of the entire project — and a reconsideration that is not conducted in secret.

Tarleton Gillespie is a Principal Researcher at Microsoft Research, New England; he is also an affiliated associate professor in the Department of Communication and Department of Information Science, at Cornell University. On Twitter at @TarletonG

Points/spheres: In “Facebook can’t moderate in secret anymore,” Tarleton Gillespie critiques the opacity of Facebook’s content moderation policies, arguing that its process has become too complex and high-stakes to be left hidden from the public. This article is cross-posted from the blog Culture Digitally.

Relatedly, Data & Society recently published a series of six pieces on the networked public sphere that cover similar debates around accountability, information flows, and free speech.

— Ed.
