New project: creating better standards for transparency in how platforms moderate content

What content are you allowed to see and share online? The answer is surprisingly complicated. Our new project, funded by the Internet Policy Observatory, includes Jillian York, Sarah Myers West from the USC Annenberg School for Communication and Journalism, and Nicolas Suzor from Queensland University of Technology. We are working to engage civil society organizations and academic researchers to create a consensus-based priority list of the information that users and researchers need to better understand content moderation and to improve advocacy around user rights.

The secret rules of content moderation

Search engines, content hosts, social media platforms, and other tech firms often make decisions to delete content, block links, and suspend accounts. The Terms of Service of these providers give them a great deal of power over how we communicate, but they have few responsibilities to be consistent, fair, or transparent.

Content moderation is a difficult task, and the decisions that platforms make are always going to upset someone. It’s little surprise that platforms prefer to do this work in secret. But as high-profile leaks and investigative journalism, such as the recently published Guardian ‘Facebook Files’, start to expose the contradictions and value judgments built into these systems, they’re becoming more controversial all the time. As Tarleton Gillespie puts it, the secrecy makes this entire process more difficult and more contentious:

The already unwieldy apparatus of content moderation just keeps getting more built out and intricate, laden down with ad hoc distinctions and odd exceptions that somehow must stand in for a coherent, public value system. The glimpse of this apparatus that these documents reveal suggests that it is time for a more substantive, more difficult reconsideration of the entire project — and a reconsideration that is not conducted in secret.

The need for transparency

As the United Nations’ cultural organization UNESCO has pointed out, there are real threats to freedom of expression when private companies are responsible for moderating content.

When governments make decisions about what content is allowed in the public domain, there are often court processes and avenues of appeal. When a social media platform makes such decisions, users are often left in the dark about why their content has been removed (or why their complaint has been ignored).

It turns out that we know very little about the rules that govern what content is permitted on different social media platforms. Organizations like Ranking Digital Rights evaluate how well telecommunications providers and internet companies perform against measures of freedom of expression and privacy. In its 2017 report, RDR found that ‘Company disclosure is inadequate across the board’:

Companies tell us almost nothing about when they remove content or restrict users’ accounts for violating their rules. Through their terms of service and user agreements, companies set their own rules for what types of content or activities are prohibited on their services and platforms, and have their own internal systems and processes for enforcing these rules. Companies need to disclose more information about their enforcement processes and the volume and nature of content being removed.

What does ‘transparency’ mean?

While there have been many calls for greater transparency in content moderation decisions, there is little guidance available for internet intermediaries about the types of information they are expected to produce.

This project sets out to build consensus on a practical set of guidelines for best practices in transparency for content moderation practices.

We will do this first by reviewing the most common demands from users themselves, drawing on an ongoing project, now in its second year, that collects reports on users’ experiences when their accounts are suspended or their content is deleted. From these complaints, we identify specific measures that intermediaries might take to improve the experience of users who have either had content removed or requested the removal of another user’s content.

Because demands for greater transparency have so far been made in general and sometimes conflicting terms, there is little specific guidance about which measures are likely to be most useful. To address this, we will organize a series of workshops at academic conferences and civil society meetings over the next year to produce a prioritized list of specific recommendations for telecommunications providers and internet intermediaries.

We’ll be posting more updates here as the project progresses. If you’d like to get involved in this work, please contact Nicolas Suzor at the QUT School of Law.
