International Human Rights Law Is Not Enough to Fix Content Moderation’s Legitimacy Crisis

Brenda Dvoskin
Berkman Klein Center Collection
Sep 16, 2020

Should tech companies follow human rights law to govern online speech? This proposal has tremendous appeal. International human rights law can offer a set of rules designed in the public interest with the broad support of a global community. That certainly appears superior to the status quo, in which a handful of CEOs set the speech rules for billions of social media users. Unsurprisingly, scholars (here and here) and civil society organizations (here and here) have expressed their support, and the project has gained a lot of traction since David Kaye, then UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, promoted it in 2018.

However, adopting international human rights law might not lead to more legitimate content moderation rules. First, international human rights law is not a set of universally accepted rules. The framework favors some speech standards over other reasonable alternatives, and the choice of those standards should itself be subject to a legitimate rule-making process. Second, international human rights law is in many areas highly indeterminate. It offers guidance but no precise answers to many challenging questions. In those cases, human rights law might not constrain the power of tech firms but only create the appearance of legitimacy. In other words, the proposal could mean business as usual with the added ‘legitimacy aura’ of human rights law.

International Human Rights Law Is Not Neutral

The International Covenant on Civil and Political Rights (the Covenant) offers the primary international guidance on free expression standards. The Covenant prioritizes some normative options over others in areas where legal systems, experts, and communities reasonably disagree. Two current controversies illustrate this point.

In July, the Stop Hate for Profit campaign brought together hundreds of companies that, for one month, withheld advertising from Facebook to demand that the tech giant curb the spread of hate on the platform. Among other things, the campaign asks for the removal of groups focused on Holocaust denial. Although sensible and understandable, the request is in tension with Articles 19 and 20 of the Covenant. The Human Rights Committee, the body that authoritatively interprets the treaty, has said, “Laws that penalize the expression of opinions about historical facts are incompatible with the obligations that the Covenant imposes.” Indeed, in 2019, Kaye explicitly cited bans on Holocaust denial as an example of laws that breach states’ international obligation to protect freedom of expression.

In other areas, human rights law sides with the campaign’s demands. Stop Hate for Profit also asks that platforms apply their rules equally to politicians and other users. Twitter and Facebook, however, see things differently. During the COVID-19 pandemic, they decided not to delete posts by President Trump that violated their rules on glorification of violence, election integrity, and COVID-19 misinformation, reasoning that, in those cases, citizens’ interest in learning what their representatives think outweighed the harmful effects of such speech. In other instances, Facebook did remove a video posted by Trump’s campaign for spreading misleading COVID-19 information, and both firms took down content that Brazil’s President Jair Bolsonaro had posted in violation of their rules.

In this contentious debate, international law comes down much closer to the side of the Stop Hate for Profit campaign. Kaye’s 2019 report (see para. 47) explains that although exceptions to protect political speech may be acceptable in exceptional cases, in principle politicians and the public ought to be subject to the same rules. According to the report, harmful speech can be even more dangerous when uttered by political leaders, so the reasons to apply speech rules to these figures are stronger, not weaker.

The main point is that reasonable disagreement exists about how to balance these considerations when governing politicians’ speech. The choices that international law (or the Human Rights Committee) makes in these debates are neither obvious nor universally accepted, and they should themselves be subject to the control of the people. Rather than merely shifting decision-making power from tech companies to the UN (although that would certainly be a step forward), the urgent task is to build processes that can actually involve the public in deliberation over speech rules.

Lending Legitimacy to Unconstrained Power

At the same time, international human rights law leaves many speech questions unanswered. I have written about the contradictions between regional human rights systems that the UN framework does not resolve. A more fundamental open question is how to apply the legitimate-end requirement of Article 19 of the Covenant to content moderation.

According to Article 19, all restrictions on freedom of expression must pursue a legitimate end. For governments, the legitimate ends are respect for the rights or reputations of others and the protection of national security, public order, or public health or morals. Evelyn Aswad asks the right questions: Which ends would be legitimate for content moderation rules set by private companies? Could tech companies claim a business interest as a legitimate purpose? And even if they were not entitled to rely on the most explicitly commercial interests, such as advertisers’ preferences, could these companies claim that a specific content moderation rule helps them shape the type of community they want to foster?

Most supporters of the proposal would acknowledge that companies need to be able to disallow content in order to meet the preferences and expectations of different users. This appears sensible. Otherwise, all the speech that international human rights law protects, including adult nudity, pornography, and many graphic depictions of violence, would likely have to be allowed on platforms such as Facebook. That would render platforms nearly useless to the many users who do not want to wade through every form of legal, but perhaps unwanted, speech. But the line between permissible and impermissible ends becomes blurry, and the Covenant, designed to apply to states, certainly does not draw it.

As long as no line exists, international human rights law poses few constraints on what companies can do. For any rule a company might wish to set, it could articulate a public interest end that the rule advances. For instance, for nudity rules, tech firms could claim they are trying to prevent any non-consensual distribution of intimate images. For hate speech that does not incite violence, they could posit that they are creating a “safe” environment for communities that are disproportionately targeted by such speech. And the list goes on. Susan Benesch has proposed helpful guidance for translating the requirements of Article 19 to content moderation. But unless broad consensus can be built around the meaning of terms such as “the protection of morals,” human rights law will lend its legitimizing framework and vocabulary without meaningfully constraining private regulatory power.

International Human Rights Law as a Framework

Adopting human rights law as the default content moderation rules can be understood as a project of translation: taking international law standards that already exist and turning them into implementable content moderation rules. For the reasons discussed above, I have little faith in that project.

However, another proclaimed virtue of international human rights law is that it offers a common framework and vocabulary to guide the discussion among multiple actors about how to come up with a new language, a new rulebook specifically designed for online speech. Indeed, it may still be valuable to rely on the human rights framework not to answer all questions but to agree on what questions need to be asked (does the rule have a legitimate end? is the rule necessary to meet that end? are less intrusive measures available?). Tech companies (or anyone making the rules) can contribute to public reasoning and deliberation by being transparent about the lack of unequivocal answers. They should explain why they prefer certain rules and how they think about them through the lens of the standards set forth in Article 19 of the Covenant. That type of transparent reasoning could be the start of a dialogue with other actors in a shared language.

Such an approach resembles what Larry Lessig refers to as “latent ambiguities.” Lessig tried to imagine how judges would react to novel legal questions posed by the development of technology. In some cases, translation of already existing rules would be easy: for example, extending the protection of mail to electronic communications. In other cases, however, there is no unequivocal answer, and there is a need to decide anew how to regulate. For those situations, Lessig imagined that judges could promote democratic deliberation by identifying those “ambiguous” areas, proposing possible paths forward, and explaining how their own decisions would advance constitutional values.

There is one fundamental difference between Lessig’s work on judicial adjudication and content governance. In the case of judicial decision-making, legislatures can later contest judges’ decisions: lawmakers can debate and vote for a different rule. In the governance of online content, although civil society may well play a role in contesting the reasoning and choices tech firms offer, no institution has authority equivalent to that of a legislature to move the dialogue forward. In that sense, the transparent reasoning of companies can be the beginning of a conversation, but it remains unclear who can “speak” next.

As Jonathan Zittrain argues, the current era of content moderation requires experimentation with processes and institutions that can reconstruct legitimacy and open opportunities for people’s participation in online governance. Looking to international human rights law, to the extent that it offers a common framework to enable conversations, might be a step in that direction. I have tried to begin exploring which positions that framework prioritizes and to emphasize the need to find other actors with the capacity to contest the public reasoning of tech companies. Only then will international law be able to foster an actual conversation rather than a monologue uttered by tech firms in the guise of human rights language.


Doctoral Candidate @ Harvard Law School | Affiliate @ Berkman Klein Center For Internet & Society