Why International Human Rights Law Cannot Replace Content Moderation

Brenda Dvoskin
Berkman Klein Center Collection
7 min read · Oct 9, 2019


CJEU’s Ruling in Glawischnig-Piesczek v. Facebook and the Conflicting Regulation of Prior Restraint in Regional Human Rights Systems

Can social media networks adopt international human rights law to govern online speech? As companies such as Facebook, Google, and Twitter have come to wield more power than ever to set speech norms worldwide, some have suggested that they apply international human rights law to regulate their own platforms. Others (including my fellow Berkman Klein Affiliate evelyn douek) have responded that human rights law is too flexible to provide the level of granularity and certainty that social media need to operate. The Court of Justice of the European Union's recent ruling in Glawischnig-Piesczek v. Facebook goes to the heart of this debate because it shows that international standards can be not only imprecise but also contradictory.

I focus here specifically on the differences in how the European and the Inter-American systems regulate prior restraint of speech, and on how courts have reached contradictory decisions under the two regional human rights systems. These tensions ultimately show that there is no accepted, comprehensive set of global standards that can operate as the default rules for content moderation.

One way to escape the contradictions that emerge between human rights standards enshrined in different bodies of law is to ask companies to look only at Article 19 of the International Covenant on Civil and Political Rights (ICCPR) and the interpretations issued by U.N. bodies such as the Human Rights Committee (as Evelyn Mary Aswad has suggested). However, David Kaye, the principal proponent of basing content moderation on human rights law, has argued that companies should draw from the jurisprudence of the regional systems when global treaties are too vague for companies to apply. If companies rely solely on the ICCPR, the problem of vagueness and uncertainty will only be reinforced. In any case, what the contradiction between the European and the Inter-American systems on the regulation of prior restraint shows (at least) is that Article 19 of the ICCPR is compatible with two inconsistent ways of managing users' uploads. Human rights law itself cannot decide, on behalf of social media networks, how to regulate prior moderation (meaning moderation that takes place before the user's content is published).

The CJEU Decision

The controversy originated in 2016, when a Facebook user posted comments about Eva Glawischnig-Piesczek, at the time the federal spokeswoman of the Austrian Green Party, describing her as a "corrupt bumpkin," a "lousy traitor," and a member of a "fascist party." Glawischnig-Piesczek sued Facebook after the company declined to take down the comments on the ground that they were not clearly unlawful. An Austrian court ruled that the comments were defamatory. Last week, the Court of Justice of the European Union (CJEU) held that Facebook can be ordered to take down future identical or equivalent posts, using "automated search tools," with worldwide effects.

The controversial substantive standard the Austrian courts used to determine the unlawfulness of the comments illustrates just how problematic it is for national authorities to order universal takedowns. In the United States certainly, but in many other parts of the world as well, accusing a public official of being corrupt, fascist, or a traitor is not only legal but likely among the most highly protected kinds of speech. Yes, it is reasonable for litigants to be dissatisfied with a remedy that has only domestic reach when digital speech can so easily be accessed from other jurisdictions. But the alternative seems to be to impose highly debatable local standards on global audiences.

Contradictions between the European System and the Inter-American System of Human Rights

This ruling could not have come down (or at least, it should not have; you never know with courts) within the Inter-American system of human rights. That is not only because the two systems use different substantive standards to balance freedom of expression against the reputation of public officials, but also because the remedies the CJEU endorsed are explicitly forbidden by the American Convention on Human Rights. Specifically, the American Convention forbids prior restraint of speech and does not allow the type of balancing test that is permissible in the European context.

It turns out that the remedies the CJEU ordered are not entirely clear. The Court suggested that the company use "automated search tools and technologies" to remove or block access to equivalent or identical posts. Which particular technologies Facebook should use remains unknown. The difference between removing content and blocking access to it (the two ways Facebook could comply with the judgment) is also somewhat mysterious. The Court seems to be imagining some filtering of elements (perhaps specific words?) set by national courts to prevent users from uploading content. The Court does not explain this, so we cannot be sure.

However, if that is the case, meaning that if the Court is ordering Facebook to prevent uploads of content that no court has evaluated, then the measure is incompatible with Article 13 of the American Convention on Human Rights. The primary difference between the two regional human rights systems is that, whereas Article 10 of the European Convention allows balancing tests both for prior restraint and for subsequent liability, Article 13(2) of the American Convention states that the right to free expression "shall not be subject to prior censorship but shall be subject to subsequent imposition of liability." Article 13 establishes some exceptions to that rule, but they are clearly inapplicable to this case (for example, the prior censorship of public entertainment solely to regulate children's access to it).

Along those lines, the former Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, Catalina Botero, has emphasized that any measure ordering the filtering or blocking of online content is an exceptional measure that must be limited to cases of unprotected speech, such as child pornography, propaganda for war, and hateful speech that constitutes incitement to lawless violence. Such measures must be strictly defined to target only content already deemed unlawful, and they must never reach presumptively lawful content. Applying these principles in 2014, the Supreme Court of Argentina declined to order Google and Yahoo to filter all websites that associated a plaintiff's name with pornographic content. The Court ordered the companies to block only the websites that the Court itself had identified as falsely associating her name with pornographic images; it refused to order them to filter websites showing "equivalent" content in the future, because such a measure would constitute prior restraint and was therefore prohibited by the American Convention.

The CJEU did not clarify whether "identical" content could be considered lawful in certain contexts (such as content reporting on the case). Neither did it define what "equivalent" content is. To be fair, the CJEU did specify that "differences in the wording of that equivalent content, compared with the content which was declared to be illegal, must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content." It would seem that the Court envisions a mechanism through which national courts determine what specific content Facebook must block. However, it is impossible for courts to specify in advance all content that must be considered "equivalent" to calling a public official a "traitor" or "corrupt." Inevitably, the company will end up blocking content that is not manifestly unlawful and whose legality no court has adjudicated. That is precisely what Article 13 of the American Convention does not allow.
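A deliberately naive sketch may make the problem concrete. Suppose (this is purely my own illustration; the judgment specifies no such mechanism, and the phrase list and function below are hypothetical) that a court hands Facebook a list of adjudicated phrases and the company filters uploads against it:

```python
# Purely illustrative: a naive upload filter built from court-specified phrases.
# Nothing here comes from the judgment; the mechanism is an assumption.

COURT_SPECIFIED_PHRASES = {"corrupt bumpkin", "lousy traitor"}

def should_block(post_text: str) -> bool:
    """Block an upload if it contains any phrase a court has declared unlawful."""
    text = post_text.lower()
    return any(phrase in text for phrase in COURT_SPECIFIED_PHRASES)

# An identical repost is caught:
print(should_block("She is a corrupt bumpkin!"))  # True

# A trivially reworded ("equivalent"?) insult slips through:
print(should_block("She is a bumpkin, and corrupt too"))  # False

# Lawful reporting that quotes the post is blocked anyway:
print(should_block("A court found that calling her a 'corrupt bumpkin' was defamatory"))  # True
```

Tightening the filter to catch the rewording requires exactly the kind of independent assessment the CJEU says providers must not be required to carry out; leaving it loose blocks presumptively lawful speech. Either way, someone other than a court ends up adjudicating speech before it is published.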

Companies Cannot Rely on Human Rights Standards Alone

No doubt my disagreement with the substantive standard the Austrian courts used to determine the legality of the post influences my reluctance to embrace an order preventing the upload of future similar accusations against the same public official. But beyond that debate, these tensions show that international human rights law is an insufficient basis for content moderation. Companies can (and should) seek guidance in human rights law, but they still need to make choices where that law is unclear, ambiguous, or contradictory.

As much as we would like it (and as much as companies might like it as well), there is simply no global system of rules that companies can merely incorporate into their content moderation guidelines. Sure, all human rights systems share some core values. But how these systems balance those values is not always identical. The regulation of prior censorship is the most significant difference between the Inter-American and the European systems in the freedom of expression realm, and it is all the more striking in a digital age in which so much moderation of speech is carried out preemptively.

As demands for transparency and accountability grow, companies continue trying to develop theoretical justifications, beyond business reasons, to ground their content moderation rules. For instance, once the Facebook Oversight Board begins operating next year, it might endeavor to develop a global theory of speech to guide the company's policies. Human rights law can offer some inspiration. But it will not offer definitive answers to every question; choices will still need to be made.

Special thanks to evelyn douek and Maia Levy Daniel for their feedback and input on this post.
