Are the new platform governors the new monarchs?

Valentina Vivallo Toro
Published in Tech Legality
Jan 9, 2024

This blog is the third in a series by the @Tech Legality reading group, which follows the Stanford Course on Ethics, Technology, and Public Policy for Practitioners through a human rights lens.

In the fourth session, we discussed Platforms, the Public Sphere and the Private Sphere. We went through four main readings that frame the debate about freedom of expression online.

In this blog, I will focus on a Harvard Law Review Article, “The New Governors: The People, Rules, and Processes Governing Online Speech”, written by Kate Klonick, which analyses the increasingly essential role of private online platforms in free speech and democratic participation. Specifically, it examines three main platforms — YouTube, Facebook and Twitter — through interviews with employees in charge of curating content. The discussion centres on why platforms moderate content, even though in theory they do not have to due to the liability exemption.

Are the new platform governors the new monarchs?

Klonick asserts that “platforms are the New Governors of online speech, which are private self-regulating entities that are economically and normatively motivated to reflect the democratic culture and free speech expectations of their users”.

This premise shows the great power and impact that private platforms have on free speech and democratic participation online when they decide what content to curate. To preserve democratic values, it is crucial to understand why and how platforms use algorithms for content moderation.

What material should be curated, who within the platforms decides, which values are taken to rule this digital space, and why and how platforms moderate content: these are essential questions for understanding the magnitude of platforms’ power.

It is worth noting that online platforms are granted immunity for the content posted on them under Section 230 of the Communications Decency Act in the US (and although this reading predates the Digital Services Act, platforms now also have similar protections under Articles 4 and 6 of the DSA in the EU). Given this immunity, the author asks why online platforms bother moderating the content posted on their platforms. She stresses three reasons:

  1. The influence of American constitutional freedom of speech, reflected in the idea of protecting user speech from government requests for user information and takedowns, as well as from collateral censorship. Klonick also identifies two issues that arise when protecting users’ free speech. First, what counts as offensive depends on cultural context, so it is difficult to define. Second, protecting all content as free expression, even content that incites violence against a particular group, can put that group at risk of harm to life and personal integrity.
  2. A sense of corporate responsibility to minimise obscenity, violence and hate speech online, actively removing offensive content while protecting users’ rights and avoiding the free speech problems of collateral censorship.
  3. The economic necessity of meeting users’ expectations. On the one hand, curating too much content can reduce interaction and erode users’ trust in the platform, which affects revenue. On the other hand, not moderating at all (keeping all content) may also upset users and lead to fewer page views, again impacting revenue. Platforms therefore need to balance the competing values of user safety, user harm, public relations concerns, and the revenue implications that certain content has for advertisers.

As can be seen, online platforms are making important decisions that impact human rights and democracy. Therefore, the public and private spheres intertwine. So, should we leave online platforms to set the rules, play the game and win it? What interests would they defend if they didn’t need to implement content moderation to protect their revenues?

It is true that online platforms have made enormous amounts of information and knowledge available to people. However, the same companies face concerns about the lack of transparency in their content moderation practices. They could favour some users over others, for example by setting special rules for public figures, which could jeopardise democracy, equal access and fair participation in online speech.

Additionally, these platforms often lack a transparent appeal system for individual users, leaving users essentially powerless. While platforms need users’ views, they are disincentivised from allowing anti-normative content and prefer normative content to avoid revenue issues; there is some tension here, however, because sensationalist content, which is more likely to constitute disinformation, tends to spread much faster. Moreover, platforms are incentivised to create perfect filters that show users only content matching their preferences, even where that content is sensationalised, which can lead to non-deliberative polarisation and damage democracy: the echo-chamber effect.

It can be argued that online platforms have an indirect accountability system in place, since they rely on user preferences to a certain extent. However, when it comes to the rapid spread of disinformation, this system may not be effective at identifying those responsible or at combating the spread where such content affirms users’ own views and biases. Klonick claims that these platforms are too slow in addressing such issues.

Users depend on these private platforms to exercise their rights to access information, freedom of expression, civic participation, and so on, while platforms depend on users indirectly through advertising views. This imbalanced power relationship needs to be addressed by governments and civil society. In the democratic legal order, we have agreed on the separation of powers developed by Montesquieu, so why not apply the same rationale to the concentration of power in online platforms and establish a system of checks and balances?

Are these New Governors not akin to new monarchs in their apparent authority?

During our reading group discussion, we also examined the impact of cultural contexts on content moderation. Specifically, we raised questions about whether online platforms should follow the same rules when moderating offensive content, regardless of cultural context. Alternatively, should they consider the context to determine which content needs to be curated?

Additionally, we explored whom digital platforms should consult to understand the cultural context of a particular community they moderate. Should civil society be involved, or do governmental authorities alone have the responsibility to inform a platform about the relevant aspects of their nation’s culture?

What is deemed harmful will depend on many factors, so more than a binary response is required. Digital platforms need to involve all stakeholders in decisions that impact fundamental rights, rather than leaving them solely to content moderation teams; meaningful participation from all parties concerned is essential.

#EthicsTechPolicy #ResponsibleAI #EthicsandTech #HumanRightsandTech @Tech Legality

Valentina Vivallo Toro


I'm a Human Rights lawyer, Policy Analyst, Consultant and Researcher interested in Digital Technologies and their impact on human rights.