Free Speech is Circular

Trump, Twitter, and the Public Interest

Elettra Bietti
Berkman Klein Center Collection
13 min read · Jun 1, 2020


Andrei Lacatusu, “Social Decay” (2017)

In a recent paper, Jack Balkin argued that free speech is a triangle. While the old model of free speech was dualist, involving two kinds of actors, governments on the one hand and speakers on the other, today's speech, for Balkin, must be conceived as a triangle involving (1) governments, (2) privately owned infrastructures, including social media companies, search engines, and broadband providers, and (3) speakers.

A series of events in the past week, particularly Twitter's treatment of US President Donald Trump's inflammatory tweets on elections and the Minnesota protests, has crystallized enduring and heated debates around online free speech, content moderation, and the role of platforms in enabling and moderating the spread of harmful speech by politicians. Looking closely at the stakes of the debate, however, reveals that online speech is more than a triangle. The discourse around online speech forms an insoluble circle that needs to be broken.

Below, I describe some confusing dualisms that pervade current debates, then explain the stakes and suggest how to move beyond circularity. As I argue, the task is not to identify bad actors and good actors, or to limit or enhance their ability to engage in speech or regulate it. It is instead to limit political and other communications' reliance on profit-motivated infrastructures that channel speech in ways intended to maximize user engagement, addiction, behavioral targeting, and polarization.

A Quick Timeline of Events

On May 26, 2020, as part of its expanded efforts against misleading information, Twitter added a "get the facts" link to a false Trump tweet about mail-in voting. The link directs users to a "moment" that refutes Trump's claim.

In response, Trump tweeted a number of reactions in which he accused Twitter of “stifling FREE SPEECH” and of censoring speech in advance of the election. On May 28, Trump then signed an Executive Order entitled “Preventing Online Censorship,” which limits the application of section 230 of the Communications Decency Act. As explained in Eric Goldman’s excellent deep dive here, the Order is largely an act of political theater and intimidation.

On May 29, Twitter placed a public interest notice on a Trump tweet about the Minnesota protests that incited violence with the words "when the looting starts, the shooting starts." The public interest notice reads: "This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public's interest for the Tweet to remain accessible." Twitter applies a public interest notice to tweets that violate its policies but that the public has an interest in seeing, provided they come from verified government officials' accounts with more than 100,000 followers. Twitter also placed the same notice on a tweet with the same wording from the White House Twitter account.

Dualisms and Confusions

Good Cop, Bad Cop

The debate on politicians' speech, and on harmful speech generally, is often portrayed simplistically as a question of government versus platform censorship. The question is posed as one of good cop vs. bad cop: whose censorship is more legitimate or less harmful? The underlying, and misleading, idea is that if we can identify the better actor in the ecosystem and delegate all forms of speech regulation to it as the sole trustworthy party, then online speech will be "solved."

Narratives play out in two opposing directions. For many, Twitter's behavior — the fact-checking label, the public interest notice, the effort to explain decisions and act in line with the "public interest" — renders Twitter the ultimate good actor, possibly a hero we should entrust our online speech to. Trump's efforts to politicize Twitter's fact-checking, his mischaracterization of the right to free speech, and his disturbing and deliberately distracting Executive Order make him the bad guy, attempting to destabilize democracy and win elections at the cost of sacrificing Americans' lives. For others, the story unfolds in the opposite direction, with Twitter featuring as the bad cop trying to censor sacred political speech, and Trump acting as the ultimate good guy, with the power to finally hold Big Tech to account.

None of these Hollywood-like narratives is accurate, of course. Speech regulation cannot be left exclusively to either of these two characters, because neither of them is — or can be — truly and fully aligned with the public interest, let alone trustworthy.

To Regulate or Not To Regulate

The debate is also confused by portrayals of regulation as a switch: you can turn regulation on or off, so that it is either an intrusive panacea or completely absent. Here the narratives are varied but all more or less consciously shaped by a libertarian understanding of free speech, one that sees freedom and government intervention as antithetical.

The idea of unhindered free speech and its corollary understanding of regulation as "censorship" both derive from a long libertarian tradition in US free speech jurisprudence. Dissenting in Abrams v. United States in 1919, Justice Oliver Wendell Holmes first articulated the idea of speech as a tradable commodity to be exchanged on a free and horizontal "marketplace of ideas." This vision of speech has progressively been incorporated into US First Amendment discourse and has fed the idea that any form of government interference with speech is objectionable "censorship" that hinders a person's freedom to trade ideas without constraints. This vision is not the sole or necessarily the predominant understanding of free speech in America, but it is significant enough to have shaped much of today's debate around platform regulation and content moderation.

Paradoxically, as an institution almost exclusively concerned with the protection of private speech from government interference, US free speech law has very little to say about what Trump has been calling Twitter's "censorship" of his expression. Twitter's nudges, its fact-checking flags and public interest notices, are in fact all protected under the First Amendment as private speech that the government, by and large, cannot prevent. Trump's executive order attempting to reduce Twitter's ability to moderate speech, on the other hand, possibly violates the First Amendment.

Framing the narrative around speech regulation as a switch is misleading in at least three ways. First, speaking of “censorship” in relation to private content moderation is a performative stance. All these acts of moderation are extremely marginal, are themselves protected speech, and have little to do with paradigmatic cases of censorship or suppression of dissent.

Second, the debate on online speech is confused by the fact that government regulation is subject to First Amendment law while private platform "moderation" is largely governed by section 230 of the Communications Decency Act. The First Amendment applies to platforms to immunize them from scrutiny, not to impose direct responsibility on them as potential censors. If anything, therefore, the problem is that the combined effect of First Amendment law and section 230 largely leaves Twitter free, or you could say uncensored, to do anything it wants with content circulating on the platform, with no legal restrictions or penalties for its failures to protect users.

Third, and importantly, even the most well-intentioned of First Amendment and media scholars often understand speech governance as a libertarian-infused cocktail whose aim is to limit interferences with the otherwise free interactions between private entities. Interferences in the form of laws, but also design interventions or other behavioral interferences, are thus seen with suspicion as potential threats.

This switch mentality on regulation, the idea that either there is regulation or there is not, hides the fact that speech is always structured by legal and other forces that affect the way people communicate even in the absence of direct visible governmental or platform intervention. An important example of this is the way algorithms operate by modifying and adjusting the content users can access, or the way political parties influence voters through political microtargeting.

There is consequently no such thing as "un-regulated" free speech. Trump's speech is subject to many structuring and shaping factors; Twitter's nudges are not the only way in which his expression is mediated, transformed, or, if you prefer, "censored."

Clarifying the Stakes

Ideas, Speech and Platform Power

The marketplace of ideas as it is conceived is not unregulated; it is neither apolitical nor a utopian trading space devoid of power dynamics. Laws shape much of modern life: they structure the way we live and form social ties, the way we are brought up and receive an education, the way we work, and thus the way we form opinions and express ourselves. Technology platforms shape the way we interact and communicate, the way we access content and maintain friendships, the way we read the news and engage with politics, and the way we produce and consume content.

The fact that a piece of online speech is taken down, whether a law bans it or whether a platform down-ranks it to make it less visible, is less important than the idea that online and offline speech is always shaped through various pre-existing and evolving forces. Unless these structuring factors are brought to light and openly taken into account, our understanding of speech remains an empty and meaningless hologram.

Online platforms also are not neutral, apolitical, or passive. They look like open and welcoming portals for unrestrained sharing, but in fact they are far from inactive and benefit from the content and data of their users to target advertisements and make a profit. Users on these platforms are nodes on a network that is not flat or fully horizontal but a network of asymmetrical nodes. These nodes (or actors) are treated by platforms very differently depending on how many friends or followers they have (how many other nodes they are connected to), whether they are politicians, how much traction or click-worthiness their content has, and how many eyeballs they can attract so that ads can be viewed and generate a profit for platform owners.

Ultimately, speech is about how power is channeled through existing laws, technological infrastructures, and forms of social life to empower or disempower, connect or disconnect persons. Being given an equal right to speak does not amount to being able to speak or be heard equally. The voices that are heard are the voices of the loudest actors: those whose confidence is highest, whose rhetoric is most simplistic, whose position in society is most entrenched. The loudness of Trump, however, should not prevent us from seeing the equally if not more powerful megaphone that Twitter has at its disposal. Neither Trump's nor Twitter's voice is in jeopardy. As shown by Zeynep Tufekci, both Twitter and Trump (and Facebook) have an interest in cooperating to maintain a relatively addictive, polarized, and polluted information ecosystem that harms citizens, particularly vulnerable persons.

The real battle, therefore, is not about which of them is the good cop or the bad cop, or which of them should be regulated or censored; it is about how to ensure that the speech and fundamental rights of the least powerful and least loud in society are protected rather than curtailed and abused by the concerted efforts of online platforms and vote-hungry politicians. The question is not how to secure the right to speak freely without interference, but how to ensure that the combination of pre-existing and novel interferences does not disproportionately favor powerful actors at the expense of the powerless.

Content Moderation and Kaleidoscopic Transparency

The tools that online platforms such as Twitter, Facebook or Google's YouTube have at their disposal to constrain, moderate and regulate online speech are numerous and diverse. The Facebook Oversight Board has captured legal and media scholars' imagination for some time, being portrayed as "one of the most ambitious constitution-making projects of the modern era." In practice, it is a self-regulatory effort with limited practical relevance. In spite of the promise it holds, it shields Facebook from other forms of accountability over moderation and algorithmic governance of online communications, and it entrenches unhindered platform discretion and unaccountability over aspects of speech over which the Board has no competence: disinformation, algorithmic governance, and WhatsApp, to mention but three.

Twitter's two interventions on Trump's tweets discussed in this piece are both peripheral, cosmetic interventions. They might both be characterized as "nudges." Nudges have been defined as: "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates." A fact-checking link or a public interest notice added to a tweet is an informational intervention that modifies behavior without mandating it: it does not prevent people from accessing a tweet but simply discourages interactions with it. Removing the ability to retweet Trump's tweet goes in the same direction, though Twitter retained the ability to retweet with comment. All these interventions are ways to shape the visibility and spreadability of objectionable political speech without facially appearing to interfere with it.

Some important distinctions should be made between Twitter's two interventions: adding a fact-checking link and adding an obscuring public interest notice. A fact-checking label is prone to politicization because it can easily and convincingly be contested and portrayed as the intervention of a biased, false "arbiter of truth." Making a tweet less visible through a public interest notice, by contrast, reducing its spreadability and hiding its engagement metrics (retweets, likes), is a less easily politicizable intervention. It is more effective in that it reduces the impact of a harmful tweet without entirely preventing the public from accessing it.

The advantage of Twitter's second intervention, the obscuring public interest notice, at least from Twitter's perspective, is that it is more opaque and therefore less transparent. Twitter can act behind the scenes to single-handedly de-prioritize certain content or show it less than other content, and there are currently no mechanisms for the public to know how it has acted.

These interventions seem efficacious against Trump and other powerful actors who can contest Twitter’s interventions, but they can be harmful if they interfere with a serious discussion on the need for further transparency and regulation of platforms’ algorithmic business models. While platform transparency has been central to privacy advocacy for some time, addressing it has kaleidoscopic effects, with different actors wanting to know and hide different aspects of how platforms operate.

Few powerful political or commercial actors have an interest in ensuring meaningful transparency that aligns with the public interest. Trump, for instance, has an interest in holding tech companies to account so as to better use their affordances to microtarget potential voters in his political campaign. A couple of bills limiting microtargeting are now in the pipeline, but more is needed. Further, the discussion may be in jeopardy given calls to increase surveillance to fight the spread of COVID-19.

Thus, even if the public interest notice deserves praise, it remains a cosmetic intervention that fails to address the larger stakes of platform speech. Twitter remains free to continue engaging in opaque algorithmic content ranking and spreading, while benefiting from the publicity that a few instances of Good Samaritan behavior can give it. Trump remains free to post any inflammatory content he wants, knowing that those who want to see it can access it. His Executive Order does not solve anything and creates new problems.

Instead of asking whether or not Twitter did the right thing with Trump’s tweets, we should think of structural regulation that goes beyond mere cosmetic voluntary interventions of platforms and that instead tackles the profit motives and capitalist infrastructure that foundationally shape online speech today.

Beyond Free Speech Circularity

The debate on online speech, when understood as a question about the tensions between Twitter and Trump’s respective behaviors, is circular. Failing to take into account the important legal, technological, social, and behavioral forces that structure online speech, particularly the question of platform power and the question of algorithmic opacity, means failing to adopt a meaningful understanding of what speech consists of in the twenty-first century. Through his configuration of speech as a triangle, Jack Balkin recognizes the plurality and multi-angularity of online speech but maintains an atomistic and goal-directed perspective on how speech can be addressed. For him, platforms, governments and individuals are opposing atomistic forces that exert force or pressure against one another.

In reality, however, platforms, governments, and individuals act in ways that are often aligned and create a polluted, harmful, and polarized information ecosystem. Dualist and triangular understandings of speech are circular because they tend to center blame and legal attention on how each of these groups of actors reacts to and constrains the others, obscuring the overall context in which these different actors act in concert to shape communications. These views thus circularly contribute to an understanding of online speech as being “unsolvable.”

Instead, we should start by recognizing that speech is political and socially constructed, and that free speech is not about being able to speak in a legal, institutional, and technological vacuum but about being empowered to speak and communicate, through meaningful legal, institutional, and technological constraints, as an equal who is worthy of respect and given a real voice in democratic, cultural, and social life. This requires acknowledging the structuring role of platform and political power in shaping and channeling speech, scrutinizing both Trump's and Twitter's power head-on, and holding it to account as a larger, co-extensive conglomerate of power.

Second, the task is not to enable Twitter to nudge users as much as it wants, nor is it to regulate Twitter so that it can act as a “neutral” marketplace of ideas that can prioritize the voices of the loudest speakers. The task instead is to limit political communications’ reliance on profit-motivated infrastructures that channel speech in ways intended to maximize user-engagement, addiction, behavioral targeting, and polarization. How to do this is an important question that Jennifer Cobbe and I have started addressing in this piece. The task is to imagine a different platform ecosystem that does not assume privatized control as the default but instead envisages and enables the possible reconfiguration of speech and communication as a public service that is not primarily enabled, driven and shaped by profit considerations.

This will require treating some platforms as utilities, common carriers, or essential facilities; it will require re-envisioning infrastructure ownership and control so that more power and control over data and content is distributed to persons. It may also require diversification and the co-existence of a plural ecosystem of platforms of varying shapes, sizes, and regional and topical relevance. It will require further democratizing platform governance, creating public charters, and ensuring accountability to users. Finally, it will require re-imagining the platform ecosystem as a space designed not to maximize profits but to enhance human interaction, social and cultural fulfillment, and political empowerment.

If it were not for portrayals of free speech as a zone of non-interference or as a network of atomistic poles, and if it weren’t for the mischaracterization of regulation as a switch, re-envisioning the platform ecosystem in a richer and more capacious manner would not appear so daunting. As Trump and other leaders around the world strategically weaponize digital infrastructures in preparation for elections and other political campaigns, a broader and richer perspective on online speech has become urgent.
