War on Words: Big Tech Versus Individual Speech

Holly Toschi
Published in Zero Equals False
Jul 18, 2019

47 U.S.C. § 230, a provision of the Communications Decency Act (CDA), has enabled Big Tech platforms to all but evade liability in a host of lawsuits for many years. But what happens if the bubble of Section 230 protection is finally pierced and Big Tech becomes subject to state scrutiny? What if these entities are legally recognized as de facto public forums operated by private companies?

Is Big Tech’s right to free speech more important than an individual’s? Should our courts continue to permit Big Tech’s monopolization of free speech and expression by narrowly interpreting statutes and case law when it comes to these companies? Or should the government attempt to regulate social media by enacting legislation that would hold these entities accountable, or by amending Section 230 to expressly prohibit bias?

Unless American jurisprudence adapts its application of current technology and First Amendment law to the expanding role of search engines and social media in the media market as a whole, this trend of viewpoint discrimination and chilling of free speech will contravene the very purpose of our Constitution and a free press.

At present, social media is a somewhat lawless sector, one in which deregulation has enabled these companies to establish a monopsonistic conglomerate whose governance over user expression is antithetical to democratic principles.

Those opposed to antitrust intervention rely on the elemental rationale that Big Tech should remain exempt because these platforms are made available to consumers/users for free. Simply put, no cost, no harm.

Proponents, however, recognize there is a cost to users: social media’s monetization of personal data, exploitation of user privacy rights (e.g., the Cambridge Analytica scandal), and dominance over user speech and expression. The acquisition of user information is quantifiable; in fact, it is a very valuable commodity that permits these companies to earn billions in revenue annually. Further, the absence of competitors comparable to sites such as Google and Twitter in terms of audience reach allows Big Tech to draft Terms of Service agreements that offer little benefit to users. Moreover, these guidelines offer users little or no recourse in the event a platform unilaterally decides to suspend or deplatform an account, or to change its terms of use, often without warning.

In response to growing concerns about censorship, particularly among conservatives and alternative media, the government and President Trump are putting increased pressure on Big Tech to prove that these platforms enforce their terms of service fairly and without bias toward any user. The growing threat these sites pose to the First Amendment has roused rumblings of regulatory intervention directed at the Big Tech trifecta of Google, Inc., Facebook, Inc. and Twitter. Yet despite repeated assurances of objectivity and fairness from Big Tech executives Sundar Pichai, Mark Zuckerberg and Jack Dorsey, respectively, increasing content and viewpoint discrimination continues to cast doubt on the motives of these sites.

Furthermore, such blatant targeting of content that challenges Leftist dogma makes it appear that these sites are not making a good faith effort to enforce community guidelines uniformly.

In March 2019, the Congressional Research Service published “Free Speech and the Regulation of Social Media Content.” The paper provides extensive insights and exhaustive research concerning the prospective legal limitations of applying First Amendment laws to Big Tech.

Perhaps the two biggest challenges in pursuing claims against these platforms for directed content and viewpoint discrimination are proving that such activities violate both the provisions and the spirit of Section 230, and that they in turn trigger First Amendment scrutiny. But the latter is a double-edged sword, and its blade has often cut in favor of Big Tech. The courts have routinely held that these digital platforms have a right to free speech. Therefore, even if federal laws were put in place to regulate internet content, Google, Twitter, et al. can argue under the First Amendment (and have argued) that the methods these companies utilize (i.e., algorithms) to collate search engine results and monitor user content, respectively, are analogous to editorial decision-making. Thus, even if these programs merely organize third-party content, the output qualifies as “speech.”

A series of lawsuits have been filed against search engine Google and social media Goliath Facebook based on purported First Amendment violations. These companies have typically prevailed; however, the majority of these rulings were based primarily on section 230 immunity, not the merits of the Plaintiffs’ First Amendment claims.

The prefatory question in determining whether the First Amendment applies is whether an ISP creates or develops content or merely serves as a neutral conduit for third-party posts. But a critical element of Section 230 of the CDA is determining whether a service provider has violated subsection (c)(2), which “immunizes only an interactive computer service’s ‘actions taken in good faith.’ If the publisher’s motives are irrelevant and always immunized by (c)(1), then (c)(2) is unnecessary.”

Courts typically rely on Section 230(c)(1) to dismiss a lawsuit, but in certain instances they have addressed the facts alleged under (c)(2), primarily based on whether a plaintiff was able to show that Google, for instance, acted in bad faith in one of the following ways: by falsely alleging a user violated Google’s policies (Darnaa, LLC, 2016 U.S. Dist. LEXIS 152126); by selectively enforcing its policies (Spy Labs LLC, 2016 U.S. Dist. LEXIS 143530); or by enacting a policy that was “entirely pretextual” (Id.).

Overall, Section 230(c)(2) has been invoked when a website has restricted or removed content, as in the following cases: when Google removed an app from the Google Play Store (Smith v. Trusted Universal Standards in Elec. Transmissions, Inc., 2011 U.S. Dist. LEXIS 26757); when Google removed select websites from its search results (e-ventures Worldwide, LLC, 2017 U.S. Dist. LEXIS 88650); and when YouTube removed a video from its platform (Darnaa, LLC at *25).

In circumstances where a lawsuit survives the Section 230 threshold, the question of whether a platform’s actions will be examined under the First Amendment follows.

Several scholars contend that the dissemination of information and collation of search engine results on web-based platforms are indeed a form of speech and therefore entitled to protection under the First Amendment. To date, the courts have tended to agree with this logic. However, it is important to note that existing precedent is mostly based on cases that have challenged the legalities of Google’s alleged manipulation of search engine results.

For example, in Search King, Inc. v. Google Technology, Inc., the Court found that Google’s search ranking program PageRank produced “constitutionally protected opinions.” (Id. at 7.) The Court focused on the distinction between “process” and “result.” (Id. at 6.) Although the process of arranging search results via the PageRank algorithm may be objective, the Court was asked to focus on the subjective results, the actual PageRanks, which it found to be “fundamentally subjective in nature.” (Id.) These subjective results constituted individual opinions and thus a form of protected speech.
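To make the Court’s “process versus result” distinction concrete, below is a minimal, hypothetical sketch of the kind of link-analysis computation PageRank popularized. This is not Google’s actual code; the toy graph, function name, and parameters are illustrative assumptions. The procedure is mechanical, but the scores it emits amount to a relative judgment about which page matters most, and it is that output the Search King court treated as opinion.

```python
# A minimal, illustrative PageRank-style sketch (not Google's production code).
# The process below is purely mechanical; the result is a ranking, i.e. a
# judgment about which page is most important.

def pagerank(links, damping=0.85, iterations=50):
    """Compute scores for a toy link graph.

    links maps each page to the list of pages it links to.
    """
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_ranks = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * ranks[page] / len(outlinks)
                for target in outlinks:
                    new_ranks[target] += share
            else:
                # Dangling page: distribute its rank evenly across all pages.
                for p in pages:
                    new_ranks[p] += damping * ranks[page] / n
        ranks = new_ranks
    return ranks


if __name__ == "__main__":
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    # Running the same deterministic process on the same graph always yields
    # the same ordering: the "opinion" about which page matters most.
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```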

Then, in Langdon v. Google, Inc., the plaintiff similarly claimed Google had manipulated its search results. The Court again ruled in Google’s favor. In contrast to Search King, the Court’s dismissal was not based only on Section 230 immunity; it also reasoned that the First Amendment guaranteed Google both the right to speak and the right not to speak. Google’s role as a publisher enabled it to discern “whether to publish, withdraw, postpone, or alter content.” (Id.; see Zeran v. America Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Thus, in Langdon, the court compared compelling Google to carry the plaintiff’s advertisements to the prohibited act of requiring schoolchildren to salute the flag (see W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624 (1943)).

The foregoing cases explain why the courts analogize Google’s algorithmic arrangement of search engine content to a newspaper editor’s decisions about what information to print (or not to print). But a fundamental issue concerning ISPs and the courts is that many of the relevant cases are dated; thus, the application of the law does not reflect the impact these sites have on our daily lives here and now.

A critical consideration is whether cases in which courts determined that a certain type of scrutiny was unwarranted or inapplicable to the Internet medium would be decided the same way in today’s climate. Given the formidable presence of social media in current times, the courts might have to reconsider how the rule of law should be applied under similar circumstances.

Without question, access to media content has shifted from traditional sources such as The New York Times to Big Tech platforms. Twitter, for example, is arguably the most popular source for news globally. The reigning legacy media such as The Times and The Washington Post, and even broadcast and cable news, have been dethroned by Internet-based sites that provide users with a vast amount of material from myriad sources in a matter of seconds. The availability of search engines has also simplified retrieval of archived material. Further, Twitter hosts an extensive number of dissident media outlets and citizen journalists, sources that prior to the establishment of Big Tech platforms were far scarcer and more difficult to access.

Hence, Twitter’s undeniable popularity as a globally recognized conduit for the exchange of news and information, particularly government-related information, has led lower courts to consider its role as a public forum when examining potential First Amendment violations.

Notwithstanding a lack of available case law, scholars have weighed in on the question of what constitutes social media “speech.” Drawing analogies to various First Amendment cases, they have developed a variety of scenarios in which social media platforms could be subject to state scrutiny, including the legally recognized concept of the “company town.” Premised on the Supreme Court case Marsh v. Alabama, this approach would require the government to weigh the property rights of an owner to engage in protected speech against the expressive rights of its users. Simply put, the courts would be required to examine the expression rights belonging to private entities such as Twitter, Google, etc. against those of the people who use this “private property” to circulate content.

The Marsh scenario takes into consideration the role web platforms play with respect to their user public. Twitter has repeatedly stated that its mission is to facilitate free speech for its users. Because its primary function, both in theory and in design, is to act as a digital public square, the factors the Court considered in Marsh would apply. Just as citizen Marsh was allowed to exercise her constitutional right to distribute religious material, without a permit, on a privately owned block in the center of a company town, all social media users should be allowed to circulate content and viewpoints via social media (exclusive of an incitement to violence or a threat of physical harm to another) without the threat of censorship. While this reasoning is oversimplified, it is illustrative of a potentially viable approach to counter the increasing incidents of platform bias and suppression of user content on social media.

If the government assumes regulatory authority over social media, however, such intervention poses a greater quandary — how would the courts balance free speech rights?

On a fundamental level, though, what qualifies as “speech”? Should algorithms designed to collate and organize search results of third-party content, like those employed by platforms such as Google, Inc., be recognized as constitutionally protected “speech”? Is publication of another source’s content, in and of itself, comparable to an individual viewpoint? Are proprietary programs designed to suppress certain user content (i.e., “hateful” content that violates Twitter’s Community Guidelines) entitled to the same protection as the content of a tweet?

Regarding Google, First Amendment scholar Eugene Volokh contends that the role of social media is to host, collate, and even modify user content, and that such functions, whether human or computerized, constitute speech. Similarly, First Amendment scholar Eric Goldman opines that such publication decisions are protected by the First Amendment. Researcher and author Tim Wu, however, deviates from these views, offering the counterargument that the mere act of indexing search results does not qualify as protected speech and noting that the content itself must belong to the speaker in order to trigger First Amendment protection.

Wu’s position essentially challenges existing case law wherein the courts have concluded that Google’s computerized programs represent editorial decision-making and thus are a legally recognized form of speech. In Search King, Google prevailed because the Court regarded every search result its proprietary technology yielded as a separate opinion.

Under Langdon, just as newspaper publishers must decide what news to publish or not to publish, Google’s machinery requires human input in its design; the essence of that machinery is human judgment. Hence, Google is acting as a publisher every time an algorithmic “rule” filters and publishes content, regardless of its origin.
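As an illustration of that point, here is a minimal, hypothetical sketch of an automated moderation rule; the blocklist, function names, and matching logic are my own assumptions, not any platform’s actual system. The filter runs without human intervention, but every choice embedded in it (which terms to block, how to match them, what to do on a match) is a human editorial judgment.

```python
# A hypothetical sketch of an automated moderation "rule" (not any platform's
# real system). The filter runs mechanically, but a human chose the blocklist,
# the matching logic, and the action taken, so every automated removal traces
# back to a human editorial judgment.

import re

# Human-curated terms: this list is an editorial decision, not a neutral fact.
BLOCKED_TERMS = ["badword1", "badword2"]

def violates_policy(post_text):
    """Return True if the post matches any human-chosen blocked term."""
    lowered = post_text.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
               for term in BLOCKED_TERMS)

def moderate(posts):
    """Publish only the posts that pass the human-designed rule."""
    return [post for post in posts if not violates_policy(post)]

if __name__ == "__main__":
    feed = ["a perfectly ordinary post", "this post contains badword1"]
    print(moderate(feed))  # only the first post is "published"
```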

If we agree with Wu’s reasoning, imagine the courts having to distinguish whether original content or third-party content is protected speech, or whether Google or Twitter, as private platforms treated as state actors, have First Amendment rights at all. Under Marsh, private-turned-public entities Google and Twitter may no longer qualify for First Amendment protection. Additionally, as I mentioned in my last blog, there is no hate speech exception to the First Amendment. However, social media is determined to create a new digital vernacular wherein the terms these executives associate with “hateful” content must be eliminated. Notwithstanding the lack of such a distinction in the Constitution, government oversight of these platforms might just change this fact. Judges misapply the rule of law regularly; the Supreme Court reversed almost 79% of the Ninth Circuit decisions it reviewed between 2010 and 2015.

Could these entities in turn successfully assert a right to protected speech against the federal government? The Supreme Court has long maintained that the government itself does not have First Amendment rights. That position raises the question of how the courts would justify a sudden 180-degree reversal of course from existing precedent in Big Tech lawsuits.

Talk about irony: invoking the public forum argument under Marsh as a means of protecting user speech against bias and censorship would consequently open the proverbial door for a digital platform to assert similar First Amendment defenses. This would touch off a digital war on words.

Moreover, supporting the creation of legislation to regulate social media entities accordingly means inviting the government to become even more intertwined with our Constitutional rights. The Constitution is intended to serve as a framework prohibiting the government from interfering with and infringing upon certain inalienable rights as communicated in the following text:

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.”

Granting the government jurisdiction over Internet speech is another form of oversight that might do more harm than good. In reality, this increased governance might further chill speech. To assume that such management would lead to less partisan bias and better guarantee protections for users would be naive. Human nature is inherently biased. Computer algorithms are created and designed by humans; human judgment is a primary reason why the courts have regarded Google’s search engine results as analogous to editorial opinions. Thus, despite Google’s and Twitter’s rhetorical proclamations to the public and the government defending their respective objectivity, even the courts recognize that an element of subjectivity is present within the ISP domain.

The express purpose of Internet-based speech platforms is to facilitate the dissemination of individual thoughts and ideas with greater ease. Where accessibility to news content was previously constrained by the amount of available space in a given newspaper or written publication, web-based communication is almost without bounds. The respective bandwidths of Google and Twitter are vast; however, this increased space poses challenges in devising mechanisms that guarantee free expression for all users.

One way to address such censorship issues within social media is to seek relief from the courts by challenging the validity of the words these companies utter and pursuing breach of contract and unfair competition claims. Contentions of “fairness,” “objectivity,” and “free speech for all” have clear, indisputable meanings that are impossible to misconstrue. When Google and Twitter censor content, deplatform users, and alter terms of service, they are breaching user agreements and acting in bad faith.

If Big Tech platforms can successfully argue that they deserve First Amendment protections, then the user public needs to challenge the courts to address the truthfulness of such speech. But litigation is expensive. Arguably the most persuasive remedy is modifying section 230 to lift the veil of immunity in instances of Big Tech censorship. But again, enactment and enforcement are wholly separate challenges.

So who should be the arbiter of digital media content: The government? The courts? Private companies? Individual users?

Whatever the “fix,” it may clean the wound but not altogether heal it. As long as Big Tech continues to be dominated by biased Leftists, and our judicial system and government continue to focus on partisanship and capitulate to Big Tech’s promotion of a predominantly Liberal narrative, the digital medium will continue to ceaselessly censor conservative users.

Pending a resolution, Big Tech and individuals will remain stationed on a proverbial battlefield known as the First Amendment, fighting a war on words.
