Section 230: Mend It, Don’t End It

The First Amendment Provides a Path to Peace for Social Media

David Sacks
Craft Ventures

--

Once again, the CEOs of Twitter, Facebook, and Google found themselves in the hot seat on Capitol Hill last week, grilled by the Senate Commerce Committee. As I predicted in my last post “RIP Section 230,” Twitter and Facebook’s ham-handed censorship of the NY Post has cemented a bipartisan consensus that these tech behemoths are too large and powerful, pose a threat to democracy and free speech, and need to be reined in for the good of America. To that end, a growing chorus of legislators on both sides want to repeal or gut Section 230, the law that governs speech on tech platforms.

Repealing Section 230 would have far-reaching consequences, not just for Big Tech but for numerous small innovators and future startups — startups that may never exist without the liability shield that 230 provides. Before we throw the baby out with the bathwater, we should understand why the law has been so important to the development of the open internet and why its repeal would likely backfire by leading to more censorship, not less.

In lieu of repeal, lawmakers should fix Section 230 by bringing it into conformity with First Amendment principles. It’s silly for Twitter and Facebook to be improvising content moderation policies that please nobody when a 230-year-old body of case law for governing questionable speech already exists. Relying on a venerable external standard like the First Amendment would get Twitter and Facebook out of the hot seat, not just on this election eve but in future political cycles. For that reason alone, Dorsey and Zuckerberg should welcome this proposal. By embracing First Amendment obligations, they would lose little and gain a lot.

The Law That Made the Internet

Section 230 is a surprisingly concise provision tucked into the 1996 Communications Decency Act, a law that sought primarily to limit obscenity over the internet. Elegant in its simplicity, Section 230 derives its power from two brief provisions:

1) The first provision, 230(c)(1), declared that internet companies that host user-generated content (UGC) would be treated as distributors and not publishers. This matters because it means they cannot be sued over the content found on their platforms. To translate this distinction to a real-world context: a magazine is a publisher, whereas the newsstand on which the magazine is sold is a distributor. If a magazine contains content that is obscene or defamatory, the publisher, not the newsstand, faces liability. Newsstands that present a wide variety of content options wouldn’t last long without this protection, and likely neither would UGC sites. In addition to user content on Twitter and Facebook, Section 230 has enabled videos uploaded to YouTube, blogging platforms like Medium, comments posted on Reddit and boards all over the internet, and customer reviews on Yelp and Amazon. Even services like Gmail would be imperiled if they became liable for any email sent over their servers.

2) The second provision, 230(c)(2), stipulated that these content distributors would also not be subject to liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The intention of this second provision was to provide “Good Samaritan” protection for internet services to limit obscene material, and it illustrates how the road to hell is paved with good intentions. Remember, this was written in 1996. The moral panic of the time centered on internet porn and, to a lesser extent, violent video games and films like Natural Born Killers. Worried that the nascent internet was becoming a cesspool, lawmakers gave internet services the power to limit objectionable material without losing their status as distributors. Without such a safe harbor, lawmakers feared, online services might be incentivized to avoid making any editorial judgments at all, even to remove obscenity. But in the process, lawmakers inadvertently handed future Facebooks and Twitters the power to censor anything they wanted (via the overly broad phrase “otherwise objectionable”), even if (quite remarkably) the speech was protected by the First Amendment.

A quarter-century later, we can see the results of these provisions. The first provision of Section 230 proved visionary, enabling a plethora of valuable consumer services and making possible the open internet we all take for granted. The second provision, if its goal was to remove smut from the internet, was at best a limited success, and it created the censorship debate we face today. Big Tech companies wield enormous power over the distribution of media while maintaining wide discretion to suppress it — which is to say that they can have their cake and eat it too.

Repeal Will Backfire

This issue came to a head when Twitter used its power to censor the NY Post’s Hunter Biden story, provoking Republican legislators to respond by threatening to repeal Section 230. As Senator Tom Cotton warned, “if you want to act like publishers, we will treat you like publishers.” But subjecting Twitter and Facebook to the constant threat of publisher lawsuits will not make them embrace unfettered speech on their platforms. Instead, they will vet user content even more closely for potential liability. Anything deemed the slightest bit objectionable will be screened out or blocked.

Moreover, if these companies are deemed to be publishers, they will have every right to flex their editorial power. For example, National Review doesn’t have to run a piece by Alexandria Ocasio-Cortez advocating for the Green New Deal, nor does The Nation have to run a piece by the Trump administration on the need to build a wall. Why? Because they are publishers, not distributors. If Facebook and Twitter are forced to become publishers, they will be free to indulge their biases like any editorial page. The result will be more, not less, viewpoint discrimination.

An Old Solution to Our New Problem

The better approach would be to clean up the second provision of 230 by removing the phrase “otherwise objectionable,” and by altering the final clause, which presently reads “whether or not such material is constitutionally protected.” This is the language that hands all the censorship power to the tech platforms — power that, by this point, even they probably wish they didn’t have. Instead, let the final clause read, “as long as such material is not constitutionally protected.” This vests the power to regulate speech in an independent authority that enjoys a broad bipartisan consensus and has been carefully honed over two centuries of Supreme Court case law. Yes, I’m talking about the First Amendment.

People know the First Amendment protects a broad freedom of speech, but they might be surprised to learn that it doesn’t mean “anything goes.” Indeed, the Supreme Court has carved out at least nine exceptions where speech can be regulated, and several of them happen to dovetail with the speech that gives social media companies their worst headaches. The most relevant exceptions to protected speech are: fighting words, incitement, false statements of fact (including defamation and fraud), illegally obtained material, and obscenity. Let’s look at each of these because, in sum, they provide a fairly robust content moderation policy for social media (a sketch of how these categories might be operationalized follows the list):

  • Fighting Words. In the 1942 case Chaplinsky v. New Hampshire, the Supreme Court held that speech is unprotected if it constitutes “fighting words,” which are defined as speech that “tends to incite an immediate breach of the peace,” through the use of “personally abusive” language that “when addressed to the ordinary citizen, is, as a matter of common knowledge, inherently likely to provoke a violent reaction.” Certainly, all racist, misogynist, homophobic, and other slurs could be folded into this category and prohibited by social media sites.
  • Incitement. In Brandenburg v. Ohio (1969), the Supreme Court held that advocating the use of force is unprotected speech under the First Amendment when it is “directed to inciting or producing imminent lawless action” and “likely to incite or produce such action” — a stricter standard than the earlier “clear and present danger” test. On social media sites, where it is so easy to organize actions that spill over into the physical world, banning the advocacy of violence on incitement grounds would generate little controversy, and internet platforms should be able to do it.
  • False Statements of Fact. The Court explicitly held in 1974 that “there is no constitutional value in false statements of fact.” Social media platforms can therefore ban provably false information under First Amendment standards. But “provably false” is the key ingredient here. The Hunter Biden laptop story certainly has some sketchy origins, but no one has debunked the authenticity of the emails or connected their provenance to a hack. The leaders of Twitter and Facebook may believe their editorial judgment is superior to that of the New York Post, and they could well be right. But sitting in judgment of the editorial decisions made by other media entities really does make them publishers and not distributors. The Post is hardly a blog that launched last month; it was founded in 1801 by Alexander Hamilton. It is entitled to a presumption of accuracy in its reporting until further fact-checking proves otherwise, a point that Dorsey and Zuckerberg all but conceded at the hearing. Social media companies may reserve the right to prohibit falsehoods but must be extremely careful how they do it, in line with First Amendment cases defining this concept.
  • Defamation. Defamation is the unjust harming of another’s reputation, and it is a specific type of false statement unprotected by the First Amendment. Social media sites should have every right to take down this sort of content. While defamation can be difficult to prove in court, it is unrealistic to hold social media sites to this evidentiary standard when a user files a complaint. Rather, the “good faith” requirement of Section 230 is all that should be required, allowing social media sites to take down content that is facially harmful to another user’s reputation when that user complains. Bullying and harassment could similarly be dealt with under this standard.
  • Fraud. Another kind of false statement is fraud. There is no right under the First Amendment to impersonate someone else or to deceptively amplify one’s views through fake accounts. Social media policies can certainly require authenticity. If the Russians want to post propaganda on social media, let the GRU create its own clearly identified account for doing so. Few people would follow them or believe their misinformation if they knew its true source. Nobody seems to be advocating that Twitter take down the Wolf Warrior of Beijing or Iran’s Ayatollah, despite posts that are highly offensive to American ears, so the problem with “foreign interference” seems not to be the content per se but rather the disguising of its true origin. Even under a permissive First Amendment policy, there would be no license for foreign interference on social media.
  • Illegally Obtained Materials (Hacking). In the same way that the First Amendment is not intended to protect fraud, it is not intended to protect theft. A series of Supreme Court and federal circuit court cases have created a spectrum of legal liability for journalistic use of illegally obtained materials. Bartnicki v. Vopper sets forth three requirements that must be met for First Amendment protection: (1) the media outlet played no role in the illegal interception; (2) the media received the information lawfully; and (3) the issue was a matter of public concern. The NY Post’s story appears to meet the Bartnicki test, which is partly why Dorsey’s justification for censoring it on anti-hacking grounds fell so flat. If Twitter wants to discourage hacked material, that’s a reasonable objective, but it should reconstitute its policy in compliance with Bartnicki rather than making up its own test.
  • Obscenity. Obscenity has a famously shifty and subjective definition, summarized by Justice Potter Stewart in 1964 as “I know it when I see it.” The 1973 case Miller v. California established “contemporary community standards” as part of the basis for determining when speech is obscene and therefore not entitled to First Amendment protection. Facebook and Twitter are communities entitled to establish prevailing standards of their own. Indeed, when it comes to pornography, Facebook has adopted a hard line against it, while Twitter has taken a more libertarian view. For all the criticism each endures, no one seems to challenge their right to set their own policies in this area. They should adapt their user rules to conform to reasonable community standards that are in line with the First Amendment.
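For readers who want to see the shape of such a policy in engineering terms, here is a minimal sketch, in Python and purely for illustration: a rule table keyed to the exception categories above. Every name in it is hypothetical, and it deliberately assumes away the hard part, namely classifying a post into a category in the first place; that is exactly where the case law, and human judgment, would do the real work.

```python
# Hypothetical sketch only: a rule table keyed to the First Amendment
# exception categories discussed above. No platform works this way today;
# the category names, actions, and classification step are all assumptions.

from dataclasses import dataclass
from enum import Enum, auto


class Category(Enum):
    FIGHTING_WORDS = auto()      # Chaplinsky: personally abusive epithets
    INCITEMENT = auto()          # Brandenburg: imminent lawless action
    FALSE_STATEMENT = auto()     # provably false statements of fact
    DEFAMATION = auto()          # facially reputation-harming falsehoods
    FRAUD = auto()               # impersonation and inauthentic accounts
    ILLEGALLY_OBTAINED = auto()  # material failing the Bartnicki factors
    OBSCENITY = auto()           # judged by the platform's community standards
    PROTECTED = auto()           # everything else


@dataclass
class Post:
    author: str
    text: str
    category: Category  # assumed output of separate human/automated review


def moderate(post: Post) -> str:
    """Return a moderation action under a First Amendment-style policy."""
    if post.category is Category.PROTECTED:
        return "allow"  # protected speech stays up by default
    if post.category is Category.FALSE_STATEMENT:
        # "Provably false" is the key ingredient: route to fact-checking
        # rather than removing on suspicion alone.
        return "flag_for_fact_check"
    return "remove"  # the remaining categories are unprotected speech


if __name__ == "__main__":
    print(moderate(Post("user1", "...", Category.PROTECTED)))        # allow
    print(moderate(Post("user2", "...", Category.FALSE_STATEMENT)))  # flag_for_fact_check
```

The point of the sketch is that once the standard is external, the policy layer itself is almost trivial; all the contested judgment lives in the classification step.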

Taken together, this set of policies would give Twitter and Facebook substantial latitude to restrict almost all of the problematic speech on social networks on solidly First Amendment grounds. So why not do it? Dorsey and Zuckerberg endured a nearly four-hour lesson last week about how impossible it will be for them to ever satisfy the contradictory demands of Republicans and Democrats while claiming the power to censor according to their own policies. Yet they repeatedly argued for retaining this power and opposed the removal of “otherwise objectionable” from Section 230’s speech-exclusion language.

Instead, they repeatedly promised greater “transparency” about their censorship decisions and the workings of their algorithms. Several senators pointed out that they made similar promises at a congressional hearing in June of 2019 and, sixteen months later, had little progress to report. More importantly, transparency isn’t the main issue; censorship is. Sure, more transparency about how their algorithms impact the flow of content on their platforms would be useful. However, transparent censorship is still censorship, and it is still an appropriation to themselves of a power that no one elected them to possess. Their failure to propose any reform of this power at the hearing was a missed opportunity and left the real concerns unanswered.

A Path to Peace

Fortunately, Dorsey and Zuckerberg will get a second chance to sue for peace when they are hauled before the Senate Judiciary Committee on November 17 to discuss how Section 230 could be meaningfully reformed. This could be an important moment for the entire internet, and Dorsey and Zuckerberg should meet it by offering the terms of a peace treaty in which they pledge the following:

  1. We will protect any speech that is protected under the First Amendment. Rather than trying to improvise our own policies with respect to speech, we will focus on operationalizing First Amendment principles based upon established Supreme Court case law.
  2. We will double down on our authenticity rules and procedures. To say anything on our platforms, you must really be who you say you are. We will crack down on impersonation, fake accounts, “sock puppets,” and any kind of foreign interference.
  3. We will provide users with the tools they need to curate content for themselves. For instance, we will give users the ability to hide or delete offensive comments on their posts. Having users rather than us regulate speech preserves neutrality, because every user has an equal opportunity to decide what speech they do and don’t want to encounter on our platforms. User curation will reduce offensive content without the need for censorship.
  4. We would welcome further guidance from the Supreme Court as to when and how hate speech can be regulated. If the Supreme Court were to define a segment of hate speech not currently covered by the existing First Amendment exceptions and declare it unprotected, then we would regulate that speech on our platforms according to its guidance. But we realize that, as the de facto public square, we are better off adopting the First Amendment as our standard than trying to improvise our own, and we don’t want to arrogantly substitute our judgment for that of the Court, which has a more than two-century head start on us in grappling with difficult speech issues. The Court’s history shows that there will always be hard cases when applying First Amendment principles, but at least we can all agree on what those principles are.
  5. While we are proposing to abide by First Amendment guidelines voluntarily, we understand if you insist on binding our hands by altering the language in Section 230(c)(2) to require it. This will put the First Amendment in its rightful place as the arbiter of acceptable speech in the public square. Once those changes are made, we hope you will agree that the rest of Section 230 has been a boon to online innovation and diversity that is worth keeping in place. To paraphrase Bill Clinton on the subject of affirmative action, “mend it, don’t end it.”

Conclusion

Such a set of proposals would do much to improve the perception of Facebook and Twitter, in Washington and throughout the United States. In the short run, it would lower the political temperature and tamp down calls for Section 230 repeal. In the long term, it would make life as a social media platform so much easier. No longer would the C-suites of these companies have to white-knuckle it through every election cycle, worried that some decision they made or didn’t make would get them blamed for the end of democracy. Yes, politicians will still “work the refs,” and ambiguous cases will present themselves, but if the First Amendment always gets the last word, everyone will have to respect that. What is the point of Twitter and Facebook taking so much flak for creating their own standard of protected speech when one already exists that has been chiseled into the granite of American custom, tradition, and law? However this election turns out, the political seas will remain turbulent for some time. For internet platforms adrift in the storm, grabbing onto the First Amendment could be a life preserver.

--

David Sacks
Craft Ventures

General Partner and Co-Founder of Craft Ventures. Previously: Founder/CEO of Yammer. Original COO of PayPal.