How this week’s Supreme Court cases could shape the future of the internet

Paolo Fornasini
5 min read · Feb 18, 2023


In my first blog post of the year, I made a series of forecasts for 2023. Six weeks into the year, the first of those forecasts is rearing its technocratic head.

I predicted with 80% certainty:

“Section 230 gets another look in Congress”

But wait, Section 230 is going in front of the Supreme Court!

Yes, but — as I wrote:

“February is the month to watch SCOTUS, then Congress. Two major big tech cases (Twitter and Google) are finally scheduled for arguments, both regarding Section 230 and its implications for platform safety and responsibility. This is sure to kick up the usual dust of quotes and sound bites from members of Congress wanting a new look at the statute — and I think they might get it. With Congress in a stalemate, this might be the one area where bipartisan committees can get something done next year.”

Those arguments I mentioned will both be heard by the Supreme Court this week. However, it’s unlikely that the panoply of outstanding questions in the realm of social media, platforms, and AI will be resolved right away. And that’s exactly why tech policy watchers should train their eyes on Congress in the months to come. So what exactly do I think will happen?

First, a quick primer: what is Section 230?

Section 230 is a US law that says websites, like social media sites or online forums, are not responsible for what their users post. If someone posts something harmful or illegal, the website itself cannot be sued for that post. The provision is meant to encourage websites to host user content without fear of being sued for the actions of their users, and it has been instrumental in the growth of user-generated content online. However, critics argue it gives websites too much latitude to allow harmful or illegal content to spread without consequences.

The Tip of the Iceberg: What the cases say on the surface

Case 1: Gonzalez v. Google (Feb 21)

The plaintiffs in this case allege that YouTube’s content recommendation algorithm played a major part in the radicalization of the ISIS adherents who carried out the 2015 Paris attacks, in which their daughter was killed. Traditionally, under Section 230, Google (YouTube’s owner) would be protected as a platform and would not bear responsibility for videos that its users create and that it surfaces to other users.

Case 2: Twitter v. Taamneh (Feb 22)

This appeal addresses the question of whether online platforms such as Twitter can be held liable under the federal Anti-Terrorism Act for hosting and recommending terrorism-related content posted by their users. Like the Google case, it has drawn scrutiny because it challenges the broad immunity that websites have enjoyed under Section 230. Twitter v. Taamneh does not formally turn on Section 230, but it keeps the responsibility of platforms in the spotlight.

What this means in 2023

The Gonzalez v. Google case touches on a larger and more complex issue: whether Section 230 applies to the algorithmic recommendations made by online platforms. The lower courts held that the statute shielded Google from responsibility for the videos its users posted, but the question now before the Supreme Court is whether that immunity also covers the platform’s own recommendations of harmful or illegal content.

This is an important issue because the algorithmic recommendations made by online platforms, including those powered by AI, have a significant impact on the content that users see and engage with online. If platforms can be held responsible for the harmful or illegal content their recommendations surface, it would have far-reaching implications for how online platforms operate and for the legal protections they enjoy under Section 230. This is a particularly interesting and challenging issue for AI search, as there is very little legal precedent to draw from.

The political backdrop of all of this is that going after Big Tech is one of the very few issues on which Democrats and Republicans can find common ground. Recent reporting suggests President Biden and a bipartisan group of senators are doing what they can to highlight the case and weigh in against the platforms. As mentioned above, I expect that the Supreme Court’s ruling will not satisfy the government, and that the other branches will capitalize on this moment to launch new debates on the matter. All the while, the DOJ’s investigations into virtually all of the major tech companies continue, primarily on antitrust grounds.

But the implications of action would be massive.

While unlikely, if the Supreme Court were to hold that platforms can be held liable for hosting and recommending terrorism-related content posted by their users, it could lead to increased pressure on tech companies to police content more aggressively. But even if the companies prevail in court, legislative action that does not allow for nuance could be catastrophic. Given the cost of policing content, the likely outcome of shifting the policy burden onto platforms is a mix of extremely stringent blanket policies and AI-based enforcement.

But AI-based enforcement brings problems of its own. First, its ability to detect problematic content is imperfect, at least for now. Second, AI itself is a sort of platform. What will it mean if ChatGPT steers its users down paths similar to the ones we have seen on YouTube and Twitter? And what happens when you mix the two?

Finally, as is always the case in platform policy, this has major implications for how governments around the world intervene in tech. If the US sets a precedent that platforms must implement governmental policy more stringently, leaders around the world may respond by demanding similar accommodations. And companies will have to comply — even when those accommodations are highly political, or even unethical.

In sum, the implications of these cases for AI and for governments’ ability to intervene in tech platforms are complex, and resolving them will require a delicate balance between addressing harmful content and protecting free speech and the independence of tech platforms. The Supreme Court’s decision is not poised to strike that balance, so this week will be yet another opportunity for Congress to come up with a more nuanced approach. However, given its history of inaction and bumper sticker politics, I’m not sure how optimistic I am that a useful solution will emerge any time soon.


Paolo Fornasini

Founder @ Keye, masters fellow @ Wharton & Lauder Institute, ex-Google