Protecting Free Speech Compels Some Form of Social Media Regulation

Given the profound challenges posed by social media, corrective measures need to go beyond “deplatforming” bad apples.


by Luke J. Matthews, Heather J. Williams, Alexandra T. Evans

Closeup of a phone screen showing social media app icons. Photo by P. Kijsanayothin/Getty Images

Throughout U.S. history, Americans have upheld free speech protections as critical to the defense of democracy. But as an online extremist ecosystem has spread across social media, claims to free speech have also shielded actors that threaten democratic civil society. In just the last two years, social media platforms have been used to organize a seditious conspiracy, to advance white supremacist ideas, and to sow disinformation that weakens both civil society and national security.

Given the profound challenges posed by social media, corrective measures need to go beyond “deplatforming” bad apples. But how can the United States make such a structural change without compromising the democratic tradition of free speech?

The leading policy recommendations can be grouped into three categories: regulation by a federal agency, civil liability for the content platforms carry, or mandatory data transparency and reporting.

None of these would be easy. All would require new legislation that could withstand Supreme Court scrutiny. And yet, without some kind of government intervention, social media companies are unlikely to self-regulate effectively.


Option 1 — regulation — would likely come in the form of a new law from Congress giving authority over social media to an agency housed within the executive branch. Social media might seem to fall naturally within the purview of the Federal Communications Commission (FCC). However, the U.S. Supreme Court ruling in Reno v. ACLU (1997) established, among other things, that internet companies are not broadcasters. Internet companies have also been granted broad immunity under the Communications Decency Act of 1996, the law under which the FCC would most likely regulate extremist content.

Alternatively, regulatory authority might be given to the Federal Trade Commission’s Bureau of Consumer Protection (BCP). The logic here is that social media is a consumer product and could fall under BCP’s jurisdiction over product safety. Such jurisdiction might be too narrow for the problems at hand, however. The need for social media regulation extends beyond protecting users from physical harm, financial abuse, or even threats to mental wellness; rather, it reaches into the realms of civil society and national security. If Congress wants oversight of social media via a federal agency, the most direct — but politically complex — path might be to establish a wholly new agency under a new statute.

A second, frequently discussed, option is to revise Section 230 of the Communications Decency Act (CDA) to establish that social media companies can be held liable in court for harms caused by content on their platforms. Proponents argue that this option would compel greater industry self-regulation — either to proactively avoid the risk of lawsuits or as a consequence of lawsuits and judicial rulings.

But this option would run headlong into another facet of that Reno v. ACLU ruling, which held that internet companies are fundamentally unlike traditional publishers. Because internet companies at the time of the ruling — think early blogging and AOL — did not pick and choose who was authoring posts or typing away in chat rooms, the court said they weren’t legally responsible for their content in the same way as The New York Times or The Washington Post, which decide whether to publish each article.

It may be time to revisit this. The internet today is not that of the 1990s. Although individual users still create content on social media, today it is the companies’ algorithms that are substantially responsible for what is, and is not, amplified on their platforms. At what point should that make them liable? Individuals can sue traditional media companies for damages because the company promulgated the defamatory content. Are social media platforms any less responsible for viral and promoted content?

Finally, Congress could mandate greater transparency by passing laws that require social media platforms to make data available to third-party researchers and evaluators. In theory, scrutiny by independent researchers might encourage social media companies to better contain the spread of malicious information. One specific bill along these lines is the Platform Accountability and Transparency Act (PATA), introduced in 2021 by Senators Chris Coons (D-Delaware), Rob Portman (R-Ohio), and Amy Klobuchar (D-Minnesota). PATA would require the National Science Foundation to establish a review process to approve social media researchers, who would have to be affiliated with academic institutions. Once approved, researchers would be granted access to de-identified aggregate data from social media companies with more than 50 million unique monthly users. Companies that failed to comply with these requests would become liable as publishers under the CDA’s Section 230. Platforms with fewer than 50 million users would face no new requirements under PATA.

Although perhaps useful as a first step, on its own PATA seems insufficient to reduce misinformation, online extremism, and other threats to civil society. As a policy tool, transparency ultimately relies on companies’ will to regulate themselves for the sake of preventing embarrassment, conforming to moral norms, or valuing the public good. This has not worked in other contexts, such as for financial institutions.

These types of transparency requirements, however, could complement the other policy options. Ensuring that social media data are discoverable by a federal regulator or by private parties in civil lawsuits would be useful. Just as banks are required to report currency transactions over $10,000, social media companies could be required to report, with original raw data, the content most amplified by their algorithms each week.


Those against regulating social media companies make a variety of arguments: It could impede free speech. It could put undue burdens on small providers. It could incentivize either overly aggressive content removal or none at all. Repealing Section 230 could cost the American economy money and jobs. Many of these arguments have potential rebuttals. For instance, regulatory requirements could vary by platform size. Or companies could be held responsible for what their algorithms promote, but not for all content posted.

An objective analysis of the costs and benefits of these regulatory options is long overdue in the United States. The teenagers who set up Myspace accounts are middle-aged; the once-fledgling social media companies are now owned by billionaires; and the internet is now older than the youngest member of Congress. For evidence that the status quo isn’t working, and that social media isn’t getting better on its own, just look online.

Luke J. Matthews is a behavioral and social scientist at RAND. Heather J. Williams is a senior policy researcher at RAND and associate director of its International Security and Defense Policy Program. Alexandra T. Evans is a policy researcher at RAND.

This originally appeared on The RAND Blog on October 20, 2023.
