The First Amendment, the Information Ecosystem, and Artificial Intelligence

Jess Miers
Published in Golden Data
Apr 15, 2020 · 15 min read
Belva A. Lockwood (1830–1917), a trained attorney who was the first woman admitted to practice before the U.S. Supreme Court.

Starting this summer, I’ll have the awesome opportunity to do research at the UCLA Institute for Technology, Law and Policy. My research will focus on the intersection of right of publicity law and Section 230. Recently, I sat down with my research advisor, John Villasenor, to conduct this interview for a Computer Science class at Northwestern. You can listen to the discussion here. Below are the notes I prepared in response to his questions.

1. One of the freedoms conferred by the First Amendment is that “Congress shall make no law ... abridging the freedom of speech.” Of course, that means that the First Amendment constrains the government, not private companies such as Twitter, Facebook, Yelp, YouTube, and so on. Yet in discussions about user-posted content on these and other sites, the First Amendment often comes up, frequently raised by the companies themselves. Can you explain a bit why freedom of expression is such a core part of the conversation regarding content moderation, particularly given that internet companies are not under any constitutional obligation to provide it?

In my opinion, there are two camps when it comes to the First Amendment’s role in online content moderation. The first camp is the Internet companies and their right to what we often refer to as editorial discretion. On the other side, we have people who equate this editorial discretion with censorship. These are usually the folks who get upset when an Internet service, like Twitter for example, removes their content (or speech) or blocks their ability to continue contributing to the online conversation.

Distinguishing between editorial discretion and censorship is a crucial part of the First Amendment debate. It’s important to remember that our constitutional rights, like the First Amendment’s freedom of speech, only go as far as restricting the government from infringing them. So when we talk about censorship, we’re really only talking about instances when the government, or a state actor acting on the government’s behalf, restricts or curbs our speech or our ability to speak in some way. Notably, Internet services are neither the government nor state actors acting on its behalf, so as users we technically have no First Amendment speech rights against these services.

Hence, when an Internet service makes a decision to restrict content, it’s doing so in its editorial capacity. Just as traditional print media like newspapers have the right to control their own presses, Internet companies enjoy that same freedom when it comes to moderating content. And if the government were to start regulating these content moderation decisions, it would be violating the service’s First Amendment right to editorial discretion.

Importantly, these services arguably need their editorial discretion even more than traditional print media do, simply because of the massive amounts of user-generated content they encounter and must deal with on a regular basis. That’s why the First Amendment is such a core part of this discussion.

2. I think it’s safe to assume that most engineering students haven’t studied CDA 230. Could you give a brief primer on what it is, how it arose, and the role it plays?

Section 230 of the Communications Decency Act essentially says that websites are not responsible for third-party content. So, for example, if I were to post something defamatory about you on Twitter or Facebook, Section 230 would shield Facebook or Twitter from any liability for my post. Now, there are some exceptions, such as federal criminal law. Anything that can be prosecuted by the DOJ is outside the scope of 230’s protection (e.g., child pornography).

Importantly, Section 230 also shields these services from liability for their content moderation decisions. So if a site decides to take down lawful-but-awful content, or even if a site decides to ban a user entirely, Section 230 will protect those decisions as well.

There are two cases from the 1990s that teed up Section 230’s enactment. The first case involved a famous Internet access/service provider called CompuServe. CompuServe sold Internet connectivity and also hosted subscriber-based content on various forums. One of these forums hosted a daily newsletter ironically known as Rumorville. The rest is probably obvious: a defamatory article appeared in Rumorville, and a lawsuit was launched against CompuServe. Interestingly, the court held that CompuServe was not liable for the defamatory posting because CompuServe never took steps to moderate the content on its forums, so there was no way CompuServe could have known about the post.

Fast forward to another case involving the same sort of fact pattern, but against a CompuServe competitor known as Prodigy. Same story: Prodigy was sued over a defamatory posting. However, the result was troublingly different. Here the court held that Prodigy was liable for the posting because Prodigy took steps to keep its service “family-friendly.” In doing so, Prodigy engaged in heavy content moderation, which meant that Prodigy had an opportunity to know about the defamatory content.

Those two cases spawned what we today call “the Moderator’s Dilemma.” Internet services were stuck. On the one hand, they could escape liability like CompuServe by taking a completely hands-off approach to the content on their sites. You can imagine how that might look today if websites decided to allow any and all types of content. On the other hand, Internet services could attempt to moderate their services knowing that if anything slipped through the cracks, they could be held liable like Prodigy. Today, this might look like Twitter pre-screening every Tweet before it’s posted. Imagine what that would do to your overall online experience. Both options were terrible, and Congress recognized the need for a middle ground where services could moderate the content on their sites as they saw fit for their users. Hence, Section 230 was born.

Now you might be thinking: didn’t I just get done explaining that the First Amendment already protects their editorial discretion? What does Section 230 add? In reality, Section 230 is a lot stronger than the First Amendment. For starters, think back to traditional print media. The First Amendment doesn’t protect a newspaper from a defamation lawsuit the way Section 230 protects websites. This has to do with the degree of control newspapers can exercise over the information they disseminate compared to websites. Newspapers can pick and choose the content that’s published to their mass audiences. But most websites don’t have that luxury (and don’t really want that luxury) because it would change the immediacy and information-at-our-fingertips experience that defines the online world today.

But there are other reasons why Section 230 is stronger than the First Amendment. One major reason is Section 230’s inherent predictability. Lawsuits are really, really expensive and time-consuming. Section 230 gives services the necessary confidence to enter the market without the fear of being sued. The First Amendment does not offer that same guarantee (those suits can be long and costly). This is the main reason why Section 230 is often credited with creating the Internet: without it, we wouldn’t have most of the services we rely on so heavily today.

If you’re interested in diving deeper into 230 and the First Amendment, I highly recommend a paper by Section 230 expert and my advisor at SCU Law, Professor Eric Goldman, entitled “Why Section 230 Is Better Than the First Amendment.”

3. How has the climate regarding CDA 230 changed in the last year or two? My impression is that there has been increasing pressure, from both the political right and the political left, to weaken CDA 230. What has spurred that dialogue, what are some of the arguments in favor of weakening CDA 230, and what are some of the rebuttals to those arguments?

Congress, and even many of us users, have fallen out of love with the Internet. Now perhaps that sentiment has changed in light of recent unfortunate events, but for the most part, the current bipartisan animosity towards Section 230 is largely due to this burnt-out attitude about the online world.

In some ways, I can’t blame people who feel that way about the Internet. Between fake news and misinformation, the 2016 election, concerns about sex trafficking, the opioid crisis, defective products from online marketplaces, and this general distaste towards Big Tech, the hate towards tech’s “gift” that is Section 230 is not in any way surprising.

Democrats typically disfavor Section 230 because they mistakenly believe it’s a gift that tech companies abuse to shield themselves from all liability without taking on any responsibility. This stems from the sword-and-shield idea: that Section 230 is both a sword for eliminating bad content and a shield for protecting those moderation decisions. Democrats typically think tech companies rely only on the shield without using the sword. We see this criticism when it comes to fighting for sex trafficking victims, protecting children’s online safety, or combating misinformation, especially with regard to our elections.

On the flip side, Republicans typically believe that tech companies rely too heavily on the sword to “censor” conservative speech. Ted Cruz, for example, is infamous for his messaging about how Section 230 requires tech companies to maintain “neutral public forums.” Of course, from our discussion about the First Amendment, we now understand why that’s fundamentally incorrect. There has never been any concrete evidence that tech companies are biased against conservatives, but even if there were, it wouldn’t matter. As private companies, they are free to moderate content as they deem appropriate. And I’m sorry to say this, but oftentimes it’s the conservative rhetoric, such as graphic messaging about abortion, that services deem inappropriate for their service and users.

Regardless, Section 230 is not the cause of the Internet’s awfulness, but rather the solution. Awful content has been around since before Section 230. What has changed is that services now have the tooling to combat that content, and Section 230 affords them the protection they need to do so.

Importantly, Section 230 is also not a gift to Big Tech. It’s a necessary immunity that continues to allow the Internet to thrive and exist in its current form. I think we often forget that the Internet is not solely Facebook and Google. There are so many small Internet services and potential market competitors that rely on 230 to even exist in the marketplace. To them, 230 is not a gift; it’s a lifeline.

The last major argument I hear against Section 230 is that it’s an outdated law drafted for the Internet problems of the 1990s and thus needs an update for modern-day issues. I disagree. This notion assumes that the Internet has reached its “end game” and that we have encountered every possible issue in the realm of user-generated content. But decades of online life have proven that the Internet is dynamic, and so is its content. Every day we find ways to push the envelope on “creative” (or awful) content. The beauty of Section 230 is that it was written with this dynamism in mind, allowing new types of websites and content to thrive throughout the decades. The law’s success stems from the fact that it wasn’t written to address one type of issue at one moment in time. The drafters understood it would be impossible to write a law tailored to cure today’s online harms while simultaneously correcting for tomorrow’s; unless, of course, we’re okay with assuming we really have reached the Internet’s end.

4. Given that Twitter, Facebook, Yelp, YouTube, etc. operate internationally, how should they respond to pressure from governments that might want them to take down content — such as criticism of a political leader — that is perfectly lawful in the US but that might violate laws in another country?

This is an incredibly challenging issue for American Internet services. Companies might consider the “American-centric approach”: the idea that a U.S. company is bound by U.S. law and therefore Section 230 and the First Amendment take precedence. But there are quite a few downsides to this approach. For starters, it might place their overseas employees in risky legal situations. Not to mention, it’s simply not the best look when you’re trying to cater to a global audience.

Instead, Internet companies might first look to their own branding and messaging when it comes to making these tougher calls. For example, Twitter is a service that prides itself on open communication, the availability of free-flowing content and information, and free speech. Twitter might be incredibly hesitant to remove content, regardless of other countries’ laws, if these principles outweigh the “harms” of keeping such content available. Other Internet services might take different approaches. That’s why we often see a schism in the way these services decide to moderate political advertising or, more recently, sensitive health-crisis content.

In an effort to avoid the “American-centric” approach, these companies might also look for guidance to the International Covenant on Civil and Political Rights (ICCPR). The ICCPR is a multinational treaty adopted by the UN to safeguard the civil and political rights of its members’ citizens. The ICCPR is especially important when it comes to takedown demands from restrictive, “non-free” countries like Turkey. Turkey is often one of the biggest offenders when it comes to free expression (where almost any type of content can be deemed illegal), so Internet companies may place more emphasis on keeping content available depending on where a takedown request is coming from.

Internet companies might also consider content moderation strategies that both preserve speech and adhere to another country’s laws. Geotargeting removals, so that content is unavailable in one country yet available elsewhere, is one solution. Unfortunately, though, this approach might exacerbate the “splinternet”: the splintering of online experiences across borders. Internet services should be wary of this approach where it might be crucial for content to remain available in the offended country (think the Twitter-Egypt Revolution).
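
To make the geotargeting idea concrete, here is a minimal sketch, assuming a hypothetical per-item block list keyed by country code. The names and data are invented for illustration; no platform’s actual takedown system works this simply.

```python
# Illustrative only: a toy model of geotargeted removals, where content is
# withheld in the country that demanded the takedown but stays up elsewhere.
from typing import Dict, Set

# Hypothetical map of content IDs to the countries where a takedown applies.
geo_blocks: Dict[str, Set[str]] = {
    "post-123": {"TR"},  # blocked only in the requesting country
}

def is_visible(content_id: str, viewer_country: str) -> bool:
    """Content remains available everywhere except where it is locally restricted."""
    return viewer_country not in geo_blocks.get(content_id, set())

print(is_visible("post-123", "TR"))  # False: withheld where the order applies
print(is_visible("post-123", "US"))  # True: still available elsewhere
```

The tradeoff, as noted above, is that every entry in a table like this fragments the online experience a little further along national borders.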

Internet companies must be thoughtful about all of these approaches to global moderation, especially when lives and democracies are at risk.

4a: (omitted from the podcast) And how does geographic location play into the previous question? For example, from a policy standpoint, if a person in country X posts an opinion about that country’s leader that is unlawful in that country, is that different than if a person in the US posts an opinion about the leader of country X that is unlawful in country X?

We place a lot of value on this type of speech here in the U.S. (now more than ever…). So if someone here in the U.S. were to post an opinion about the leader of country X, it is usually protected speech.

There’s a scarier outcome for citizens of other countries, though. For example, if a person in country X posts an “illegal” opinion about a country X leader, it could very well result in prosecution, incarceration, or in severe cases, death. This is a concern Internet companies do not take lightly, which is often why services refuse to unmask users upon request from government officials.

Internet companies also must be mindful of their employees working overseas in countries with less-than-free ideologies. Though the First Amendment and Section 230 do a good job of shielding United States citizens here, all bets are off in other jurisdictions. There have been many instances where overseas employees were jailed when the Internet company refused to comply with certain demands. This too becomes another variable in the content moderation equation.

5. Misinformation is an enormous concern on the Internet. For example, in recent weeks, the social media companies have been working to take down false medical claims about purported cures for COVID-19. There is also a lot of concern about misinformation in relation to political campaigns. Should addressing misinformation be handled organically through the market, or is there a legislative or regulatory role here?

The market approach is the only option that makes sense, in general, considering the Internet’s nuances. Every service has a unique audience with unique needs. The way each Internet company has responded differently to the controversy over political advertising is illustrative. No regulation can properly and broadly address those needs. Meanwhile, many incentives already exist for Internet companies to resolve issues (like rampant misinformation) themselves without the chilling threat of regulation. One obvious incentive is advertising/monetization. Put simply, advertisers prefer to advertise to general audiences. Awful content threatens this goal, driving advertisers away from degenerate and anti-social websites. Importantly, those advertisers capitalize on users actually seeing their ads. Hence, it’s crucial that Internet companies cater content to, and curtail content for, the right audiences.

In considering a regulatory approach, I always ask: what problem do you aim to solve? It’s important to be specific. For example, if regulators hope to solve misinformation regarding the current health crisis, there are many questions they must be able to answer first in order to write a law that properly addresses the problem:

  • What is health misinformation?
  • Specifically, what is misinformation versus fact when it comes to COVID-19, a virus many of our nation’s health experts and scientists are still learning about themselves?
  • What are the “trusted” sources that should be allowed to generate COVID-19 content? The government? Media? What is a “trusted” news source?
  • What is acceptable COVID-19 content? Are parodies okay? Is “art” okay?

These questions don’t lend themselves to any concrete answers, making regulatory line-drawing that much more complicated. But this is ultimately the problem with content moderation. Every time you try to draw boundaries around what is or isn’t bad content or misinformation, bad actors make moves to expand those boundaries. So should we require our regulators to draw new lines each time?

6. (omitted from the podcast) One of the problems with “misinformation” is that there is really a spectrum. At one end of the spectrum are claims that are clearly pure misinformation (such as false medical cures), but then there are claims that are harder to characterize, such as the exaggerated claims that politicians routinely make. How should companies draw the line?

Internet companies can’t really draw those lines and shouldn’t be expected to do so. What they can do is be thoughtful about the real harms and consequences of bolstering misinformation on their services. Oftentimes, misinformation has the power to destroy lives and democracies. Internet companies should be aware of those harms and work to mitigate where they can.

The political advertising controversy is a fascinating example of the misinformation spectrum. Almost all of the big players have taken unique approaches. Facebook sits at one extreme, allowing all kinds of political advertising with minimal oversight. Google takes a middle-of-the-road approach, attempting to draw lines where appropriate. Twitter sits at the opposite end of the spectrum, banning all political advertising outright. Obviously, there is no right answer and no right place to draw the line.

But that’s also not the point. The interesting question is whether Internet services are getting significantly better at curbing misinformation and whether they’re overall positively or negatively impacting the current information ecosystem.

7. Let’s talk about artificial intelligence. What are some of the ways that you think that AI can be used to improve the integrity and efficiency of the information ecosystem? And what are some of the ways in which it might be used to undermine information integrity?

We’ve made great strides in using AI (and ML) to moderate content. Automated decision-making is crucial for combating intellectual property infringement (YouTube’s Content ID, for example). AI might also be used to help detect sophisticated deepfakes or non-consensual pornography. Natural language processing is often used to flag anti-social conversations before they peak and to restrict hate speech.

But these tools are also greatly limited. While automated decision-making helps tackle the challenge of scale, it’s also prone to error. These errors then lead to the removal and chilling of legitimate speech and content. Unfortunately, we’re not at a point where artificial intelligence is sophisticated enough to understand the context required for some fringe user-generated content. This usually results in content moderation blunders like the removal of the historic “Napalm girl” photo from Facebook. It might also mean unintentionally chilling marginalized communities (like Twitter’s transgender community). In that regard, AI can present more harm to free speech and expression than human moderators.
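
To illustrate why scale and error tolerance are in tension, here is a minimal sketch of a confidence-threshold moderation pipeline. Everything in it is hypothetical: the function names, the toy “classifier,” and the threshold values are invented for demonstration, not drawn from any real service.

```python
# Illustrative only: automated removal for high-confidence cases, human review
# for borderline ones. The "classifier" below is a toy stand-in for an ML model.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def classify_toxicity(post: Post) -> float:
    """Pretend model returning a 0.0-1.0 score; real systems are far more complex."""
    flagged_terms = {"scamcure", "spamlink"}  # invented examples
    return 1.0 if set(post.text.lower().split()) & flagged_terms else 0.1

def moderate(post: Post, remove_above: float = 0.95, review_above: float = 0.6) -> str:
    """Auto-remove only when the model is very confident; otherwise ask a human.

    The gap between the two thresholds is where context matters most, and it is
    exactly where automated systems produce blunders like the 'Napalm girl' removal.
    """
    score = classify_toxicity(post)
    if score >= remove_above:
        return "removed"       # high confidence: automated takedown
    if score >= review_above:
        return "human_review"  # borderline: a person weighs the context
    return "published"         # low risk: content stays up

print(moderate(Post("1", "try this scamcure for the virus")))  # removed
print(moderate(Post("2", "a benign vacation photo caption")))  # published
```

Lowering the removal threshold catches more bad content but chills more legitimate speech; raising it does the opposite. That tuning problem, multiplied by billions of posts, is the limitation described above.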

And as AI becomes more sophisticated at detecting deepfakes, the deepfakes become more sophisticated at escaping AI’s reach. Just as we see in the cybersecurity arena, there will always be a cat-and-mouse game between automated content decision-making and the Internet’s bad actors.

8. There has been a lot of discussion about “deepfakes”, which are videos that are manipulated in very sophisticated ways by AI to portray people doing or saying things they never did or said. What are some of the issues that deepfakes raise for companies, and do you think that existing legal frameworks are sufficient to address them?

Deepfakes only serve to worsen the misinformation challenge that Internet companies already face. As arguably one of the most litigious countries in the world, the U.S. is not lacking in legal frameworks to hold bad actors accountable. If the creator’s identity can be traced, there are plenty of causes of action available to address deepfakes: defamation, publicity rights, false endorsement, privacy, and even intellectual property, just to name a few (plus perhaps a slew of criminal charges, depending on the content).

Of course, the problem is less about the availability of a legal framework and more about the availability of the bad actor. That’s the biggest challenge deepfakes (like all anonymous awful content) might present as victims look to the courts for relief.

9. One of the things that Congress does is legislate. However, Congress can also play an important role in education and raising awareness. What are some non-legislative ways you’d like to see Congress get more engaged in the information ecosystem?

There are many ways Congress might take an active role without regulation. They might threaten legislation. They might resort to shaming tactics. They might open up more investigations (like the Backpage investigation), especially with regard to child sexual exploitation materials. They could hand down guidelines, like the Santa Clara Principles. They might even encourage a greater focus on Internet education and information vetting in our public schools.

Importantly, Congress can also bolster the information ecosystem by incentivizing Internet services to continue experimenting with content moderation techniques and improved transparency. The greatest incentive Internet companies currently have is their Section 230 immunity. By enshrining this immunity instead of threatening it, Congress sends a loud and clear signal to Big Tech and its emerging competitors to continue adapting and evolving. With that, Congress might also further incentivize Internet companies by enacting competition- and innovation-friendly legislation, like a federal anti-SLAPP law.

Regardless, when it comes to addressing online harm, policymakers must be thoughtful about Internet and technology regulation, especially when considering amendments to existing and crucial laws like Section 230. I echo these seven principles for lawmakers regarding Liability for User-Generated Content Online.
