The Breakdown: Daphne Keller explains the Communications Decency Act

Daphne Keller discusses CDA 230, the executive order, and content moderation

Berkman Klein Center
Berkman Klein Center Collection
Aug 13, 2020


Daphne Keller (left) joined Oumou Ly (right) for the latest episode of The Breakdown.

In this episode of The Breakdown, Oumou Ly is joined by Daphne Keller of the Stanford Cyber Policy Center to discuss Section 230 of the Communications Decency Act, content moderation and Big Tech platforms, and the events that propelled them into the spotlight in recent months.

Section 230 of the Communications Decency Act, or “The Twenty-Six Words That Created The Internet,” provides platforms with legal immunity for third-party speech, including speech by their users. It came under fire recently when President Donald Trump signed an executive order to limit protections for social media companies.

Watch the latest episode of The Breakdown from The Berkman Klein Center

Read the transcript, which has been lightly edited for clarity.

Oumou Ly (OL): Welcome to the Breakdown. My name is Oumou; I’m a staff fellow on the Berkman Klein Center’s Assembly: Disinformation program. Our topic of discussion today is CDA 230, Section 230 of the Communications Decency Act, otherwise known as “The Twenty-Six Words That Created The Internet.” Today I’m joined by Daphne Keller from the Stanford Cyber Policy Center.

Thank you for being with us today, Daphne. I appreciate it, especially since this conversation will help unpack what has turned out to be such a huge and potentially consequential issue for the November election, and certainly for technology platforms and all of us who care and think critically about disinformation.

One of the first questions I have for you is a basic one: can you tell us a little bit about CDA 230 and why it’s referred to as “The Twenty-Six Words That Created The Internet”?

Daphne Keller (DK): Sure. So first, I strongly recommend Jeff Kosseff’s book, which coined that “Twenty-Six Words” phrase. It’s a great history of CDA 230, and it’s written as a narrative.

So intermediary liability law is the law that tells platforms what legal responsibilities they have for the speech and content posted by their users. And US law falls into three buckets. There’s a big bucket, which is about copyright, and there the law on point is the Digital Millennium Copyright Act, the DMCA, and it has this very choreographed notice-and-takedown process.

The other big bucket that doesn’t get a lot of attention is federal criminal law. There’s no special immunity for platforms for federal crimes. So if what you’re talking about is things like child sexual abuse material, material support of terrorism, those things, the regular law applies. There is no immunity under CDA 230 or anything else.

And then the last big bucket, the one we’re here to talk about today, is CDA 230, which was enacted in 1996 as part of a big package of legislation, some of which was subsequently struck down by the Supreme Court, leaving CDA 230 standing as the law of the land. And it’s actually a really simple law, even though it’s so widely misunderstood that there’s now a Twitter account, Bad Section 230 Takes, just to retweet all the misrepresentations of it that come along.

“Broadly speaking, the Internet could not exist the way we know it without something like CDA 230”

But what it says is, first, platforms are not liable for their users’ speech. Again, this is for the category of claims that are covered, so it isn’t about terrorism, child sexual abuse material, et cetera. But for things like state law defamation claims, platforms are not liable for their users’ speech. And the second thing it says is that platforms are also not liable for acting in good faith to moderate content, that is, to enforce their own policies against content they consider objectionable.

And that second prong was very much part of what Congress was trying to accomplish with this law. They wanted to make sure that platforms could adopt what we now think of as terms of service or community guidelines and could enforce rules against hateful speech or bullying or pornography, or just the broad range of bad human behavior that most people don’t want to see on platforms. And the key thing that Congress realized, because they had experience with a couple of cases that had just happened at the time, was that if you want platforms to moderate, you need to give them both of those immunities. You can’t just say you’re free to moderate, go do it. You have to also say, and if you undertake to moderate but you miss something and there’s defamation… still on the platform or whatever, the fact that you tried to moderate won’t be held against you.

And this was really important to Congress because there had just been a case where a platform that tried to moderate was tagged as acting like an editor or a publisher and therefore as facing potential liability. That’s the core of CDA 230. And I can talk more, if it’s helpful, about the things people get confused about, like the widespread belief that platforms are somehow supposed to be neutral, which is —

OL: Well, would you please say something about that.

DK: Yeah. Congress had this intention to get platforms to moderate. They did not want them to be neutral; they wanted the opposite. But I think a lot of people find it intuitive to say, well, it must be that platforms have to be neutral. And I think that intuition comes from a pre-Internet media environment where everything was either a common carrier, like a telephone, just interconnecting everyone and letting everything flow freely. Or it was like NBC News or The New York Times — it was heavily edited, and the editor clearly was responsible for everything the reporters put in there. And those two models don’t work for the Internet. If we still had just those two models today, we would still have only a very tiny number of elites with access to the microphone.

And everybody else would still not have the ability to broadcast our voices on things like Twitter or YouTube the way we can today. And I think that’s not what anybody wants. What people generally want is to be able to speak on the Internet without platform lawyers checking everything they say before it goes live. We want that. And we also, generally, want platforms to moderate. We want them to take down offensive or obnoxious or hateful or dangerous but legal speech. And so 230 is the law that allows both of those things to happen at once.

OL: Okay. Daphne, can you talk a little bit about the two different types of immunity that are outlined under CDA 230, which we call in shorthand (c)(1) and (c)(2)?

DK: Sure. So in the super shorthand, (c)(1) is immunity for leaving content up, and (c)(2) is immunity for taking content down.

OL: Yeah.

DK: So most of the litigation that we’ve seen historically under the CDA is about (c)(1). It’s often really disturbing cases where something terrible happened to someone on the Internet: speech defaming them was left up, or speech threatening them was left up, or they continued to face things that were illegal. So those are cases about (c)(1): if the platform leaves that stuff up, are they liable? The second prong, (c)(2), just hasn’t had nearly as much attention over the years until now. But that’s the one that says platforms can choose their own content moderation policies, that they’re not liable for choosing to take down content they deem objectionable as long as they are acting in good faith.

And that’s the prong that does have this good faith requirement. And part of what the executive order attempts is to require companies to meet the good faith requirement in order to qualify for immunities. If someone can show that you are not acting in good faith, then you lose this much more economically consequential immunity under (c)(1) for content on your platform that’s illegal.

And the biggest concern, I think, for many people is that this economically essential immunity would depend on some government agency determining whether you acted in good faith. That introduces just a ton of room for politics, because my idea of what’s good faith won’t be your idea of what’s good faith, won’t be Attorney General Barr’s idea of what’s good faith. And so having something where political appointees, in particular, get to decide what constitutes good faith, with all of your immunities hanging in the balance, is really frightening for companies.

And, interestingly, today we see Republicans calling for a fairness doctrine for the Internet, calling for a requirement of good faith or fairness in content moderation. But for a generation, it was literally part of the GOP platform every year to oppose the fairness doctrine that the FCC enforced for broadcast. President Reagan said it was unconstitutional. For decades this was a core conservative critique of big government suppressing speech, and now they have turned around and are asking for state regulation of platforms.

OL: That is so interesting to me, both that and the fact that CDA 230 in so many ways is what allows Donald Trump’s Twitter account to stay up. It’s really, really interesting that the GOP has decided to rail against it.

DK: It’s fascinating.

OL: So just recently, the president signed an executive order concerning CDA 230 pretty directly. Can you talk a little bit about what the executive order does?

DK: Sure. So I want to just start at a super high level with the executive order. In the day or so after it came out, I had multiple people from around the world reach out to me and say, this is like what happened in Venezuela when Chavez started shutting down the radio station.

It has this resonance of a political leader trying to punish speech platforms for their editorial policies. And that — before you even get into the weeds — that high-level impact of it is really important to pay attention to. And that is the reason why CDT (the Center for Democracy and Technology) in DC has filed a First Amendment case saying this whole thing just can’t stand. We’ll see what happens with that case.

But then, getting into the weeds, there are things in the executive order that I think don’t work. There are four other things in the executive order that might be big deals. One is that the DOJ is instructed to draft legislation to change 230. So eventually, that will come along, and presumably it will track the very long list of ideas that are in the DOJ report that came out this week. [Editor’s note: this interview was recorded on June 18, 2020] A second is that it instructs federal agencies to interpret 230 in the way that the executive order does.

And this is a reading that I think is not supported by the statute: it takes the good faith requirement and applies it in places it’s not written into the statute. Nobody’s quite sure what that means, because there just aren’t that many situations where federal agencies care about 230, but we’ll see what comes out of that. A third is that Attorney General Barr of the DOJ is supposed to convene state attorneys general to look at a long list of complaints. And if you’re an Internet policy nerd, it’s just all the hot-button issues… are fact-checkers biased? Can algorithmic moderation be biased? (Well, it can.) How can you regulate that? You will recognize these things if you look at the list.

And then the fourth one, which I think deserves a lot of attention, is that the DOJ is supposed to review whether particular platforms are, quote, problematic vehicles for government speech due to viewpoint discrimination, unquote. And then, based on that, look into whether they can carry federally funded ads. For most platforms, I think the ad dollars part is not that big a deal, but being on a federal government blocklist of platforms with disapproved editorial policies just has this McCarthyist feeling.

OL: Can you talk a little bit about the role of CDA 230 in relation to the business models that the platforms run on?

DK: Sure. So broadly speaking, the Internet could not exist the way we know it without something like CDA 230. And that’s not just about the Facebooks of the world; that’s about everything up and down the technical stack: DNS providers, CloudFlare, Amazon Web Services and other backend web hosting. And also tons of little companies: the knitting blog that permits comments or the farm equipment seller that has user feedback. All of those are possible because of CDA 230. And if you pull CDA 230 out of the picture, it’s just very hard to imagine the counterfactual of how American Internet technology and companies would have evolved.

They would have evolved somehow, and presumably the counterfactual is we would have something like what the EU has, which boils down to a notice-and-takedown model for every kind of legal claim. But the EU barely has an Internet economy of these kinds of companies. There’s a reason that things developed the way that they did.

OL: Yeah. And maybe this isn’t a matter of what you think will happen, since I’m sure we can all agree it’s likely to be the case, but if the liability shield that 230 offers platforms were removed, how would that change the way that platforms approach content moderation?

DK: Well, I think a lot of little companies would just get out of the business entirely. There’s an advocacy group in DC called Engine, which represents startups and small companies, and they put together a really interesting two-pager on the actual cost of defending even frivolous claims in a world with CDA 230 and in a world without CDA 230. Basically, you’re looking at 10 to 30 thousand dollars in the best-case scenario for a case that goes away very, very quickly, even now. And that’s not a cost that small companies want to incur. And there are all these surveys of investors saying, I don’t want to invest in new platforms to challenge today’s incumbents if they’re in a state of legal uncertainty where they could be liable for something at any time. So I think you just eliminate a big swath of the existing parts of the Internet that policymakers don’t pay any attention to.

You make them very, very vulnerable, and some of them go away, and that’s troubling. And you create a lot of problems for any newcomers who would actually challenge today’s incumbents and try to rival them as serious user-generated content hosting services.

For the big platforms, for Facebook, for YouTube, they’ll survive somehow; they’d change their business model. They probably… the easiest thing to do is to use their terms of service to prohibit a whole lot more and then just take down a huge swath of content, so you’re not facing much legal risk.

OL: Yeah. It’s hard to imagine living in that kind of a world.

DK: It is, it is.

OL: Yeah. Thank you so much for joining me today, Daphne. This was a great and enlightening conversation, and I’m sure our viewers will enjoy it.

DK: Thank you for having me.

OL: Thanks.
