Feb 19, 2020
Sculpture by Tara Donovan. Photo ©2020 Neil Turkewitz.

Some Personal Views on DOJ’s Conference on Section 230: The Internet is Not a Mirror

by Neil Turkewitz

The Department of Justice hosted an event today to begin consideration of the functioning of CDA Section 230 and whether, or how, it might be amended to better reflect the goals underlying its adoption in 1996: principally, to create incentives for platforms to take reasonable steps to prevent the distribution of harmful materials while also ensuring against the imposition of liability that might prevent the development of innovative web-based models of communication and commerce. I took some very sketchy notes, and while they, and the conference, are fresh in mind, I thought I might commit some of this to writing. I will refer here only to the presentations made by a few individuals. I apologize to the other participants: this is not intended, nor should it be understood, as suggesting that your interventions were somehow less valuable or memorable. It is more that my note-taking was quite terrible, and I think I can capture the overall gestalt and the differences of view even by reference to a small subset of speakers.

The DOJ and FBI opened the meeting on a serious and informed note, highlighting how the protection afforded to fledgling companies in the mid-1990s might no longer be appropriate now that those companies were titans of the business world with dominant positions that shaped societies, commerce and culture, and in light of the manifest internet-enabled harms that were broadly evident, harms which prevented us from realizing the potential of the internet to drive prosperity and which undermined public safety. They repeatedly underscored that the government’s obligation to ensure public safety was technology neutral, and that it was unacceptable to leave that mission to for-profit private companies. They noted that there were a variety of elements to be considered in addressing the problems connected to the tech industry, including competition/antitrust, but that the unique exception from ordinary tort principles effected by Section 230 was something that needed to be addressed.

The first panel principally covered the history of Section 230, and how it arose to address the unfairness presented by the disparate decisions in Stratton Oakmont v. Prodigy and Cubby v. CompuServe, in which active moderation led to liability for Prodigy while CompuServe was relieved of any responsibility given its lack of any form of supervision or moderation. This created the perverse incentive of encouraging willful blindness in order to avoid liability, an incentive which ran directly contrary to the interest of the public in preventing the dissemination of harmful materials. There was much discussion of how this tracked and/or diverged from the treatment of the liability of booksellers as opposed to publishers.

But here’s the thing, and this is me talking, not the panelists: this narrative fails to consider two key things. (1) Creating immunity for willful misconduct is not the only way to avoid penalizing knowledge. The differing results in CompuServe and Prodigy did illustrate a problem, namely that actual knowledge was a poor predicate for liability since it created incentives for willful blindness. But rather than exempting platforms with knowledge, as Section 230 did, legislators would have been better advised to expand the basis of liability to actors who knew or should have known of the harmful materials. The addition of an objective standard (should have known) would have removed immunity for platforms acting in bad faith, without regard to whether they had actual knowledge. (2) The parallel of platforms to booksellers makes no sense, and if it made a modicum of sense in 1996, it makes absolutely no sense in 2020. The limited liability of booksellers was based on the observation that it was unreasonable to expect them to know of the defamatory nature of the contents of books unless so advised. What were they to do? Read every book on their shelves? This same logic was applied to platforms in a way that ignored (or failed to see) how technology would affect the ability of distributors to know exactly what they were selling. And as I said, even if technology was not at that point in 1996, it certainly is now.

The second panel featured, and to my mind was dominated by, Professor Mary Anne Franks of the University of Miami, President of the Cyber Civil Rights Initiative. Professor Franks set out a broad and articulate challenge to the operation of Section 230, to the notion of the “marketplace of ideas” frequently employed to defend the status quo, and to the role of a non-regulatory environment in defending free speech. She highlighted that the free-for-all created by a lack of accountability favors the powerful, and results in a weakening of the public sphere through the withdrawal of the voices of women, minorities and other at-risk communities, reinforcing a “marketplace” consistent with the needs and desires of the powerful, and favoring white supremacy rather than some vague romantic notion of freedom of expression. Freedom must be rooted in experience, not understood merely as a theoretical construct. She powerfully reminded the audience of the very real consequences of the kind of unaccountability engendered by Section 230, and captured the moment in fifteen words (to the best of my recollection): “We are living in the world built by Section 230. It is not the answer.”

St. John’s University Professor Kate Klonick and Matt Schruers (CCIA), largely echoed in the next panel by Neil Chilson, Senior Research Fellow at the Charles Koch Institute, and Professor Eric Goldman of Santa Clara University, then offered what I found to be a very bizarre series of arguments focusing on a few issues: (1) that tech platforms aren’t the actors performing the acts and shouldn’t be required to prevent the bad acts of third parties, consistent with legal traditions protecting Good Samaritans; (2) that tech was merely reflecting the problems in the underlying society and, if anything, was useful in providing transparency and a means to mediate misunderstandings; and (3) that losing Section 230 would create a “moderator’s dilemma” leading to the overuse of takedowns and a consequent narrowing of public-facing discourse.

Since this is already getting long, I’ll address these quickly. (1) and (2) are in fact related, and stem from a failure to observe that platform liability is not predicated on a failure to address the bad acts of third parties, but on platforms’ failure to address their own complicity in the use of their proprietary networks to facilitate and participate in, including through monetization, the propagation of harms. They are not disinterested third parties analogous to someone stopping by to help a person in need, or stepping in to address a crime in the absence of a relationship establishing a duty of care. A company taking reasonable actions within its supply chain to prevent foreseeable harms is not being a Good Samaritan; it is doing what anyone would expect of a company in the delivery of its goods or services.

To me, the most shocking moment of the conference was the recitation, in 2020, of the notion that internet platforms are merely reflective of the underlying societies in which they operate. Of course platforms didn’t introduce sin into the Garden of Eden, but to ignore how platforms use surveillance and micro-targeting to shape the potential of our lives while monetizing our futures is astounding. Back in 2017 I gave Vint Cerf a hard time when he said something along these lines, expressing my view that no one truly believed that anymore; and that was 2017, when we understood vastly less than we do today about how we are manipulated online. And then for participants to add that not only is the internet merely a passive (non-acting) mirror, but that it provides transparency and an opportunity to mediate? Quite instructive in illustrating how far we have to go to establish reform.

Finally, those raising the prospect of the “moderator’s dilemma” suggested that it would fundamentally change the internet as we know it. I find this partly sad and partly amusing. It misses the central point that changing the internet as we know it is not an unintended consequence; it is the very object of this exercise. It is part of a reexamination of internet governance rules to promote a different kind of discourse, and to establish a better understanding of the importance of reflecting our interdependence through a more accountable ecosystem. And, at least for me, and as noted by Professor Franks, the idea is not to bring notice and takedown into the CDA; it is already failing in the DMCA, where it has been a feature since 1998. The goal here is to create normal rules of liability so that a duty of care to operate in a reasonable manner changes the very design and operation of platforms. Defenders of the status quo want to make this about moderation, but moderation is after-the-fact and designed to fail. We are hopefully on the cusp of something much more fundamental and empowering.