WTF is CDA: an interactive tool about one of the most powerful laws shaping the internet

--

WTF is CDA is a project of the 2019–2020 Assembly Student Fellowship at the Berkman Klein Center at Harvard University. One of three tracks in the Assembly Program, the Assembly Student Fellowship convenes Harvard students from across a variety of schools and disciplines to tackle the spread and consumption of disinformation. Assembly Student Fellows conducted their work independently with light advisory guidance from program advisors and staff.

WTF is CDA was developed by Sam Clay (GSD ’20), Jess Eng (College ’21), Sahar Kazranian (HDS ’20, HKS ’20), Sanjana Parikh (Berkeley Law ’20), and Madeline Salinas (HLS ’20).

Design credit: Jenny Fan

Senator Josh Hawley wants to end the tech industry’s “sweetheart deal” with Section 230 of the Communications Decency Act (CDA 230) by passing legislation that expands publisher liability. President Trump recently signed an Executive Order taking aim at CDA 230 and followed up with a series of tweets, including: “REVOKE 230.” Meanwhile, advocates at the Electronic Frontier Foundation fear that changing CDA 230 will strengthen big tech’s power over online speech. But what exactly is CDA 230, and what does it have to do with the spread of misinformation and disinformation on the internet today?

WTF is CDA is an interactive tool about CDA 230, one of the most powerful laws shaping the Internet as we know it and the subject of rampant misunderstanding and polarized debates. The project aims to stimulate informed discussion and deliberation about CDA 230. To do this, we created a website that offers a short quiz alongside a digestible history of the law’s genesis and subsequent judicial interpretations. By answering a series of questions on issues such as voter suppression, defamation, encryption, and health misinformation, individuals can interrogate their assumptions about what content should and should not be allowed on Internet platforms.

The project was created by Harvard students of law, public policy, divinity, design engineering, and folklore and mythology during the 2020 Assembly Student Fellowship, part of the broader Assembly program organized by the Berkman Klein Center for Internet & Society.

Section 230 of the Communications Decency Act (CDA 230) has been referred to as “the twenty-six words that created the Internet” by cybersecurity law expert Jeff Kosseff. Credit: New York Times

Section 230 of the Communications Decency Act (CDA 230) has been referred to as “the twenty-six words that created the Internet” by cybersecurity law expert Jeff Kosseff. Since the law’s enactment in 1996, these few but powerful words have shaped user behavior and freedom of speech on platforms in enormous ways. Yet there remains a lack of awareness about CDA 230 and its impact on misinformation and disinformation on popular social media platforms. A review of the spirit in which CDA 230 was enacted is an important starting point for our project.

Prior to CDA 230, Internet platforms faced a difficult tradeoff when deciding whether to remove questionable content posted by users. If a website decided to remove undesirable user content, the law treated it as a “publisher,” meaning it could be sued for the same reasons any publisher could be sued (e.g., defamation, misrepresentation). The drafters of CDA 230 thought that treating platforms as publishers created the wrong incentives because it discouraged platforms from cleaning up their websites.

A law that helped develop the Internet as we know it and gave birth to our big tech platforms was intended to allow websites to moderate content in good faith and without fear of litigation

CDA 230 was intended to realign these incentives by immunizing platforms from lawsuits whether they voluntarily removed objectionable content or left posts up that, for whatever reason, they did not address. The law offered this protection to “Good Samaritans”: platforms that screened and removed objectionable content in good faith.

Misinformation and disinformation have long been regular features of the Internet. But their prevalence and impact on society and online discourse acquired a particular urgency in 2016. Since then, the question of platform responsibility in content moderation has been the subject of many polarizing debates.

Despite the brevity of CDA 230, there remain questions about platform accountability in specific cases where an individual experienced harm or a public authority was unable to enforce the law. Has CDA 230 gone too far by providing platform immunity in exchange for “good faith” voluntary efforts?

Upon completing the quiz, individuals are assigned one of three identities that capture their position on CDA 230: Absolutist, Reformer, or Abolitionist. Individuals can redo the quiz to further interrogate their knowledge and assumptions about the law and its application.

CDA 230 shields platforms from being treated as “publishers” or “speakers” of user-provided content. Proponents of CDA 230 argue that, without this shield, platforms would be sued into bankruptcy by any individual harmed by online content who decided to take legal action. These proponents also point to internet exceptionalism: the digital marketplace of ideas and products that we enjoy today would never have emerged and flourished without a law such as CDA 230. Removing CDA 230 immunity, supporters argue, would thus undermine the exercise of freedom of speech and erase progress within the modern digital economy.

On the other hand, opponents claim that CDA 230 immunity should not apply in certain situations. What if a platform prompts users to enter certain information that leads to housing discrimination? What if an individual suffers because a platform fails to take down health misinformation? How can a public authority enforce consumer protection laws on a platform like Amazon that does not engage in anything resembling “speech”? Who should have the power to hold platforms legally responsible for the activity of users on their website — users, the government, both, or neither?

An example case in the WTF is CDA interactive

WTF is CDA walks individuals through hypothetical cases to help them grapple with debates on platform immunity

The WTF is CDA project addresses these questions and more. With its detailed quiz and comprehensive history of CDA 230 and its case law, the project hopes to achieve two objectives:

  1. Allow individuals to experiment with the normative sensibilities defining debate around the law, thus highlighting the power that platforms’ moderation practices have over users; and
  2. Inspire more informed discussions on questions of responsibility and reform in an age of rampant misinformation and disinformation.

Help us raise awareness and foster informed debate on the 26 words that created the Internet as we know it

Complete the quiz and learn about CDA 230 in greater depth. We especially welcome your feedback on how to make our tool more user-friendly and impactful, as well as how it can fit into a wider set of educational tools aimed at expanding awareness of CDA 230.

WTF is CDA is the result of many iterations and support from others

A final note, on gratitude. We’d like to thank Jenny Fan for designing the WTF is CDA website. We are immensely grateful for the advice and support of Jonathan Zittrain, Danielle Citron, Tejas Narechania, Jeff Kosseff, Oumou Ly, John Bowers, Hilary Ross, Zenzele Best, and the 2020 Assembly Student Fellowship Cohort. We feel incredibly blessed for this learning experience and look forward to thoughtful discussions on how we can build a healthier, safer Internet together.

Visit the team’s website here, and learn more about Assembly: Disinformation at www.bkmla.org.

--


Assembly at the Berkman Klein Center

Assembly @BKCHarvard brings together students, technology professionals, and experts drawn to explore disinformation in the digital public sphere.