Are There Limits to Online Free Speech?

When technologists defend free speech above all other values, they play directly into the hands of white nationalists.

alicetiara
Data & Society: Points
16 min read · Jan 5, 2017


In November 2016, Twitter shut down the accounts of numerous alt-right leaders and white nationalists. Richard Spencer, the head of the National Policy Institute and a vocal neo-Nazi, told the LA Times it was a violation of his free speech. “[Twitter needs] to issue some kind of apology and make it clear they are not going to crack down on viewpoints. Are they going to now ban Donald Trump’s account?”

Old and new media organizations are scrambling to define acceptable speech in the era of President Trump. But Twitter is in a particularly poor position. The prevalence of hateful speech and harassment on the platform reportedly scared off potential acquirers, including both Disney and Salesforce. The company has dealt with one PR disaster after another, from Ghostbusters star Leslie Jones temporarily leaving the platform after being harassed and doxed, to a viral video of obscene and abusive tweets sent to female sports journalists, to pro-Trump accounts sending Newsweek reporter Kurt Eichenwald animated GIFs designed to induce epileptic seizures. A site once touted as “the free speech wing of the free speech party” is now best known for giving a voice to Donald Trump and #gamergaters.

At the same time, attempts by Twitter and sites with similar histories of free speech protections to regulate the more offensive content on their sites have been met with furious accusations of censorship and pandering to political correctness. This enables the alt-right to position themselves as victims, and left-wing SJWs (“social justice warriors”) as aggressors. Never mind that private companies can establish whatever content restrictions they wish, and that virtually all these companies already have such guidelines on the books, even if they are weakly enforced. When technology companies appear to abandon their long-standing commitment to the First Amendment due to the concerns of journalists, feminists, or activists, the protests of those banned or regulated can seem sympathetic.

How did we get to the point where Twitter eggs spewing anti-Semitic insults are seen as defenders of free speech? To answer this question, we have to delve into why sites like Reddit and Twitter have historically been fiercely committed to freedom of speech. There are three reasons:

  1. The roots of American tech in the hacker ethic and the ethos that “information wants to be free”
  2. CDA 230 and the belief that the internet is the last best hope for free expression
  3. A belief in self-regulation and a strong antipathy to government regulation of the internet

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

To better understand this, we need to start with the origin story of the modern internet. Like many technology stories, it takes place in Northern California.

The Secret Hippie Hacker Past of the Internet

The American internet was birthed from a counter-culture devoted to freedom, experimentation, transparency and openness. While the internet originated with the military — ARPANET was commissioned and funded by the Department of Defense — the early hardware and applications that helped the technology thrive were mostly created by academics, geeks, hackers and enthusiasts.

For instance, in post-hippie Berkeley, early microcomputer aficionados formed the Homebrew Computer Club, freely sharing information that enabled its members to create some of the first personal computers. When Steve Wozniak and Steve Jobs built the first Apple Computer, they gave away its schematics at the Club. (Woz regularly helped his friends build their own Apples.) In the 1980s, people at elite universities and research labs built on ARPANET’s infrastructure to create mailing lists, chat rooms, discussion groups, adventure games, and many other textual ancestors of today’s social media. These were all distributed widely, and for free.

Computer created by hippie geeks. | CC BY-SA 2.0-licensed photo by Ed Uthman.

Today, it boggles the mind that people would give away such valuable intellectual property. But the members of this early computing culture adhered to a loose collection of principles that journalist Steven Levy dubbed “the hacker ethic”:

As I talked to these digital explorers, ranging from those who tamed multimillion-dollar machines in the 1950s to contemporary young wizards who mastered computers in their suburban bedrooms, I found a common element, a common philosophy that seemed tied to the elegantly flowing logic of the computer itself. It was a philosophy of sharing, openness, decentralization, and getting your hands on machines at any cost to improve the machines and to improve the world. This Hacker Ethic is their gift to us: something with value even to those of us with no interest at all in computers.

Early technology innovators deeply believed in these values of “sharing, openness, and decentralization.” The Homebrew Computer Club’s motto was “give to help others.” Hackers believed that barriers to improving technology, contributing to knowledge, and innovating should be eliminated. Information should instead be free so that people could improve existing systems and develop new ones. If everyone adhered to the hacker ethic and contributed to their community, they would all benefit from the contributions of others.

Now, obviously, these ideals only work if everyone adheres to them. It’s easy to take advantage of other people’s work — economists call this the “free rider problem.” And the hacker ethic doesn’t account for people who aren’t merely lazy or selfish, but who deliberately want to cause harm to others.

These beliefs were built into the very infrastructure of the internet. And they worked, for a time. But regulation was always necessary.

Regulating the Early Internet

On April 12, 1994, a law firm called Canter and Siegel, later known as the “Green Card Lawyers,” posted the first mass commercial spam, advertising their immigration law services across roughly 6,000 USENET groups. This inspired virulent hatred. Internet users organized a boycott, jammed the firm’s fax, e-mail, and phone lines, and set an autodialer to call the lawyers’ home 40 times a day. Canter and Siegel were kicked off three ISPs before finally finding a home and publishing the early e-marketing book How to Make a Fortune on the Information Superhighway. Despite these dubious successes, the offense was seen as so inappropriate that Canter was finally disbarred in 1997, partially due to the spam campaign; William W. Hunt III of the Tennessee Board of Professional Responsibility said, “We disbarred him and gave him a one-year sentence just to emphasize that his e-mail campaign was a particularly egregious offense.”

Early internet adopters were highly educated and relatively young with above average incomes, but, more importantly, many of them were deeply invested in the anti-commercial nature of the emerging internet and the “information wants to be free” hacker ethos. Any attempted use of the network for commercial gain was highly discouraged, particularly uses that violated “netiquette,” the social mores of the internet. Netiquette was a set of community-determined guidelines that were enforced through both norms (people explicitly calling each other out when they violated community standards) and technical means (software that allowed users to block other users). Most USENET groups had lengthy Frequently Asked Questions documents where they spelled out explicitly what was encouraged, tolerated, and disallowed. And users who broke these rules were often sharply reprimanded.
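
Those technical means were strikingly simple by today’s standards. A newsreader’s “killfile,” for instance, was little more than a list of senders and subject keywords whose posts were silently dropped before the user ever saw them. Here is a minimal sketch of the idea in Python; the field names and sample messages are illustrative, not taken from any particular newsreader:

```python
# Killfile-style filtering: silently drop posts from blocked senders
# or with blocked subject keywords. (Illustrative sketch only.)
BLOCKED_SENDERS = {"spammer@example.com"}
BLOCKED_SUBJECT_WORDS = {"green card", "make money fast"}

def visible_posts(posts):
    """Yield only the posts that survive the killfile."""
    for post in posts:
        if post["sender"].lower() in BLOCKED_SENDERS:
            continue
        subject = post["subject"].lower()
        if any(word in subject for word in BLOCKED_SUBJECT_WORDS):
            continue
        yield post

posts = [
    {"sender": "friend@example.edu", "subject": "Re: compiler question"},
    {"sender": "spammer@example.com", "subject": "Green Card Lottery!"},
]
print([p["subject"] for p in visible_posts(posts)])  # ['Re: compiler question']
```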

The extent of the backlash against Canter and Siegel’s spam shows not only how egregious a violation of netiquette their messages were, but also that their actions threatened the very utility of USENET. If the newsgroups were cluttered with spam, useful messages would be drowned out, interesting discussion would end, and key members would leave.

Fast forward a few years and email spam had taken over the inbox. Many internet users used dial-up connections, and resented having to pay to download useless messages about Rolexes and Viagra. By the mid-aughts, email, long a backbone of online communication, had become less useful. So technology companies and computer scientists worked together to develop sophisticated email filters. They don’t work all the time, but people who use commercial email services like Gmail or Hotmail rarely see a spam message in their inbox. The problem was solved technically.
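
Most of those filters are statistical at their core. The canonical approach, popularized by Paul Graham’s 2002 essay “A Plan for Spam,” scores a message by how much more often its words appear in known spam than in legitimate mail. Below is a toy naive Bayes sketch in Python with made-up training data; production systems like Gmail’s layer many more signals on top of this basic idea:

```python
import math
from collections import Counter

def word_counts(messages):
    """Count word occurrences across a list of training messages."""
    counts = Counter()
    for text in messages:
        counts.update(text.lower().split())
    return counts

# Tiny, made-up training sets; real filters train on millions of messages.
spam_counts = word_counts(["buy cheap rolex now", "cheap viagra buy now"])
ham_counts = word_counts(["meeting notes attached", "lunch meeting tomorrow"])
spam_total = sum(spam_counts.values())
ham_total = sum(ham_counts.values())

def spam_score(text):
    """Log-odds that a message is spam, with add-one smoothing."""
    score = 0.0
    for word in text.lower().split():
        p_spam = (spam_counts[word] + 1) / (spam_total + 2)
        p_ham = (ham_counts[word] + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("buy rolex now"))     # positive score: looks like spam
print(spam_score("meeting tomorrow"))  # negative score: looks legitimate
```

A message whose score crosses some threshold goes to the spam folder, and the filter improves as users mark more mail as spam or not-spam.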

In both of these situations, there was no argument that the technical and normative measures against spam violated the free speech rights of spammers. Instead, internet users recognized that the value of their platforms was rooted in their ability to foster communication, and spam was a serious threat. It’s also worth noting that these problems were solved not through government regulation, but through collective action.

But today, we face a different set of problems related to free speech online. On one hand, we have aggressive harassment, often organized by particular online communities. On the other, we have platforms that are providing spaces for people with unarguably deplorable values (like neo-Nazis) to congregate. And this is particularly true for sites like Twitter and Reddit, which prioritize freedom of expression over virtually all other values.

Free Speech and a Free Internet

In 1997, the Supreme Court ruled in the landmark Reno v. ACLU case that internet speech deserved the same free speech protections as spoken or written speech. Justice John Paul Stevens wrote in the majority opinion that the internet’s capacity to allow individuals to reach potentially mass audiences made it perhaps even more valuable than its broadcast equivalents:

Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.

The implication is that it was even more important to protect free speech online than offline because of the internet’s wide accessibility. While few people could publish in the New York Times or air their views on 60 Minutes, almost anyone could post their ideas online and make them immediately accessible to millions. Stevens, and many technologists, imagined that the internet would be a powerful check on entrenched interests, especially given the deregulation and consolidation of corporate media begun by Reagan and solidified by Clinton.

Such ideals meshed perfectly with the hacker ethic. Rather than corporations or governments having proprietary access to ideas and information, the internet would break down such barriers. These are the ideals behind Wikipedia (“the free encyclopedia that anyone can edit”) and WikiLeaks (“we open governments”). Protecting internet speech became a primary value of technology communities. Organizations like the ACLU and the EFF dedicated themselves to fighting any encroachment on internet free speech, from over-zealous copyright claims to the jailing of political bloggers.

This was furthered by CDA 230: the so-called “safe harbor” provision of the Communications Decency Act. CDA 230 holds that “online intermediaries” — originally ISPs, but now including social media platforms — aren’t responsible for the content that their users produce. If I write something libelous about you on Facebook, you can’t sue Facebook for it. If someone writes a horrible comment on a blog I write, that’s not my problem. Basically, CDA 230 enabled user-contributed content (aka social media) to exist. YouTube doesn’t have to review a zillion hours of content before it’s posted; it doesn’t have to censor unpleasant opinions. As a result, CDA 230 is beloved by the tech community and free-speech advocates. The EFF calls it “one of the most valuable tools for protecting freedom of expression and innovation on the Internet.”

Now, free speech and progressive ideas have always co-existed uneasily. The ACLU has been attacked from both the left and the right for defending the American Nazi party’s right to march in Skokie, Illinois. In Margaret Atwood’s The Handmaid’s Tale, it’s an unholy alliance between anti-pornography feminists and anti-pornography fundamentalist Christians that leads to the creation of an explicitly patriarchal state. But today, for both liberals and libertarians, the solution to bad speech is more speech. Rather than banning, for instance, racist speech, most First Amendment advocates believe that we should expose its inaccuracy and inconsistencies and combat it through education. (Lawyers call this “the counterspeech doctrine.”)

Sophisticated interpretation of the Counterspeech Doctrine. | CC BY-NC-ND 2.0-licensed photo by mpancha.

For the most part, this makes sense. Usually, when the government does attempt to regulate internet speech, we end up with poorly conceived legislation. The EFF found that across the Middle East, laws that attempt to shut down terrorist recruiting usually end up being strategically applied to commentary and expression that doesn’t favor the government. And in the US, the Digital Millennium Copyright Act’s takedown system routinely suppresses even lawful uses of copyrighted content. A young activist could post an intricate, creative political video on YouTube — typically the type of speech that’s most highly protected — and it would be automatically taken down if it used a copyrighted song, fair use notwithstanding. Few of us want people who refer to the internet as a “series of tubes” or “the cyber” making decisions about how the rest of us should use it.

If Not the Government, Then Who?

The problem is that many tech entrepreneurs are still guided by utopian views of the early internet and create products that presume that people are good actors, ignoring considerable evidence to the contrary. The strong antipathy to government regulation and the legal precedent set up by CDA 230 mean that tech companies rely on self-regulation, and when this fails, they are often left scrambling.

Image from an ad-filled content farm called “Quotesgram”; click at your own risk.

Let’s take Reddit. Originally a community for geeks to upvote geeky things, Reddit has since seen its reputation tarnished by communities devoted to the alt-right, men’s rights advocacy, and illicit photos of underage girls wearing yoga pants. In 2014, Reddit was heavily criticized for hosting the Fappening, a subreddit devoted to organizing and discussing stolen nude photos of female celebrities. Then-CEO Yishan Wong wrote a blog post called “Every Man is Responsible for His Own Soul” defending Reddit’s choice to continue hosting the subreddit. Wong claimed that Reddit would not use technical means, like banning users or deleting subreddits, to shut down unpleasant content. Instead, they planned to highlight good actors on the site, like Reddit’s popular Secret Santa. (Confusingly, later that day Reddit deleted the subreddit anyway. Pressure and DMCA requests from deep-pocketed celebrity lawyers were apparently enough to outweigh such lofty ideals.) He wrote:

The reason is because we consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community. The role and responsibility of a government differs from that of a private corporation, in that it exercises restraint in the usage of its powers.

Well, that’s all well and good, but Reddit is not a government. It is a corporation. In the US, the right to free speech applies only when the government attempts to limit what people say, not when private citizens critique media or when websites limit what words people can use in comments. For instance, if Congress passed a bill banning negative comments on Reddit about the President, that would be a legitimate threat to free speech and would be unconstitutional.

But if Twitter decides to ban neo-Nazis or terrorist propaganda, that’s perfectly within its rights. Sites like Facebook and Instagram aggressively moderate content, which cuts down on the kind of organized brigading that happens on Twitter and Reddit. But of course, free speech isn’t just about what’s legal; it’s about upholding values that are expressed in many places in society.

Companies like Twitter and Reddit that have stayed true to hacker ideals of information as free, and of the internet as a haven of free speech, continue to struggle with this balance. ISIS has been extremely effective at using digital media to spread propaganda. The same tools that let people collaborate on awesome projects like Wikipedia also let them collaborate on crazy theories like Pizzagate. Given Reddit’s upvote/downvote infrastructure, it’s hardly surprising that a community devoted to naked pictures of hot famous chicks became the fastest-growing subreddit of all time, regardless of how those pictures were obtained. And Twitter’s feature set is fantastic for people to organize and mobilize quickly, even when those people are virulent anti-Semites.

The internet was explicitly founded on idealism. Even though most people are good actors, there are several very vocal minorities who want to use the internet for various Bad Things. Now, this wouldn’t really matter if it was just a matter of counterspeech. If we could end sexism just by pointing out its flaws, then we’d all be in debt to Gender Studies majors. But the type of organized brigading that contemporary social media affords has the intended consequence of deterring other people’s speech — specifically, the speech of women, especially queer women and women of color. And it gives rise to organized movements that want to diminish community trust and belief in institutions. This has very real and very negative consequences.

What do we do when tools founded on openness and freedom are used by straight-up bad actors? And what do platforms committed to those ideals do when their technologies are used to harass and suppress others?

Who Brought the Alt-Right Into This?

When technologists defend free speech above all other values, they play directly into the hands of white nationalists.

The rise of the alt-right (a fusion of white nationalists, Russian trolls, meme enthusiasts, men’s rights activists, #gamergaters, libertarians, conspiracy theorists, bored teenagers, and hardcore right-wing activists) has been well documented by others. Suffice it to say that the alt-right has been extraordinarily effective at using digital technologies, from Reddit to 8chan to Twitter to Google Docs, to collaborate, mobilize, and organize. They’ve also been very effective at co-opting the language of left-wing activism to paint themselves as victims. And they’ve done this by claiming the value of free speech.

Photo by Sean Barger @waitingfortheman on Instagram

To Milo Yiannopoulos and his army of Breitbart commentators, safe spaces, inclusive language, and “political correctness” are not attempts to right wrongs, but incursions on free speech. Sexism and racism are lies that feminists, “social justice warriors,” and others have come up with in order to suppress the truth (or insert your favorite conspiracy theory here). If feminist criticism of sexist imagery in video games functions as censorship, then people who enjoy such games can position themselves as the victims of Big Brother. Not only that, but it allows them to portray feminists as weaklings who can’t handle the harsh realities of everyday life and need to be coddled and handled carefully — which diminishes very real concerns.

They’ve already been extremely successful at positioning college campuses as the worst violators of free speech. Both Yiannopoulos and Richard Spencer have garnered great publicity by booking talks at college campuses and then delighting in the uproar that typically follows. Campus anti-hate-speech policies have long been targets of the right; add to that anti-bullying and anti-harassment campaigns and you have an environment where Nina Burleigh writes in Newsweek, hardly a bastion of right-wing thought, that “American college campuses are starting to resemble George Orwell’s Oceania with its Thought Police, or East Germany under the Stasi.” (As someone who works in higher ed, I can say this could not be further from the truth.) The idea that college campuses regularly censor and violate the free speech rights of people who aren’t politically correct has become a mainstay of think pieces and Twitter, to my dismay. It’s also given rise to the Professor Watchlist, a directory of “college professors who discriminate against conservative students and advance leftist propaganda in the classroom.” (Leftist propaganda, in this case, indicates any anti-capitalist tendencies or acknowledgement of white privilege.)

Truthy Meme by the Federalist Papers

So when tech companies like Reddit and Twitter, who have always been strong supporters of internet free speech, begin carefully moderating content, the alt-right sees it as full-scale censorship. Ironically, they co-opt the language of the left to portray their critics as aggressive SJWs, and themselves as powerless victims. Content moderation by private technology companies is not a First Amendment violation; in most cases, it’s just a matter of enforcing pre-existing Terms of Service. But this victim/bully dichotomy allows them to garner sympathy from many who truly believe that the internet should be a stronghold of free speech.

We need to move beyond this simplistic binary of free speech versus censorship online. That is just as true for libertarian-leaning technologists as it is for neo-Nazi provocateurs. Sometimes the best way to ensure diverse voices is to make it safe for people who’d otherwise feel afraid to speak up. In his studies of Wikipedia, Northeastern Communication professor Joseph Reagle found that the classic liberal values of the internet — openness, transparency, and freedom — prioritize the voices of combative or openly biased community members over the comfort of female members, leading to male domination even in high-minded online communities. Aggressive online speech, whether practiced in the profanity- and pornography-laced environment of 4chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

Thanks to harryh for editing and Lindsay Blackwell & Whitney Phillips for inspiring some of the thoughts behind this piece.

Alice E. Marwick is former Director of the McGannon Communication Research Center and Assistant Professor of Communication and Media Studies at Fordham University. She is a 2016–2017 fellow at Data & Society.

Points/spheres: In “Are There Limits to Online Free Speech?” Alice Marwick argues against simplistic binaries pitting free speech against censorship, looking at how the tech industry’s historic commitment to freedom of speech falls short in the face of organized harassment. This piece is part of a batch of new additions to an ongoing Points series on media, accountability, and the public sphere.
