8chan: Between Freedom and Security

Cloudflare cuts services for Nazi recruiting-ground 8chan

Jackson Oliver Webster
Wonk Bridge
5 min read · Aug 8, 2019


On 3 August, a young white man carrying a Romanian-built semi-automatic rifle entered a Walmart in El Paso, Texas, killing 22 people, most of them of Hispanic descent. Before the rampage, the shooter had posted a lengthy “manifesto” on the site 8chan.

Mourners in El Paso / Mark Ralston for Agence France-Presse

The site had also been used to post a “manifesto” by the white-supremacist terrorist responsible for the Christchurch mosque shootings in New Zealand earlier this year, as well as by the antisemitic terrorist behind the Poway synagogue shooting outside San Diego, California. The 8chan posts of all three shooters drew heavily on tropes from online hate communities, including alt-right dog whistles and paranoid conspiracy theories that have found their way into mainstream politics.

8chan’s homepage advertising its permissive approach to content management

8chan rose to prominence after 4chan, a popular imageboard, started banning users exhibiting violently misogynist attitudes during the GamerGate scandal in 2014. The bans caused users to abandon the forum en masse for the similarly designed but essentially unmoderated 8chan. Unsurprisingly, 8chan quickly became a haven for the most reprehensible and bizarre corners of the Internet, from radical misogynists to neo-Nazis to conspiracy movements like QAnon. This should have been obvious from the start: 8chan’s own homepage slogan branded it the “Darkest Reaches of the Internet”.

Following the El Paso shooting last weekend, anti-DDoS infrastructure provider Cloudflare announced on Monday that it was terminating service for 8chan, as it had done before for the Daily Stormer, a neo-Nazi website popular with members of the Ku Klux Klan. Cloudflare CEO Matthew Prince called 8chan a “cesspool of hate” and “uniquely lawless”, adding that this “lawlessness has contributed to multiple horrific tragedies”.

The manifestos on 8chan and Cloudflare’s decision bring up broader questions of the limits of acceptable online speech. It’s clear that the reach of posts on online forums makes our current free speech debate different from those of the past. In particular, the government is no longer the arbiter of the public square, instead having ceded that responsibility to tech platforms, large and small. While the debates of the twentieth century revolved around protecting individuals’ political speech from government persecution and control, today’s debate centers on private actors, and has the added complications of network effects and the incredible speed of online content.

While free speech is certainly a key tenet of Western democracy, incitement to violence has never been protected under even the most expansive free speech regimes, and many democracies exclude hate speech from protection as well. This is largely beside the point, however, as private companies are not bound by the First Amendment, which states that “Congress shall make no law” restricting the freedom of speech. By refusing to host repugnant, fascist content, a platform is not shutting down speech entirely; Cloudflare is simply refusing to let its tools be used to spread hateful messages.

This does not mean Cloudflare has become, in Prince’s words, “an arbiter of free speech”; rather, it has made an editorial decision, as a private company, to control the use of its own products. The New York Times doesn’t publish every letter to the editor it receives; its editorial team has the right to filter submissions and present readers with the content it deems worthy.

Platforms and providers have every right, and even a responsibility, to shape the nature of the content on their sites to match their goals and values. Newspapers and magazines have always done so through the editorial process, and while platforms have an imperative to be transparent about their approach to content, they are not obliged to act as a free and open public square.

Having the right to say something and being able to broadcast it to millions of people are not the same thing.

Free speech hardliners, particularly those on the extremes of the political spectrum, often argue that “free speech” means the right for anyone to say anything, anywhere, at any time. In their telling, the only options are complete libertarianism or Orwellian restrictions not only on speech but on thought. This argument rests on a basic fallacy when applied to online spaces. Even the strictest interpretation of freedom of speech guarantees the freedom to express a view, not an absolute right to have it amplified on any given platform. Having the right to say something and being able to broadcast it to millions of people are not the same thing.

Completely unmoderated spaces where hate speech is allowed to fester and be broadcast are simply not necessary for the maintenance of truly free expression.

Prominent tech journalist Kara Swisher chastised Cloudflare and other tech giants, taking Prince’s line that “enough is enough” and adding that “enough was enough a very, very long time ago, and being dragged kicking and screaming into taking a stand and admitting that there are real-world implications of the online tools you have built is not brave in any way.”

Swisher’s argument is essentially that consumers and society must demand better from our Big Tech demigods. Merely not breaking the letter of the law is not a high enough bar for the largest, most profitable, and most influential companies the world has ever known. We, as citizens of our new digital reality, should be demanding social media that doesn’t sell our data, search engines that don’t spy on us, payment apps that don’t track us, and smartphones designed to be secure.

We also should be demanding that service providers and platforms hold grotesque communities responsible for the consequences of their actions. Completely unmoderated spaces where hate speech is allowed to fester and be broadcast are simply not necessary for the maintenance of truly free expression.

Reddit — famously lax in its approach to moderation — has beefed up its content policies and started kicking communities off the platform. In the face of a huge PR crisis of its own making, Reddit removed literal, actual Nazis and other race-baiting, discriminatory content. Yay.

Many argue Reddit’s moves were too little, too late, even if they represent a step in the right direction. Others have noted that despite ridding itself of actual Nazis, Reddit is still home to plenty of unsavory content, including the notoriously racist and sexist community r/The_Donald. Reddit’s CEO has all but admitted that this community has not been shut down mainly because its namesake occupies the highest office in the land. How comforting.

Navigating the boundaries of free speech in a world dominated by social media platforms was never going to be a simple task. For all their utopian rhetoric about the power of connecting people, tech platforms are perhaps finally coming to the realization that human nature is complicated, and that there is no silver-bullet algorithmic fix for violence, hatred, racism, and sexism.

Editor’s Note — For the public’s information, this article’s title was changed after initial publication, from “Cesspool of Hate” to its present iteration, in order to better represent the arguments found within.

Jackson Webster is a Paris-based security expert and tech policy wonk. He writes about cybersecurity, politics, and Big Tech. Follow him on Twitter @joliverwebster.
