Where does online hate speech start: at the bottom of society or at the top?
Once again, the debate turns to banning websites that promote hate speech: in 2017 it was the neo-Nazi Daily Stormer, and now it’s 8chan, created by Fredrick Brennan in October 2013 as a “free-speech-friendly” alternative to 4chan, and now under scrutiny for its central role in the radicalization of the perpetrators of recent mass shootings in the United States and elsewhere, and in the publication of material related to them. Brennan, who disassociated himself from the site in December 2018, called for it to be shut down after the March 2019 Christchurch massacre, whose perpetrator announced the attack there and whose video of the shooting circulated on the board, saying that “a lot of these sites cause more misery than anything else”.
After it emerged that the perpetrator of the weekend’s massacre in El Paso, in which 20 people died, had posted his intentions on 8chan before the attack, the site became inaccessible: Cloudflare, which provided it with DNS services and DDoS protection, as it does for a number of other controversial sites, decided to discontinue its services to the page. The site had long been excluded from Google’s index after it was found to have published child pornography, and although it retains its domain because its registrar, Tucows, will not withdraw it until it receives a judicial request to do so, everything indicates that it will soon disappear for all practical purposes.
The question is the same as in 2017: should these kinds of internet sites be banned? There are a number of considerations to take into account. The first is practical: what purpose would a ban serve? The Daily Stormer experience suggests not much: the page remains active under a different domain name and now brags about being “the most censored publication in history”, increasing its appeal to its brainless readers. The site is no longer Cloudflare’s problem, but it still exists, and its existence remains a problem for the internet and for everyone. The internet is vast, with plenty of recesses and places to hide thanks to domain providers of all kinds and, of course, the dark web. A group of like-minded people who wish to discuss a particular topic have virtually unlimited resources for doing so, and pursuing them tends only to marginalize and further radicalize them.
Needless to say, something needs to be done: there are limits to what we can say, particularly in the case of conspiracy to murder or to commit acts of violence. As Matthew Prince, CEO of Cloudflare, commented yesterday, after reversing his initial decision to keep the page online on the grounds of freedom of expression: “we’ve solved our own problem, but we haven’t solved the Internet’s.”
Andrew Torba, CEO of Gab, another neo-Nazi and white supremacist site with more than a million users, says “the problem is not 8chan, but young Americans,” adding: “If 8chan disappears, someone else will set up a new image board, let’s call it 20chan or whatever, and people will flock there.”
Writing on Cloudflare’s corporate blog, Prince states that “8chan has repeatedly proven itself to be a cesspool of hate,” adding: “they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths (…) While removing 8chan from our network takes heat off of us, we need to have a broader conversation about addressing the root causes of hate online.”
In short, the problem is not the web; it’s people. Services such as Google, Cloudflare or other domain providers can close down a site if they believe it is appropriate or that it harms their image, and the fact that they can do so is in itself a cause for concern when it is done arbitrarily rather than on the basis of a judicial decision. Eliminating the pages where people of a certain ideology meet is dangerous: far from eliminating the activity as such, it simply hides it and subjects it to further radicalization. We may find certain opinions deeply offensive, but it is important to understand that the medium used to express them is not responsible; people are, and people do not disappear because this or that website is denied access to the internet. At the same time, giving racist websites public visibility provides a sounding board that can, over time, attract a wider audience.

There are no easy answers, and addressing this problem means addressing the causes, not just the symptoms. Mass shootings and hate crimes are not an internet problem, nor something the internet encourages. Instead, one of the deeper causes is allowing and encouraging a certain climate of opinion, which in turn attracts followers who feel they can contribute to that cause and that they are supported by public figures. That is the problem, not websites. We can, and possibly should, adopt a zero-tolerance policy toward this kind of online content, but as long as we continue a cat-and-mouse game that makes these hatemongers feel superior every time their activity reappears under a new domain, a new site or a new service, we are merely avoiding tackling the problem at its root.
The paradox of tolerance tells us not to tolerate the intolerant. But what happens when that intolerance and hate speech are encouraged by a nation’s president?
(In Spanish, here)