Cloudflare drops 8chan. How can we make these decisions more legitimate?
This week, Cloudflare announced that it would stop providing its services to 8chan. The decision came after news that the El Paso terrorist was inspired and supported by the site’s hateful culture — as was the shooter in the Christchurch massacre earlier this year.

Cloudflare is an infrastructure provider: it helps speed up websites and protects them from malicious attacks. This decision is notable because infrastructure providers have not historically been publicly visible in policing hate speech. Cloudflare made headlines in 2017 when it and several other companies revoked their services for The Daily Stormer, a neo-Nazi forum that celebrated the murder of Heather Heyer in Charlottesville.
Writing on Cloudflare’s blog, CEO Matthew Prince explains the decision in depth, noting that both 8chan and the Daily Stormer were “lawless platforms” that were designed to be unmoderated and had been shown to cause real harm.
“8chan has repeatedly proven itself to be a cesspool of hate.” — Matthew Prince
The decision to drop 8chan is an incredibly complex one. There is good reason to be concerned about censorship at the infrastructure level — it’s always a blunt approach, and there are real risks to freedom of expression. But Cloudflare’s decision recognizes that in some circumstances, only infrastructure providers are in a position to effectively tackle websites that deliberately support hateful cultures.
Cloudflare’s decision to withdraw its services doesn’t fully shut down 8chan, but it does make the forum’s continued existence much more difficult. Perhaps more importantly, as Tarleton Gillespie points out, it sends a clear message about the types of speech that society will not tolerate. It’s a signal that mainstream companies are no longer willing to provide support for sites that foster hateful ideologies and work to radicalize people into extremist violence.
Cloudflare isn’t a neutral provider — its decisions to provide or refuse support for websites always have a political impact. Content delivery networks, hosts, social media platforms, search engines, and other internet and telecommunications companies have always had the power to influence who is able to reach large audiences online. Now, looking back at how toxic cultures have festered and grown on both mainstream and fringe parts of the internet, we are finally having a conversation about how these companies should exercise that power.
The ‘rule of law’: making good decisions about content
Cloudflare’s CEO, Matthew Prince, is clearly deeply uncomfortable with the power he has to decide which sites get the benefits of Cloudflare’s massive content delivery network. Prince is thinking hard about how to make these types of decisions in a way that is legitimate — which means in accordance with the principles of the ‘rule of law’.
The rule of law is a set of ideals that has long been used to determine whether rules are fairly made and enforced. It’s the opposite of arbitrary discretion. It requires that the rules are clear and well-known, that they represent some justifiable vision of the common good, and that decisions are made in a fair and accountable way.
I’ve written a lot about how rule of law values can usefully be translated to the power that internet companies wield when they decide what content and whose voices to amplify or suppress online. These are values of good governance, and they can help us hold internet intermediaries to account for the way they govern our information environments and online social spaces.
Prince explains that Cloudflare works hard to be guided by rule of law principles when it makes decisions like this one. Over the last two years, he’s done a lot to start an explicit discussion about when and how different types of companies should step in to moderate hateful speech. But there’s still a long way to go. In an article in The Atlantic, Evelyn Douek explains that in the two years since Cloudflare decided to stop providing services for the Daily Stormer, the tone of the debate has changed, but Cloudflare and other providers still don’t have a useful framework for making decisions to withdraw their support for hateful sites.
Both Douek and Prince worry about how decisions like this ought to be made in the future.
“decisions like the one to stop hosting 8chan may look like efforts to tame the wild west of the internet, but without any safeguards in place they remain lawless” — Evelyn Douek
So what next? How can internet companies make decisions about the content they host in a way that is more legitimate? I think there are two main answers:
Procedurally — through rule of law principles: First, the rules have to be clear, and they have to be fairly and transparently enforced, with adequate opportunities for due process. These are the minimum safeguards that form the basis of civil society declarations like the Manila Principles and the Santa Clara Principles on Transparency and Accountability in Content Moderation.
Substantively — through human rights principles: Second, the rules have to be justifiable. For too long, the tech industry has talked about its rules in terms of ‘neutrality’; now that people clearly expect tech companies to do more to police speech, we still don’t have the language to articulate the difficult trade-offs that need to be worked through in setting workable policies. Human rights principles provide this language.
As David Kaye points out, it’s increasingly accepted that companies have responsibilities under human rights law, and human rights law has both the language and the institutional support to help companies develop and articulate policies that are carefully tailored to achieve their desired outcomes while minimizing harmful side-effects.
Companies like Cloudflare are going to be increasingly asked to intervene in the fight against hate, radicalization, and extremism. These issues are not going away. Some of this might be dealt with by new laws in some countries, but private companies will still have responsibilities to determine the standards they support on their own networks. The international human rights framework doesn’t dictate a specific answer to any of these complex questions, but the language it provides is vital for working through the issues that are actually at stake.
The risk for internet companies is that they lose the trust of their users and of the public when they make difficult decisions. The risk for all of us is that important decisions about what we can see and say online are made by companies that are not accountable, in ways we cannot fully know or challenge. For us to trust internet companies making these decisions, we need a much more advanced public discussion about what the rules should be, as well as real guarantees of due process. Real engagement with both rule of law principles and international human rights offers the best tools we have to work through the tough questions about the responsibility of tech companies for addressing harmful speech and cultures online.
[My new book, Lawless: The Secret Rules That Govern Our Digital Lives, is now out with Cambridge University Press. In it, I argue that tech companies should develop a new digital constitutionalism — real limits on how they wield their power over our shared social spaces. You can buy it on Amazon, or get the free full draft in PDF form here.]