Beyond Charlottesville: Cloudflare’s Inconvenient Truth

I am continually inspired by the resilience and helpfulness of my fellow Internet operators. In times of need, be it a security incident, a natural disaster, or a humanitarian crisis, diverse organizations come together to help. This is all done without the presence of a formal governing body, rules and regulations, or any expectation of compensation beyond the Golden Rule. As tragedy struck in Charlottesville, the response was no different: within hours, industry leaders such as GoDaddy and Google mobilized to disable service to the Daily Stormer, a web site known for promoting hate speech and harassment, and most recently for shaming the attack's victim in the most grotesque manner possible.

Origins and Carriage

Most professional hosting and CDN providers have staff dedicated to handling complaints about abusive customers, including terminating service in extreme cases. How, then, did the Daily Stormer remain online for so long? One need look no further than Cloudflare, the San Francisco-based firm providing its public-facing infrastructure. Cloudflare also boasts a client relationship with Stormfront, a message board with the slogan “White Pride World Wide”, as well as numerous other hate organizations listed here. Cloudflare eventually succumbed to pressure and terminated the Daily Stormer, and for that it should be commended, though not without a bizarre and entirely self-serving blog post from its chief executive attempting to morph the tragedy into a PR campaign for his company.

In their own words, Cloudflare’s service is that of a “next generation Content Delivery Network”. Cloudflare does not house a web site’s original source material, forums, or associated compute resources; these “origin” sites commonly live at public cloud providers like Google, Amazon, Microsoft, or Packet. Rather, Cloudflare is a web site’s public face, providing security and caching services to ensure the site loads quickly from around the globe.

How the Cloudflare service works
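The caching half of that arrangement can be sketched in a few lines of Python. This is an illustrative toy, not Cloudflare’s actual implementation; `fetch_origin` is a hypothetical stand-in for the HTTP request an edge node makes back to the customer’s origin site on a cache miss.

```python
import time

class EdgeCache:
    """Toy model of a CDN edge node's in-memory cache, keyed by URL."""

    def __init__(self, fetch_origin, ttl=60):
        self.fetch_origin = fetch_origin  # hypothetical callable: url -> body
        self.ttl = ttl                    # seconds a cached copy stays fresh
        self.store = {}                   # url -> (body, expiry timestamp)

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None and entry[1] > time.time():
            return entry[0]               # cache hit: served from the edge
        body = self.fetch_origin(url)     # cache miss: go back to the origin
        self.store[url] = (body, time.time() + self.ttl)
        return body
```

Repeated requests within the TTL never touch the origin, which is why a proxying CDN both speeds up a site and shields the origin’s location and capacity from the public.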

According to Cloudflare’s leadership, the company is not responsible for policing the activities of its clients, owing to its status as a “common carrier”. The premise of common carriage is simple, with origins in United States telecommunications law dating back to the Communications Act of 1934: just as a shipping company cannot reasonably guarantee that every package it transports is free of contraband, neither can a telecommunications provider guarantee that all of its customers’ telephone conversations are lawful. The claim is itself questionable, as Cloudflare is not a regulated monopoly providing phone service. Nonetheless, Cloudflare states that, to the letter of the law, it will only remove web content when ordered to do so by a court or investigative process. And to its point, one might argue that sites like the Daily Stormer do not explicitly instruct readers to violate the law, though the language and content displayed are provocative, especially in front of individuals who might be goaded into illegal acts.

Acceptable Use Policy as a Tool

A far cry from the censorship claimed by Cloudflare’s chief executive, responsible players in the cloud hosting and content delivery business employ an instrument called an Acceptable Use Policy (AUP), which extends beyond basic legal obligations to promote corporate responsibility and Internet security. As an example, Amazon prohibits “activities that are illegal, that violate the rights of others, or that may be harmful to others, our operations or reputation”. Rackspace goes further, calling out hate speech as that which is “excessively violent, incites violence, threatens violence, or contains harassing content”. Google, on its cloud and YouTube services, prohibits “speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status, and sexual orientation/gender identity”. Nor are these policies limited to the larger industry veterans; BelugaCDN, an up-and-coming CDN founded around the same time as Cloudflare, also requires that its customers not “use the Service to host content which is intended primarily to be offensive in nature”. Suffice it to say, these sites would not be welcome customers of any of the aforementioned industry leaders, nor of countless smaller providers offering similar services.

Why are similar provisions noticeably absent from Cloudflare’s Terms of Use? Put simply, it’s the old cliché that any press is good press. Its CEO, “recovering lawyer and former ski instructor” Matthew Prince, thrives on appearing on television and in print publications defending the First Amendment rights of some of his more deplorable customers. Meanwhile, general counsel Doug Kramer, hailing from a background in constitutional law, has spoken with ProPublica justifying the company’s practices with similar logic. Both Prince and Kramer have backgrounds in legal machinations, not Internet operations, and have remained intentionally deaf to recommendations offered by those with the relevant subject-matter expertise, including those inside their own company. While the firm employs experienced network operators and security professionals well versed in industry best practices, these individuals have not been able to break through to their leadership.

Denial of Service and Double-Edged Swords

Regardless of where one stands on potentially hateful political speech versus free expression, it is important to consider another unfortunate side effect of Cloudflare’s loose AUP enforcement: its role in Internet Denial of Service (DoS) attacks. A core feature of Cloudflare’s service is protecting web sites against hacks and DoS attacks; though its basic service is free, many clients opt to purchase plans offering higher levels of protection against these attacks. As such, a classic conflict is born: it is in Cloudflare’s financial best interest to do as little as possible to rid the Internet of these threats, so that clients keep purchasing its “protection” services. Indeed, Cloudflare provides its services to a large number of criminal “booter”/“stresser” services, where anyone with a credit card or cryptocurrency can order crippling large-scale attacks against Internet sites with just a few clicks. Possible criminality aside, this practice is very clearly at odds with industry best practices, and casts doubt on the ethics of the company’s upper management.

Once upon a time, I naively attempted to report these sites to Cloudflare, and was referred to Justin Paine, its head of “trust and safety”, who uses his Twitter account to heckle complainants with legitimate service or abuse issues, mockingly suggesting that they submit a ticket (only for it to be summarily ignored). After a prolonged and somewhat circular conversation, it became clear that this was not merely a case of incompetence but rather formal corporate policy: Cloudflare will simply not budge and do what’s responsible regarding these sites. I feel bad even mentioning DDoS in the same light as a national tragedy involving loss of life; however, it bears noting that the company’s arrogance and associated poor stewardship run deep.

Cloudflare has also claimed it is limited in its ability to take down sites due to active law enforcement investigations. Anyone who has actually been party to these kinds of investigations will tell you this claim doesn’t pass the “sniff test”, for various reasons best not detailed here, among them that the offending customer sites remain up indefinitely, long past any specific investigation. The claim is even contradicted in Prince’s most recent blog post, in which he states that Cloudflare has “never provided any law enforcement organization a feed of [Cloudflare’s] customers’ content transiting [Cloudflare’s] network”, a vital step in any such investigation.

Ironically, though, Cloudflare is not afraid to call on the Internet community to provide filtering at the backbone level when doing so supports its own business operations. During a large-scale Denial of Service attack in 2013, Cloudflare famously called upon the London Internet Exchange (LINX) to filter traffic traversing its infrastructure, a notable neutrality violation which was, at the time, without precedent.

Next Steps

Where do we go from here? An important first step would be Cloudflare developing an AUP with actual teeth, and using it to cut off access for bad actors. While doing this would not immediately and permanently eradicate bad actors from the Internet, it would trigger a “fast flux” (rapid migration) of these harmful sites onto new providers. Eventually, those providers would remove them for violating their terms, and the sites would have nowhere left to go, not dissimilar to the spam problem of the late ’90s and early ’00s, now but a distant memory.

Perhaps pressure can be brought to bear by Cloudflare’s major accounts. Who at MIT, Zayo, Fitbit, OkCupid, DigitalOcean, Medium.com, and the like wants to explain to their boards of directors or trustees, employees, and students why they are doing business with a company that consciously decides to serve them on the same infrastructure as hate groups, white supremacists, and online criminals?

Likewise, as an industry, we need to make it clear that Cloudflare’s behavior is aberrant, not the norm. Until the company cleans up its own house, its presentations on so-called “best practices” at technical conferences are thick with irony. Counter to recent media reports, citing common carriage is not an acceptable catch-all, as most industry players subscribe to a higher set of professional standards.

As I’m still at a loss for words over the display of violence, hatred, and inhumanity of the past few weeks, and because it’s said that complex concepts are best explained with cartoons and Internet memes, I think the xkcd cartoon below best explains the situation at hand. Until we see some positive changes in these areas, Cloudflare’s senior leadership will continue to have blood on their hands.

xkcd #1357