Taking down terrorism online while preserving free expression

Courtney Radsch
May 15, 2019


Terrorism has gone viral, but world leaders are struggling with how to address it without abrogating rights.

Terrorism has gone viral. The Christchurch massacre at two New Zealand mosques that left more than 50 people dead was livestreamed on Facebook. In 2015, two reporters in Virginia were shot while broadcasting live, and the shooter uploaded videos to social media channels that are still available today. And in 2014, the beheading of James Foley by ISIS forces in Syria spread across the internet within minutes of being uploaded. The world has yet to figure out how to stanch the flow of this content online.

Now the leaders of France and New Zealand have a proposal. They are meeting this week in Paris to seek voluntary pledges from other world leaders and major tech platforms aimed at eliminating terrorist and violent extremist content online.

A leaked draft of the Christchurch Call, as it has been dubbed, includes assurances about maintaining a free and open internet and respecting human rights. But it calls for increased censorship, expanded government regulation, and more intensive coordination across platforms. The pledge fails to explain how the goals of preventing and removing violent extremist content can be reconciled with the need to avoid empowering government censors or privatizing censorship.

The focus on content moderation seems like a losing tactic in the never-ending game of whack-a-mole that defines efforts to prevent the circulation of violent content.

The question of what to do about terrorist content and graphic violence online is not new, but it has taken on new urgency in the aftermath of the Christchurch massacre because the attack was designed to go viral and leverage the algorithmic power of social networks to spread far and wide, making its removal nearly impossible. Facebook, YouTube, and Twitter all tried to scrub their services of the video and its progeny, but it still takes only a minute or so of diligent searching to find the video and the killer's manifesto. It is also noteworthy that the video of the Christchurch massacre was livestreamed on Facebook for 17 minutes, during which time not a single person flagged it as problematic content.

The Christchurch Call includes commitments by online service providers to prevent the upload and dissemination of violent extremist content and to ensure its permanent removal. But the sweeping focus on online service providers risks pushing censorship into the infrastructure layer, commonly thought of as the layer that makes the internet work. This could include domain name service providers, internet service providers, and cybersecurity providers. When Cloudflare, a company working at the infrastructure level to provide DDoS protection to about 10 percent of the world's websites, decided to remove the far-right Daily Stormer website from its platform, it provoked a controversy over such a blunt approach to removing extremist content.

The draft of the pledge also calls for broadcasting standards to stave off the amplification of extremist content and encourages media outlets to apply ethical rules when reporting on terrorism. One of the basic roles of the media is to provide information and coverage of events or proclamations depicted in content disseminated by terrorist or extremist groups because they are newsworthy. Yet the unintended consequences of anti-terrorism efforts leading to censorship of news media have been seen around the world, from Australia to Syria.

Last year, an Australian regulator deemed an article about ISIS recruiting published on one of the country’s top news sites to be promoting terrorism, forcing the outlet, news.com.au, to remove the offending article, even though the self-regulatory press council had determined it was in the public interest.

When the self-proclaimed Islamic State established its capital in Raqqa, Syria, online search results for Raqqa were overwhelmed by ISIS propaganda, especially in English, the founder of the citizen journalism organization Raqqa is Being Slaughtered Silently told me. Despite the great peril RBSS reporters put themselves in to bring the world news from the ISIS stronghold, and despite the group's explicit efforts to counter information produced by the terrorists, RBSS had its accounts shuttered and content removed from various platforms because it was incorrectly identified as violent extremist content.

Furthermore, the calls for scrubbing violent extremist content from the internet risk legitimizing efforts by countries without democratic safeguards or due process to restrict independent or critical reporting by opposition or minority groups or on terrorism itself. Cameroonian journalist Ahmed Abba spent more than two years in jail for reporting on Boko Haram, and nearly all of the journalists jailed in Egypt worked for Muslim Brotherhood outlets. The rush to eradicate terrorist and violent extremist content from the internet and impose broadcasting standards and media ethics carries a high risk of being misused by governments around the world.

It's not easy to stand up for press freedom when extremists use our media to disseminate their propaganda and exploit the power of online platforms to take their evil viral. The recognition by the countries seeking to regulate away this problem that they and the platforms must respect human rights and civil liberties is important and welcome. But perhaps it is time to deal with a difficult truth. Preventing the dissemination of terrorist content online while respecting human rights and free expression is hard but essential work. The Christchurch Call is an understandable response to tragic events. But it is more likely to strengthen the hand of the censors than mute the voices of the terrorists.

A version of this article was originally published as an op-ed in the New Zealand Herald.
