The EU Terrorist Content Regulation: A Rights Sell-Out

Gabrielle Guillemin
Apr 6, 2020


The EU Terrorist Content Regulation is perhaps one of the most extreme pieces of legislation you have never heard of. Courtesy of the coronavirus pandemic, one of the last trilogue negotiations on the proposed Regulation between the European Commission, the European Parliament and the Council of the EU, scheduled for March, has been postponed. But make no mistake: the end of the negotiations is fast approaching.

At stake is our ability to talk about controversial topics such as the war in Syria or to report live on terrorist attacks in Turkey. If the European Commission and the Council get their way, Big Tech could soon be required to filter everything we say, including privately, in the search for ‘terrorist’ material and prevent it from being uploaded. Jokes about the Gilets Jaunes or criticism of the actions of armed forces in conflict zones could get caught in the filters’ net. Tech companies would also have to remove ‘terrorist’ content within one hour of the police saying so. Removal orders could apply across the EU, and if tech companies fail to comply, they could face fines running into millions.

The fight against terrorism online is undoubtedly an important one. The trouble is that while it may sound entirely reasonable to remove ‘terrorist’ material, there is little agreement on the definition of ‘terrorism’ itself. Only recently, the global environmental protest movement Extinction Rebellion was included on a counter-terrorism police list. Worried about radicalisation, governments often seek to ban content that ‘glorifies’ terrorism, whether or not there is any real risk that it will incite violence. Deciding whether something incites or even glorifies terrorism also requires context: a news report and a pro-ISIS film could use the same footage. It is not as straightforward as, for example, determining whether content is child sexual abuse material. But broad laws do not allow for nuanced discussion, news reporting, research or humour. The EU Terrorist Content Regulation is no exception.

Definitions are not the only problem. As ever, politicians continue to deputise censorship to private companies. They also continue to believe that technology is a silver bullet that can detect illegal material while respecting freedom of expression. However, as experts have pointed out time and again, the technology simply isn’t there to perform those magic tricks. Indeed, YouTube reports that of the 108,779 video takedowns appealed between October and December 2019, 23,471 were reinstated. Given that the vast majority of videos are now flagged automatically, this suggests that the error rate remains high and that vast amounts of content are wrongfully removed. In any event, we don’t really know what is happening, since examples of how companies apply their ‘violent extremism’ content policies are few and far between. More importantly, ‘artificial intelligence’ getting it wrong can have dramatic consequences. For example, the human rights organisations WITNESS and Syrian Archive have documented how machine learning algorithms have led to the wrongful deletion of troves of material documenting human rights violations in Syria, undermining the collection of evidence for the prosecution of war crimes. There is also mounting evidence that algorithms are biased and have a discriminatory impact, a particular concern for minority groups, who are most likely to be affected by counter-radicalisation measures.

Tech companies and politicians reassure us that effective remedies will be put in place to challenge wrong decisions after the fact. That will be cold comfort for those whose content is removed in error. Companies’ appeals processes have been very uneven, with individuals waiting months for their complaints to be resolved and sometimes never getting any response at all. It is not even clear how individuals can pursue these complaints before the courts. Will governments step in and put in place remedies that are sufficiently resourced to be effective? Maybe. But as the French watchdog on terrorist content removals keeps lamenting, such resources are unlikely to be forthcoming. The Terrorist Content Regulation will mean far more content being removed at speed so that companies can avoid sanctions. Since this will rely on automated takedowns, effective remedies are the bare minimum we will need. But they alone will not protect our free speech.

What then? Lawmakers must abandon mandatory filters and ‘proactive measures’ that would effectively require widespread surveillance of everyone’s communications in order to detect ‘terrorist’ content. If they do not, they would do well to explain how such measures are compatible with the General Data Protection Regulation, and to ensure that private conversations are out of scope. They must ensure that removal orders are made by courts and that the definition of terrorist content excludes journalistic, educational, research, artistic and other material with a lawful purpose. And remedies must be adequately resourced.

The ramifications of the Terrorist Content Regulation for EU policy in the forthcoming Digital Services Act could be profound: it will set a precedent for what online content regulation looks like. No one is denying that governments should do something about terrorist content, but they must also respect our free speech and data protection rights. In the difficult months ahead, when the importance of a free and open internet is becoming more apparent every day, there is still time to negotiate an alternative that protects those European values.


Gabrielle Guillemin

I am Senior Legal Officer at ARTICLE 19, where I have led its work on digital rights since 2011. I was previously a lawyer at the European Court of Human Rights.