“Understanding the Human Rights Risks Associated with Internet Referral Units” by Jason Pielemeier and Chris Sheehy
Since 2010, a small number of European governments have created formal government structures for flagging alleged terrorist content directly to companies for voluntary removal under those companies’ respective terms and conditions. A review of these Internet Referral Units (IRUs) conducted by the Global Network Initiative, with help from Harvard Law School’s Cyberlaw Clinic, indicates that such efforts pose real risks to freedom of expression and privacy.
Now, a proposed European Union regulation would codify and expand this practice, requiring all member states to establish competent authorities to carry out such referrals. This post reviews the risks that IRUs present and points out how article five of the proposed EU regulation on preventing the dissemination of terrorist content online, as currently drafted, could exacerbate some of them.
Countries around the world are looking at ways to address what they consider to be problematic content online. Many have already established similar, albeit even less transparent, efforts to leverage platforms’ terms and conditions for content removal. What the EU decides to do going forward will have implications for human rights online far beyond Europe.
What Europe’s Experience with IRUs Can Teach Us About the Proposed EU Regulation
The first IRU in Europe, the Counter Terrorism Internet Referral Unit (CTIRU), was established in the UK in 2010, and its influence helps explain Europe’s wider adoption of IRUs. It was followed by the creation of an IRU within Europol (EIRU) in March 2015, the establishment of a mechanism for referrals and legal orders under French law in March 2015, and the development of an IRU in the Netherlands in September 2017. While recent Europol statements indicate that additional IRUs now also exist in Belgium, Germany, and Italy, this post focuses on the first four, for which the most public documentation exists.
GNI has previously expressed concerns with IRUs. In a 2016 policy brief, “Extremist Content and the ICT Sector,” GNI noted concerns regarding (i) the potential for such mechanisms to circumvent legal procedures and called on governments making such requests to be (ii) transparent and accountable for such referrals, as well as to (iii) provide users with access to effective remedy. GNI also stated clearly that (iv) “governments must not compel ICT companies to change how they develop and enforce their [terms and conditions].”
Our review of these four mechanisms reveals some variation across them on these points (see the comparison at the end of this post for further background). Unfortunately, rather than “leveling-up” safeguards and good practice, the provisions outlined in article five of the proposed EU regulation would in several instances result in fewer safeguards and could thus create more risk for users’ rights.
Circumventing legal procedures: While the Dutch and French mechanisms require preliminary determinations of illegality under their respective laws prior to referral (albeit by law enforcement officials, not judges), the UK’s CTIRU allows government personnel to refer content they think may violate a hosting service’s terms and conditions, without determining whether that content violates UK law. The proposed regulation is entirely ambiguous about whether content referred under article five must, or even should, be assessed against domestic or EU law prior to referral, leaving open the possibility that a potentially large number of “competent authorities” within each member state could refer content for removal without making any effort to assess its legality.
Transparency and accountability: All of the IRUs reviewed here have been criticized for a lack of transparency. Even those that have issued transparency reports, such as the French mechanism and the EIRU, tend to include mostly cumulative statistics on referrals or “content removed,” in ways that make verification and accountability a major challenge. While the proposed EU regulation does set out certain “transparency requirements” for hosting services, it is completely silent on what corresponding transparency should be expected of member states. Declining to impose any requirements, or even suggestions, as to what appropriate government transparency around referrals looks like is a missed opportunity, and it opens the proposal up to real questions about its compliance with fundamental principles of democratic accountability.
Access to remedy: There is generally little codification, let alone enforcement, of access to remedy for affected users and content providers in the current IRU frameworks. This leaves companies responsible for providing notice and remedy to users, even in instances where the content was flagged initially for its alleged illegality rather than for noncompliance with companies’ terms and conditions. Here, the French approach stands out: it allows the French Data Protection Authority to review each referral and allows hosting services to request judicial review, perhaps because referrals under that system are clearly based on a determination of illegality under French law and can be followed quickly by legal orders. While the proposed EU regulation requires hosting services to establish “effective and accessible” complaints mechanisms, it is completely silent on if, when, and how member states must, or even should, establish complementary procedures for oversight of and remedy for government referrals.
Compelling companies to change their terms and conditions: To date, these IRUs have operated on a “voluntary” basis, leaving final decisions on content takedowns to hosting services’ discretion. Under the proposed regulation, however, companies would face legal penalties and could be compelled to implement “proactive measures” if they fail to “expeditiously” assess competent authorities’ referrals and/or inform those authorities of their determinations. These requirements imply that companies will have to be able to determine which referrals come from governments, and they suggest that government referrals will have to be given priority over referrals from other sources. For certain platforms, one or both of these requirements could result in significant, government-mandated changes to how their existing terms and conditions are enforced.
European authorities should carefully consider whether the Internet referral model is worth expanding. In addition to the potential impacts on EU citizens, it is important to consider the precedent that such a step could set globally. At a minimum, serious efforts should be made to determine how article five of the proposed EU regulation could be strengthened to enhance transparency, accountability, and remedy.
Jason Pielemeier is the policy director at GNI, and Chris Sheehy is a program and communications officer. Read more about them on the GNI team page.