The Digital Services Act: An Opportunity to Build Human Rights Safeguards into Notice and Action

Published in The GNI Blog (Global Network Initiative)
Aug 17, 2020

By Emma Llansó, Director, Free Expression Project, Center for Democracy and Technology

The “notice-and-action” framework in the E-Commerce Directive (ECD) is the core of Europe’s approach to intermediary liability. Article 14 of the ECD provides a conditional shield from liability for online service providers: they are not liable for illegal third-party content unless they have “actual knowledge” of the content and fail to “act[] expeditiously to remove or to disable access” to it. This approach of notice-and-action (or sometimes “notice-and-takedown,” as in the U.S. Digital Millennium Copyright Act) provides liability protection to intermediaries such as content hosts only if they comply with certain specified procedures. It stands in contrast with the broad, essentially unconditional shield from liability in laws like Section 230 in the U.S. on the one hand, and strict liability regimes on the other.

It’s important to underscore that notice-and-action systems, and other intermediary liability frameworks, are intended to determine an online service provider’s potential liability for certain illegal third-party content. No liability regime can or should assign liability to intermediaries for users’ lawful speech. (Notice-and-action is also only one part of the debate about online content regulation in the Digital Services Act (DSA) discussions, which also raise questions about systemic oversight of responses to illegal content and about platform “terms of service” enforcement; this post does not address those issues.)

The ECD leaves it to Member States to flesh out the details of notice-and-action in national law, which has led to significant variability across the EU. As a 2018 report from the European Commission’s DG CONNECT found, “There is no common notice-and-action procedure, nor is there a common standard for minimum notice requirements.”

Human rights advocates have noted for years that the ECD’s notice-and-action framework fails to include specific safeguards for human rights. Between 2010 and 2012, the Commission conducted consultations on the E-Commerce Directive and notice-and-action, which yielded feedback from organizations such as GNI, EDRi, La Quadrature du Net, NetzPolitik, Bits of Freedom, and CDT, recommending key improvements to the framework. More recently, groups including Article 19 and Access Now have provided detailed recommendations, and a broad, global coalition has put forward the Manila Principles on Intermediary Liability, which describe best practices for a rights-respecting notice-and-action framework. In short, there is a lot of prior art on this topic that policymakers should heed.

Based on that work and other research and experience, here are a few key questions about notice-and-action frameworks, and the safeguards they should contain, that policymakers should consider as deliberations over the DSA proceed:

Who makes the determination of illegality?

Notice-and-action frameworks must specify what amounts to “actual knowledge” that content is illegal — in other words, what kind of notification about illegal content “counts” for exposing the intermediary to liability. Content hosts, for example, often receive notifications about content on their services from a variety of sources, including users, public officials, press reports, and other companies, and these notifications may include allegations that the content is illegal.

But in legal systems that respect human rights and the rule of law, the determination that someone’s speech is unlawful can only come from an independent arbiter such as a court. Intermediaries are not qualified to make such evaluations and should not be required to make determinations of illegality based on the allegations of a user, law enforcement officer, or other non-judicial source. If intermediaries can be penalized for making an incorrect determination of illegality, they will err on the side of over-blocking protected expression.

Notice-and-action regimes should ensure that intermediaries cannot be held liable for specific user content unless they have received an order from a court or independent adjudicator. This safeguard ensures that the legal framework does not create incentives for over-broad removal, while preserving the ability of intermediaries to act voluntarily to remove content in response to notices from non-court actors. It also helps protect the rights of the speaker: people must have access to a court to challenge a determination that their speech violates the law. A notice-and-action system that requires intermediaries to make determinations of illegality can circumvent the judicial system. This risks severing the important link between people and the laws that bind them, and can make it difficult to hold governments accountable for the laws that govern our speech.

What constitutes a valid notice?

A notice-and-action framework should clearly specify the components of a valid notice — that is, a notice that can create actual knowledge on the part of the intermediary. This should include concrete elements such as:

  • the identity of the official issuing the notice,
  • citation of the specific legal violation and the law that authorizes the issuing of the notice,
  • the precise URL of the illegal content, and
  • a description of the allegedly unlawful content (which could include information such as the timestamp in a video or specific sentences in a long post).

These formalistic requirements are more than just extra paperwork. They enable intermediaries to confidently reject improperly formed notices without risking liability. The sheer scale of user-generated content (one recent study estimated that nearly half of the global population uses social media) means that intermediaries are unlikely to be able to carefully evaluate every notice they receive. But clear provisions delineating the components of a valid notice enable service providers to quickly identify which notices are inadequate, and they are an important protection against fraudulent or malicious notices.
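
As a concrete illustration, here is a minimal sketch of notice intake (the field names and validation rules are hypothetical; neither the ECD nor the DSA proposal prescribes a schema):

```python
from dataclasses import dataclass
from typing import Optional
from urllib.parse import urlparse

@dataclass
class Notice:
    """Hypothetical notice record mirroring the elements listed above."""
    issuing_official: Optional[str]  # identity of the official issuing the notice
    legal_citation: Optional[str]    # the specific violation and authorizing law
    content_url: Optional[str]       # precise URL of the allegedly illegal content
    description: Optional[str]       # e.g., a video timestamp or specific sentences

def is_well_formed(notice: Notice) -> bool:
    """Check form only; this makes no judgment about the content's legality."""
    if not (notice.issuing_official and notice.legal_citation and notice.description):
        return False
    parsed = urlparse(notice.content_url or "")
    return bool(parsed.scheme and parsed.netloc and parsed.path)

# A notice that omits the legal citation can be rejected on its face,
# without the provider having to evaluate the underlying allegation.
incomplete = Notice("Clerk, Court X", None,
                    "https://example.com/posts/123", "second paragraph")
assert not is_well_formed(incomplete)
```

The point of the sketch is that a malformed notice can be rejected mechanically, at scale, before any legal analysis is required.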

What consequences are there for invalid notices or notices sent in bad faith?

Any system created to take down online content will be abused by people with malicious aims, such as targeting specific groups or individuals for harassment or silencing ideas and opinions with which they disagree. Any notice-and-action system must therefore provide recourse against systematic abuse and against notices sent in bad faith. This is true even when notice takes the form of a court order, as bad actors can pass off fabricated “court orders” or orders based on illegitimate processes. Intermediaries typically do not have the time or expertise to assess the legitimacy of the process behind every order they receive.

Instead, users need access to counter-notice procedures that enable them to challenge a claim (whether legitimate or made in bad faith) against their speech. This is a core component of due process, and the speaker is often the person with the strongest incentive to challenge an illegitimate order. Users also need to be made aware of the consequences of filing a counter-notice, such as the loss of privacy or anonymity and the increased risk of being held directly liable for their content. The notice-and-action framework should:

  • specify what remedies are available to the individual, and from whom, following a successful counter-notice or appeal;
  • make clear that the intermediary is shielded from liability for content it restores following a successful appeal to a court; and
  • include penalties for notices sent in bad faith, as a deterrent against repeated abuse of the system.

What information is available to users and the public about how the notice-and-action system is operating?

Transparency should be baked into every step of a notice-and-action framework. Users need to understand the nature of the claim made against their speech, and court orders to remove content should be accompanied by gag orders (prohibiting user notification) only in the most exceptional cases.

Other users on the site should also be able to understand how the notice-and-action system is affecting the information they see. Service providers must be able to indicate the reasons for removal of specific content by placing notices at the URL where the content had been located.
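
The web already provides a protocol-level hook for this kind of tombstone notice: HTTP status code 451, “Unavailable For Legal Reasons” (RFC 7725). A minimal sketch, using Python’s standard library and a hypothetical in-memory record of removals:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical store mapping removed paths to a short public explanation.
REMOVED = {
    "/posts/123": "Removed pursuant to a court order (case number redacted).",
}

class TombstoneHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REMOVED:
            body = REMOVED[self.path].encode("utf-8")
            # HTTP 451 (RFC 7725) signals removal for legal reasons; the
            # "blocked-by" link relation identifies the entity implementing
            # the block.
            self.send_response(451)
            self.send_header("Link", '<https://example.com/legal>; rel="blocked-by"')
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8451), TombstoneHandler).serve_forever()
```

Serving the explanation at the original URL, rather than a generic error, is what lets other users see that a removal happened and why.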

Finally, both service providers and governments should publish regular transparency reports that include the number of legal demands for content removal in the reporting period, as well as information about the underlying laws and the nature of the content removed. Currently, though many online services provide regular transparency reporting about government demands for content removal, few if any government agencies provide parallel reporting about the demands they issue. Implementing these transparency measures will enable people to understand how both the law and company practice shape their access to information, and ultimately to hold both government and corporate actors accountable.
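
To make the reporting idea concrete, here is a small sketch of the kind of aggregation such a report implies (the record fields are hypothetical, and a real report would need far more granularity):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RemovalDemand:
    # Hypothetical fields; real reports would need more detail.
    period: str        # reporting period, e.g., "2020-H1"
    legal_basis: str   # the law cited in the demand
    content_type: str  # e.g., "video", "post", "image"

def transparency_report(demands: list[RemovalDemand], period: str) -> dict:
    """Aggregate legal removal demands for one reporting period."""
    in_period = [d for d in demands if d.period == period]
    return {
        "period": period,
        "total_demands": len(in_period),
        "by_legal_basis": dict(Counter(d.legal_basis for d in in_period)),
        "by_content_type": dict(Counter(d.content_type for d in in_period)),
    }

report = transparency_report(
    [RemovalDemand("2020-H1", "Law X §5", "post"),
     RemovalDemand("2020-H1", "Law X §5", "video")],
    "2020-H1",
)
# {'period': '2020-H1', 'total_demands': 2,
#  'by_legal_basis': {'Law X §5': 2}, 'by_content_type': {'post': 1, 'video': 1}}
```

If government agencies published the same counts for the demands they issue, the two sides’ reports could be checked against each other.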

As deliberations over the DSA continue, EU policymakers should maintain a focus on protecting and promoting users’ fundamental rights. Incorporating these important safeguards into Europe’s approach to notice-and-action is an easy place to start.
