Article 13: the Internet Filter — Everything You Need to Know About the Controversial Proposal

Emanuel Karlsten
8 min read · Mar 25, 2019


Photo credit: Marco Verch, CC BY

What is Article 13, the so-called ‘internet filter’? As part of my reporting on the EU’s final vote on the Copyright Directive, I am looking at its two most controversial parts, Article 11 and Article 13.

To summarise, Article 13 states that the person who owns an image, text or video must be paid when it is distributed. If the work is published without the owner’s permission, it’ll be a crime — and the site owner is to be held responsible for it.

The problems are obvious: how is a site with billions of users, like Facebook, or a smaller site with fewer resources, supposed to build a function that differentiates content in this way? Some type of filter is needed.

This is already a problem today, because it is difficult to create smart filters. YouTube has one in place, named ‘Content ID’, which has cost about $100m to develop. Even so, this is how it works today:

Already today, the mere trace of a copyrighted work is enough for YouTube’s filter to count it as an infringement. If it also counts as a legal infringement, it is you, the user, who is liable. With Article 13, the site itself also becomes liable, which means the site will want to be sure it never commits an offence. But how is a filter supposed to determine what is critique, satire or parody of a copyrighted work?

During the final negotiations, attempts have been made to improve the Copyright Directive. Sites are now free from liability for users’ copyright infringement if they:

  1. have attempted to obtain a license for any copyrighted material;
  2. have made ‘best efforts’ to ensure copyrighted material wasn’t published;
  3. have a reporting system in place, so that when something is reported as copyright protected it won’t show up again.

If any of the points are missed, the site is held responsible for the upload. So a platform that chooses to host copyrighted material without even attempting to secure licensing thereby takes on an obligation to ensure that no offence is committed on its site; in practice, that means obtaining a filter. The same applies to point 3: for a site to ensure that a reported work is not re-posted, it needs a filter that recognises the specific work in question.
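Point 3, that a reported work must never ‘show up again’, already implies such a filter. A deliberately naive sketch in Python (the StaydownRegistry class and its fingerprinting scheme are inventions for illustration, not anything taken from the directive) shows both the idea and its weakness: an exact hash blocks byte-identical re-uploads, but a single changed byte evades it.

```python
import hashlib

class StaydownRegistry:
    """Toy 'notice-and-staydown' store: once a work has been
    reported, byte-identical re-uploads are rejected."""

    def __init__(self):
        self._reported_fingerprints = set()

    @staticmethod
    def _fingerprint(data: bytes) -> str:
        # Exact content hash; real filters need perceptual matching.
        return hashlib.sha256(data).hexdigest()

    def report(self, data: bytes) -> None:
        """A rights holder reports this work as copyright protected."""
        self._reported_fingerprints.add(self._fingerprint(data))

    def allow_upload(self, data: bytes) -> bool:
        """Should the platform publish this upload?"""
        return self._fingerprint(data) not in self._reported_fingerprints

registry = StaydownRegistry()
clip = b"frames of a copyrighted video"
print(registry.allow_upload(clip))         # True: nothing reported yet
registry.report(clip)
print(registry.allow_upload(clip))         # False: identical re-upload blocked
print(registry.allow_upload(clip + b"!"))  # True: one extra byte slips through
```

Even this toy version hints at why the obligation is onerous: to recognise the specific work after it has been cropped, recompressed or excerpted, a site needs something closer to YouTube’s Content ID than to a hash table.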

More things to consider:

The copyright holder must have asserted their claim to copyright.

Almost everything we publish on the internet can fall under copyright. All the statuses, pictures and videos we produce ourselves are of course our own, but in Facebook’s terms of use, for example, we give Facebook the right to display what we produce and publish. What Article 13 does is let you prevent others from publishing your work on social media, if you want. If no one has claimed copyright of the work, the site can publish it freely. If, on the other hand, someone has claimed copyright, Facebook must either 1) obtain approval and pay for licensing or 2) ensure that the work is never published again.

The difficulty in this is clear: how is Facebook to know that the person claiming copyright actually owns the work? Imagine the following scenario: your friend takes a picture you don’t like. In theory, it would be enough for you to claim that you took the photo for the site to be forced to filter it out (or pay). Since it would be an offence for Facebook to publish copyrighted works, it is more important to remove the picture quickly than to question whether the claimant really is the author.

Sites should have made ‘best efforts’ against copyright infringement.

Paragraph 4(b) of Article 13 states that the site must have made ‘best efforts’, “in accordance with high industry standards of professional diligence”, to ensure that copyrighted material is not made unlawfully available. Recital 38b further clarifies that a site must not stop other, legal material from being viewed and used. How this should be done in practice is not clear, especially since no filter yet exists that is advanced enough to avoid accidentally blocking legal material.

In practice, Article 13 may work like this: copyright organisations will provide a list of all copyrighted texts, images, sounds and movies. Sites would then need to create a tool that can cross check this list against everything its users are uploading, as well as record how works have been used and remove unauthorised posts.
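That workflow can be sketched in a few lines of Python. Everything here is hypothetical: the catalogue format, the moderate_upload function and the use of a plain SHA-256 hash as a ‘fingerprint’ are illustrative inventions, and the sketch only handles exact copies, whereas a real filter would have to recognise transformed ones.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a real content fingerprint (exact match only).
    return hashlib.sha256(data).hexdigest()

def moderate_upload(data, catalogue, usage_log):
    """Cross-check an upload against a rights-holder catalogue.

    catalogue maps fingerprint -> True if the site holds a license
    for that work, False if it does not. usage_log records each
    time a catalogued work is used.
    """
    fp = fingerprint(data)
    if fp not in catalogue:
        return "publish"                  # no one has claimed this work
    usage_log.append(fp)                  # record how the work was used
    return "publish (licensed)" if catalogue[fp] else "block"

song = b"audio of a licensed song"
photo = b"pixels of an unlicensed photo"
catalogue = {fingerprint(song): True, fingerprint(photo): False}
log = []

print(moderate_upload(b"my own holiday video", catalogue, log))  # publish
print(moderate_upload(song, catalogue, log))   # publish (licensed)
print(moderate_upload(photo, catalogue, log))  # block
print(len(log))                                # 2 uses recorded
```

Filling such a catalogue with every copyrighted text, image, sound and video, and matching uploads against altered copies of them, is where the real cost lies.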

This sounds impossibly expensive for smaller sites. However, there could be further exceptions:

In part, it is explained in Article 2(5) that if the site is not-for-profit (like Wikipedia), a buying-and-selling site (like Blocket), or a cloud service for internal use (such as Dropbox), then the directive isn’t applicable.

In recital 37a, the Directive also seeks to clarify that not all small services need to be affected by the directive, but only “connected services that play an important role in the digital content market by competing with other online services, such as audio or video streaming services, for the same target group”.

Recital 37a is trying to say that Article 13 should primarily target major sites like Google and Facebook; that is to say, those that threaten Spotify, HBO or other services that clearly generate money for copyright holders. However, 37a isn’t part of the articles, but of the recitals.

Daniel Westman: lawyer and online researcher — watch the full interview here (in Swedish).

Daniel Westman is a lawyer and online researcher. He believes that the placement of the wording in the Directive is important when courts eventually decide how the directive should be interpreted.

“That which courts are bound by is what’s stated in the articles, the recitals are merely interpretation aids. If it were to be legally correct, they should have gone back and reformulated it in the articles themselves”, says Westman.

“The wording may be the result of hard negotiations, where the phrasing was introduced to appease critical voices”, Westman believes, “one way to defer the exact interpretation to a court at a later date”.

“You can see [the recitals] as a legal compromise tool; when you encounter opposing views, you can add them to the recitals and it won’t be a legally binding text”, says Westman.

Clearly, sites are to be forced to the negotiating table with copyright holders because of the directive. That’s the whole point, of course, but it doesn’t automatically mean that all content will remain on the sites. In the same way as it already works on YouTube today, some copyright owners will not want their images, video or music to be posted to Facebook. That’s when it disappears.

Private companies as police, prosecutor and judge?

The most controversial part of the directive is the part that deals with punishment and liability: the state, through criminal liability, wants to force companies to act as police, deleting content before it has even been published. The idea that private companies bear responsibility for citizens’ actions is, of course, not entirely new. Banks, for example, may have some responsibility for stopping customers’ crimes before they are committed, and a landlord can be held responsible as an accessory if illegal activity takes place ‘under their roof’.

The difference now is that the intention is to hand a private company at least partial control over a central human right: freedom of speech. When the state hands sites criminal liability, it signals that particular caution is to be taken, and it then becomes safer for them to remove too much rather than too little.

Lawyer and internet researcher Daniel Westman believes that this is where the dividing line is drawn in the debate; between those who see Article 13 as a matter of freedom of speech and those who see it as purely a market issue. The latter have no problem with this market finally getting help to survive.

“If you see the internet simply as a forum for the market and the distribution of someone’s protected works, then this is quite unproblematic. If you see it as more than that, then it’s problematic”.

Daniel Westman also believes that it’s important to consider what might come next. If we open the door to prevent copyright infringement before publishing, what comes after it? The EU Commission has already talked about preventing terrorist propaganda in the same way, but it’s just as easy to imagine other crimes, such as publishing personal data. Should private companies be responsible for making that distinction and is it worth potentially blocking our ability to communicate in order to curb these crimes?

“Some might think it would be good, but it would be a huge restriction on freedom of expression. That’s why many people take it seriously, even though it can appear to be of little consequence for the individual user”, says Daniel Westman.

Is this the death of memes?

One of the most common points of discussion about Article 13 has been memes, the online culture where people use often copyrighted works to make light of, or offer comment on, the zeitgeist. Will they be banned?

Will memes like this one no longer be legal?

Well, they won’t be banned completely, but memes like this one contain copyright protected material owned by 21st Century Fox, so it’ll likely become stuck in filters. This means that Fox can decide whether they want compensation or if the clip should be blocked. However, in the latest version of Article 13, negotiators have tried to find a workaround in paragraph 5. It states that the following content should be excluded from the directive: “a) citation, critique, or review; b) caricature, parody or pastiche”.

There is, therefore, no explicit exception for memes, since memes are not always a caricature, parody or pastiche. In the image above, a single Simpsons scene is used to make a fun connection with Brexit. Or could it count as a pastiche? All of this will end up being decided by courts, meaning it will be up to judges to determine where the whole of Europe should draw the boundaries of parody and pastiche. This presents an obvious difficulty for the sites.

“The hard thing is to make that assessment beforehand. What’s actually a rip-off of someone else’s work and what’s protected as satire and parody? These are really heavy assessments”, says Daniel Westman.

“Making this decision is hard enough for legal experts and judges, but it’s even more difficult for the young people who just want to post a meme”, Westman continues.

MEPs now need to familiarise themselves with all of this and consider it carefully. Do they see it as a unique opportunity to force online conglomerates to the negotiating table with a crisis-stricken copyright industry? Or will they view it as an excessive curtailment of freedom of expression?

“To be completely honest, nobody knows exactly how YouTube or Facebook will act if they’re handed such a decision. It’s a question of whether they’ll take some risks in order to preserve freedom of speech, or be careful not to expose themselves to legal action”, says Daniel Westman.

This piece is funded by a Kickstarter campaign to monitor the European Parliament’s Copyright Directive proposal during its final stage of voting. Text and images are supplied under CC BY, a license that makes it free to share and redistribute wherever you want, provided you link back here with appropriate credit.

Read the original post in Swedish.

Note: the Visual Copyright Society in Sweden, The Swedish Society of Songwriters, Composers and Authors, and the Swedish Performing Rights Society all declined to be interviewed for this piece, but the Swedish Performing Rights Society have written extensively on the Directive themselves here (in Swedish).
