The ‘right to be forgotten’ is absurd; let’s aim for the ‘right to explain yourself’

The right to be forgotten (“TRTBF”) fiasco between the EU and Google is not getting the media attention and scrutiny it deserves, but it will surely feature in future history lessons as an anecdote illustrating a complete failure to comprehend reality, coupled with a lack of creativity on both sides.

This must be the work of a yet-to-be-discovered fifth fundamental force of physics: when a legal dispute involves outdated law and new technology, sophisticated entities elect to enter every available dead end before reaching their mutual destination.

What is it all about?

The best way to explain TRTBF is by telling the story of one case of its actual implementation. So, consider this 2003 article published by the British Telegraph, covering a four-year sentence handed down in Paris to a British woman who ran a ring of 600 call girls.

While there is no dispute that this article is accurate, concerns a public affair widely covered at the time, and remains lawfully available online today on The Telegraph (as well as on other news sites), it is omitted from Google’s search results (on Google’s EU domains) when Googling the name of one person mentioned in that article and relevant to its content. Instead, Google displays the following notice at the bottom of the censored results page:

“Some results may have been removed under data protection law in Europe. Learn more”

Yet the same censored link to the article remains available when Googling other content relevant to the article, and is also available when Googling that person’s name from the Google.com domain or other domains outside the EU.

The anonymous individual responsible for the above example is one of 320,000 (and counting) EU residents who elected to fill out the form Google was ordered to provide under a ruling by the Court of Justice of the EU, as part of a process meant to prevent the public from reaching lawfully available information online when Google (and other search engines) judge it to be:

“inadequate, irrelevant, no longer relevant, or excessive”

What have we got here?

As a result of that ruling, scores of dedicated Google employees are tasked daily with deciding for the EU population (and, if the French regulator has its way, for the entire globe) what sort of accurate and available online information relating to a certain individual is not relevant to us, and so should be harder (in fact, impossible for the less tech-savvy) to access.

If you are thinking about actual privacy-infringement issues such as stolen private data, information about sex-crime victims, or hurtful data about minors, think again: that kind of understandable “censorship” is handled by other laws and has been implemented by Google for years, without any connection to TRTBF.

No, this concerns us Googling a potential business partner and remaining ignorant of the fact that he was publicly declared bankrupt ten years ago, or forged checks and paid his debt to society; or Googling a potential spouse while remaining in the dark about online interviews she gave to several large newspapers regarding her previous profession as a top escort; or being oblivious to the fact that a person you are in contact with was an active Nazi sympathizer and recently asked Google to remove interviews he gave as such, which remain available online on several prominent websites.

Why can this not stand?

Whether you embrace or disregard the motives behind TRTBF, including the genuine human compassion of allowing a person not to be hunted or humiliated by certain past published items he was responsible for, and perhaps even deeply regrets today, those motives have no bearing on the need for action to abolish it.

The reason is simple and has nothing to do with the right to free speech, which Google tried in vain to invoke in this case. Because the EU ruling and law apply only to search engines, and specifically allow the content censored from Google to remain freely available on other websites, including in aggregate form, they abuse a far more fundamental and absolute human right: social equality.

The EU regulations allow individuals with the right kind of “Internet education” to easily access information that is out of reach for less fortunate individuals who rely solely on the censored Google engine to gain knowledge. This kind of modern, gross social inequality is humiliation in its ultimate form and cannot be accepted in this era. When balancing the right of an individual to control Google results concerning his or her personal data (lawfully available online) against the public’s right to know and distribute information, one can argue for TRTBF. But when balancing it against the right to social equality in accessing publicly and lawfully available information, TRTBF must surrender completely, just as a library cannot decide that certain books are available to tall people alone while prohibiting the use of ladders by the rest of the population.

What can be done but was not suggested by Google?

Leave the current TRTBF process in place, but change its outcome (and its name…). Instead of deleting links to content when a person’s name is Googled, keep the links in question as-is, and allow the individual who filled out the Google form to attach, next to each such link, content or a link of his choosing, giving his own explanation or view of the content about him that he considers “inadequate, irrelevant, no longer relevant, or excessive”.

In other words, instead of unequally censoring certain data from a defined group of individuals, which is a built-in feature of TRTBF, allow all individuals to get more relevant data about the person they Googled, and give him the opportunity to convince those interested in his past that such lawfully displayed content about him is not relevant today.
