Moral Algorithms and the Right to Be Forgotten

Matt Muller
MattMuller.info
Aug 21, 2015

A lot of ink has been spilled (pixels lit up?) over Europe’s right to be forgotten — the concept that individuals have the right to control how information about them is spread and used across the internet. It’s a concept that, at a very abstract level, is hard to argue with. It’s right up there with “poverty is bad” and “the law should be fairly applied to everyone.” Unfortunately, the current application of this right to be forgotten leaves a lot to be desired.

The right to be forgotten stems from the notion of informational self-determination. In other words, individuals can’t be compelled to consent to unlimited data collection, processing, and sharing. If I decide I no longer want to do business with a company, I should be able to tell them that they no longer have the right to manipulate or profit from the information I’ve shared with them.

Now, however, this principle is being applied more broadly: informational self-determination is being extended to information shared about me, not just information shared by me. This is a critical distinction — if Google publishes a link to an unflattering blog post about me, for example, I can now tell Google, “Hey! That’s information about me that I don’t want you to use.” Effectively, I can force Google to remove a link from its index.

Now, there are some limitations on this. In general, Google is only required to remove links that are “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed.” Of course, the underlying websites remain active — it’s just significantly harder to find them.

Okay, so a link about me is removed because it’s no longer relevant. What if someone then publishes a news article stating that I had a link removed because it’s no longer relevant? The EU has my back: a court just ruled that Google must also remove links to articles reporting on those very removals. It’s just turtles (and removal requests) all the way down.

A lot of people in the US see this as censorship. A common refrain here is that “the best antidote to speech you dislike is more speech.” In other words, the remedy is to drown out the outdated, perhaps unfavorable information with fresher information that provides better context.

This gets to the core of what the right to be forgotten is all about, and why the current solution is inadequate. People care about removing old links because sometimes old information doesn’t reflect who they are now. And yes, I’m assuming for a moment that the right to be forgotten works perfectly, and that corrupt politicians who want to whitewash their pasts are prevented from submitting illegitimate information removal requests.

Humans use past behavior as an imperfect predictor of current and future behavior. It’s why we run background checks on job applicants, and why we don’t loan money to people who have never paid us back.

It’s either fortunate or unfortunate (depending on the circumstance) that people change. However, I have yet to see a search engine designed specifically to embrace human imperfection. The most common signals that these algorithms use to surface search results — inbound links, number of clicks, etc. — only reflect popularity, not accuracy. The most popular depiction of me may not be the most accurate.
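To make that concrete, here is a deliberately naive sketch of a popularity-only ranking score. The signals, weights, and pages are my own illustrative assumptions, not any real engine’s formula; the point is simply that nothing in it measures whether a page is accurate or current.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    inbound_links: int  # how many other pages link here
    clicks: int         # how often searchers clicked this result

def popularity_score(page: Page) -> float:
    """Score a page purely on popularity signals (hypothetical weights)."""
    return 0.7 * page.inbound_links + 0.3 * page.clicks

# A well-linked, decade-old page beats a fresher, more accurate one.
results = [
    Page("old-blog.example/me-in-2005", inbound_links=900, clicks=5000),
    Page("my-site.example/about-me-now", inbound_links=12, clicks=40),
]
results.sort(key=popularity_score, reverse=True)
print([p.url for p in results])  # the 2005 page ranks first
```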

This is, quite literally, bullshit.

Philosopher Harry Frankfurt defined bullshit as a “statement grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true. It is just this lack of connection to a concern with truth — this indifference to how things really are — that I regard as of the essence of bullshit.”

Google’s search results are bullshit. Sure, it’s the most popular search provider in the world right now, but the internet is a distributed system. All of the individuals who have expended their time and effort on getting a few hyperlinks removed from Google’s indexes will be right back at square one when the search giant is deposed by whatever comes next.

That’s why, when it comes to information about people, our search algorithms need to be moral. We should be far more focused on creating a fair, accurate representation of who someone is now, and not a distorted picture of who they were as a 15-year-old with a MySpace.

There are a lot of ways we can solve this. A community-driven search engine might promote a page of information that I’ve curated, alongside feedback and information about me that has been contributed independently or indexed by algorithms. It might include mechanisms for disambiguating search results, so that only information about me would show up on my results page. It could place an even stronger emphasis on current information, since information about me from a decade ago (both good and bad) is less relevant at this point.
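As a rough sketch of how such an engine might combine these ideas (recency decay, subject curation, and community feedback), here is one possible scoring function. Every name, weight, and the one-year half-life below are hypothetical choices for illustration, not a worked-out design.

```python
import math
import time
from typing import Optional

HALF_LIFE_DAYS = 365.0  # assume information loses half its weight per year

def person_page_score(relevance: float, published_ts: float,
                      curated_by_subject: bool, community_votes: int,
                      now: Optional[float] = None) -> float:
    """Blend topical relevance with recency and provenance signals."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - published_ts) / 86400.0)
    recency = 0.5 ** (age_days / HALF_LIFE_DAYS)     # exponential decay
    provenance = 2.0 if curated_by_subject else 1.0  # boost self-curated pages
    endorsement = 1.0 + math.log1p(max(0, community_votes))
    return relevance * recency * provenance * endorsement
```

Under this weighting, a page I curated last month outranks an equally relevant page from a decade ago, unless the community has overwhelmingly endorsed the older one.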

There are plenty of flaws with this approach too. But rather than just naysaying Europe’s right to be forgotten, we should focus on building solutions, and putting pressure on the gatekeepers of information to show us what’s true, and not just what’s popular.
