Digital prejudices

Martin Vetterli · Published in Digital Stories · 3 min read · Nov 23, 2018

Where we learn that computers can be as good as humans at recognising and discriminating things, but with the same unfortunate consequences.

Photo by Mika Baumeister on Unsplash

One of the most remarkable abilities of the human cognitive system is its power of generalisation. For instance, in order to recognise a cat, one doesn't need to have explicitly studied all possible cat breeds. Even a strange-looking, hairless cat that one has never seen before will quite easily be identified as a cat. However, this power of generalisation can be hijacked, wittingly or not, leading to human prejudices. Can the same happen to computers and algorithms?

Recent progress in the field of artificial intelligence lets machines achieve near-human performance in certain tasks, such as recognising objects or understanding language. These tasks, just like the cat example above, rely on the computer's ability to infer general rules from a limited set of examples. It is tempting, therefore, to speculate that artificial intelligence will one day give us algorithms with the same power of generalisation that humans possess, but without the prejudices. Unfortunately, things are not so simple.

As an example, suppose that I want to train a computer to infer the IQ of a person just from a photo of their face (I know this sounds ridiculous, but such approaches unfortunately do exist). To do so, I will need to prepare a training set with hundreds of pictures of people, together with their known IQ scores, so that an algorithm can search for a hidden link between face type and IQ. If the algorithm finds such a link, the system will be able to generalise that rule to previously unseen faces, and thus guess a person's IQ from a photo alone.
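To make this setup concrete, here is a minimal sketch in Python of such a supervised-learning pipeline. Everything in it is illustrative: the scikit-learn regressor is my choice, and random feature vectors stand in for face photos, since the point is the workflow (train on labelled examples, then generalise to unseen ones), not an actual IQ predictor.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for the dataset: each "photo" is a feature vector
# (in reality, pixels or a face embedding), paired with an IQ score.
n_people, n_features = 500, 32
photos = rng.normal(size=(n_people, n_features))
iq_scores = rng.normal(loc=100, scale=15, size=n_people)

# Split into a training set and previously unseen faces.
X_train, X_test, y_train, y_test = train_test_split(
    photos, iq_scores, test_size=0.2, random_state=0
)

# "Training" means searching for a link between features and IQ...
model = Ridge().fit(X_train, y_train)

# ...and "generalisation" means applying that link to new faces.
predicted_iq = model.predict(X_test)
print(predicted_iq[:5])
```

Because the stand-in features here are pure noise, any link the model finds is spurious by construction, which is exactly the danger the next paragraph illustrates.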

Of course, the training set should contain as many photos as possible, ideally of people of all intelligence levels. But suppose that, through distraction, oversight (or simple human nature!), we use a database that contains many photos of intelligent people with black hair and only a few of intelligent people with blonde hair. The machine, which has no access to the immensely more complex context of what it means to be human, could then infer that, in general, black hair is an indicator of high IQ, as the sketch below shows. Similar unfair discrimination has in fact happened in the past, for instance on job-listing websites that automatically showed higher-paying offers to male applicants.
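A toy demonstration of this failure mode, with all numbers invented for illustration: we sample a dataset in which the high-IQ label happens to co-occur with black hair, fit a standard classifier, and then inspect which feature it relies on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Biased training set: "intelligent" examples are mostly black-haired,
# even though hair colour has nothing to do with intelligence.
# Feature 0 encodes hair colour (1 = black, 0 = blonde).
n = 1000
hair_black = rng.random(n) < 0.5
high_iq = np.where(hair_black,
                   rng.random(n) < 0.9,   # black-haired: 90% labelled high IQ
                   rng.random(n) < 0.1)   # blonde: only 10% labelled high IQ

noise = rng.normal(size=(n, 4))           # other, uninformative features
X = np.column_stack([hair_black.astype(float), noise])
model = LogisticRegression().fit(X, high_iq)

# The learned weight on hair colour dwarfs all the others: the model has
# turned a sampling artefact into a "rule" about intelligence.
print(model.coef_.round(2))
```

The classifier does its job flawlessly; the prejudice lives entirely in how the dataset was assembled.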

Luckily, this phenomenon, called algorithmic bias, is well known and is being actively studied as we speak. However, this does not mean that the algorithm itself is wrong! It simply reflects the errors (or prejudices) that the researcher committed in preparing the training dataset. But it shows once more that machines, like humans, are only as intelligent as their set of experiences allows them to be. And faced with a biased world, an algorithm will be no better than a narrow-minded bigot.
