Can Algorithms be Racist?

Davide Andrea Zappulli
Science and Philosophy
5 min read · Nov 1, 2020


Photo by Markus Spiske via Unsplash

Twitter’s face-recognition algorithm is racist. Many users drew that conclusion after a viral tweet showed that the algorithm seemed to prefer white faces. There is little doubt that if the code running behind one of the world’s most popular websites were racist, it would be a scandal. But we should be more cautious: does it even make sense to say that an algorithm is racist in the first place?

On closer inspection, claims of this sort tend to rest on a misunderstanding of what Artificial Intelligence (AI) is. So let us first get clear about what AI actually is.

What device are you using to read this article? Your smartphone? Tablet? Laptop? Whatever it is, that device could easily beat the best chess players in the world, brilliant people as we all know. Does that mean your device is particularly smart? Not at all. It only means that machines are extremely good at performing certain tasks. A laptop can do things that would require intelligence if a human were to do them, but in itself it is no smarter than a coffee machine.

This was already well recognised by Alan Turing. In his famous 1950 paper “Computing Machinery and Intelligence”, he wrote that the question of whether machines can think was “too meaningless to deserve discussion”.

One way to think about this is the following. In the phrase ‘artificial intelligence’, the term ‘artificial’ is not what philosophers call a specifying predicate, but a modifying one. In other words, artificial intelligence is not a particular kind of intelligence that happens to be artificial; it is something utterly different, just as an artificial flower is not a kind of flower.

Being devoid of the capacity to think, AI algorithms lack intentions as well. They just run computations according to the way they have been programmed; accordingly, there is no meaningful sense in which, say, Twitter’s algorithm chooses to recognise white people. Indeed, wouldn’t it be bizarre to blame a machine for how it does its computations? A machine is no more open to blame than a gun is for having been designed to kill people. No one would build a prison for bad algorithms: it wouldn’t make any sense.

As Luciano Floridi has noted, the idea of machines being intelligent in a strong sense probably derives from science fiction. But that is where the idea belongs: in novels.

[T]here is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. (Floridi 2016)

But if that is true, that is, if what algorithms do is not thinking, then it seems very unreasonable to regard an algorithm as racist. We can say that people are racist because they think and make choices; an algorithm can no more be racist than your car.

So, problem solved? Unfortunately, the reality is not that simple. True, algorithms cannot be racist, but that does not imply that there is no problem with algorithms and racism, or discrimination more generally. What we have said so far is useful for identifying where the issue really lies.

In fact, the problem does not lie with the algorithms, but with us. AI algorithms are trained on large datasets and work statistically: if the stored data contain biases, the algorithms will probably preserve those biases and often exacerbate them. This fact has enormous ethical consequences, especially insofar as AI algorithms are deployed to make decisions in extremely sensitive domains, such as healthcare and justice.

There are plenty of emblematic cases. To name one: in 2010, New Orleans’ police department was rife with abuses of power and discrimination against black and LGBTQ people. Naturally, as a result, its data on arrests and police reports were deeply biased against black and LGBTQ individuals. Yet when, only a year later, the department decided to start using predictive policing algorithms (which are used to forecast where crimes will occur), it did not clean the data; it fed the algorithm the old records. As an obvious result, the algorithm’s predictions were thoroughly biased against black and LGBTQ people.
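To make the mechanism concrete, here is a deliberately simplified sketch in Python. It is not a model of any real police system: the two neighbourhoods, the numbers, and the rule of patrolling in proportion to past recorded arrests are all invented for illustration. It only shows how a statistical procedure fed a skewed record keeps that skew alive.

```python
# A toy simulation of a feedback loop in "predictive" resource allocation.
# Everything here is invented for illustration: two neighbourhoods with the
# SAME true crime rate, but neighbourhood A starts with more recorded
# arrests because it was over-policed in the past.

import random

random.seed(42)

TRUE_CRIME_RATE = 0.1                      # identical in both neighbourhoods
recorded_arrests = {"A": 300, "B": 100}    # the biased historical record

for week in range(52):
    total = sum(recorded_arrests.values())
    # Naive rule: allocate 100 patrols in proportion to past recorded arrests.
    patrols = {n: round(100 * recorded_arrests[n] / total)
               for n in recorded_arrests}
    # More patrols in a place means more crime gets *recorded* there,
    # even though the underlying crime rate is the same everywhere.
    for n, p in patrols.items():
        recorded_arrests[n] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(p)
        )

print(recorded_arrests)
# Ends with roughly three times as many recorded arrests in A as in B,
# the very imbalance the data started with, despite identical crime rates.
```

The point is not the specific numbers but the structure: the program never chooses anything; it simply mirrors the record it was given, which is exactly why the responsibility for that record remains with the people who collect and curate it.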

New Orleans is not an isolated case; indeed, there are plenty of others. As reported in Nature,

An algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against black people, a sweeping analysis has found. The study, published in Science on 24 October [2019], concluded that the algorithm was less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs. Hospitals and insurers use the algorithm and others like it to help manage care for about 200 million people in the United States each year. (Ledford 2019)

It is easy to see that in these cases the problem was not with the algorithms, but with decisions taken by conscious human beings. It was people who deliberately decided to feed the algorithms biased data. The algorithms were used irresponsibly, and this generated social injustice.

There is an essential point here: it is precisely because algorithms have no intentions and cannot think that we ought to use them thoughtfully. Once one thinks about it, it becomes rather obvious: it is precisely because a kitchen knife cannot prevent itself from being used to kill people that those who use knives must be responsible. The same goes for algorithms.

We should use algorithms to improve our ability to decide, to make our decisions faster and more reliable. However, they cannot wholly replace the human role in decision-making. As Kate Crawford, co-founder and co-director of the AI Now Institute, has pointed out,

If the data itself is incorrect, it will cause more police resources to be focused on the same over-surveilled and often racially targeted communities. So what you’ve done is actually a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality. (Hao 2019)

The conclusion is simple: artificial intelligence does not relieve us of responsibility. We can use it in wonderful ways, but believing that all problems will be solved once we put a program in charge of deciding, in our place, about health insurance or whose door the police should knock on is just faith, and a very misplaced kind of it.

References

Courtland, Rachel. 2018. ‘Bias Detectives: The Researchers Striving to Make Algorithms Fair’. Nature 558: 357–60.

Douglas Heaven, Will. 2020. ‘Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.’ MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.

Floridi, Luciano. 2016. ‘True AI Is Both Logically Possible and Utterly Implausible’. Aeon. 2016. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible.

Hao, Karen. 2019. ‘Police across the US Are Training Crime-Predicting AIs on Falsified Data’. MIT Technology Review. https://www.technologyreview.com/2019/02/13/137444/predictive-policing-algorithms-ai-crime-dirty-data/.

Ledford, Heidi. 2019. ‘Millions of Black People Affected by Racial Bias in Health-Care Algorithms’. Nature 574 (7780): 608–9.

Searle, John. 1999. ‘Chinese Room Argument’. In The MIT Encyclopedia of the Cognitive Sciences. Cambridge, Massachusetts: MIT Press.
