Who should the self-driving car kill?

The most famous AI moral dilemma

Luisa Simone
Curated Newsletters
4 min read · Apr 24, 2021


The moral issue

When the idea of the autonomous car was born, it was immediately followed by what is still the most discussed philosophical dilemma about artificial intelligence: if forced to choose, who should the car save and who should it kill in a crash? The old woman or the driver? The dog or the man? The child or the group of friends? Who should make this choice, and who is to blame for the death? The car obviously can't be arrested or punished, so should the programmer be punished instead? And what if there was no other way?

Photo by julientromeur on Pixabay

The “Moral Machine”

The debate is so intense that a group of researchers, formed by the social scientist Iyad Rahwan, the behavioural researcher Jean-François Bonnefon and the social psychologist Azim Shariff, designed an online platform called “Moral Machine”, which generates random moral dilemmas and asks you to judge which outcome you think is the lesser of two evils, in order to study people’s decisions. The goal is to gain a clearer understanding of how humans make such choices, but also to study the human perspective on the moral decisions machines have to make. The data collected showed differences among countries and correlations between preferences and certain national metrics.

www.moralmachine.net

Thomas Aquinas and the Principle of Double Effect

The dilemma is much older than artificial intelligence. It finds its roots in Thomas Aquinas, an Italian philosopher who lived in the 13th century! He was a theologian, so he applied it above all to Christian matters. The idea, which came to be known as the “Principle of Double Effect”, is that there is a morally relevant difference between a harmful act committed in order to do harm and the same act committed with a legitimate intention, where the harm is only a side effect. In most cases, an action is not merely right or wrong: it has a series of consequences, each with a different degree of moral acceptability.

The key question is: was there an alternative choice?

The trolley problem

In 1967 the philosopher Philippa Ruth Foot published an article titled “The Problem of Abortion and the Doctrine of the Double Effect”, where she turned the question into a moral dilemma, marking the birth of the famous trolley problem: the driver of a runaway tram, unable to slow down or stop, can only change tracks by means of a switch; he realizes the tram is heading towards a track where five people are tied, while on the other track, which he could reach by activating the switch, there is only one. Should the driver let the tram run its course, killing five people, or activate the switch and kill only one?

The dilemma has had various versions, all with the goal of making the choice harder. In the most famous, called “the fat man”, the only way to save the five people tied to the track is to push a fat man onto it: he would die, but his body would stop the trolley. The psychologist Joshua Greene showed that many people consider it morally acceptable to redirect the trolley toward the single tied person, but consider pushing the fat man onto the track a homicide; with brain scans, he even identified two different areas of the brain that activate depending on the case.

Photo by McGeddon on Wikipedia

The solution

This philosophical problem was not conceived to insist that there is a right or a wrong answer, but precisely to highlight that a person can be brought to situations where negative consequences cannot be avoided. A self-driving car that finds itself in such a situation should simply do everything possible to limit the harm: slow down, swerve, do anything it can for damage control. If someone still dies, it is a tragedy, no matter whether the car had a human driver or was guided by artificial intelligence, because neither would have been able to prevent it.

Once again, we are asking machines to do what not even we can do; since they learn from us, they will never know how to do something we ourselves do not know how to do. We are trying to force them to make a choice we have wondered about since the days of Thomas Aquinas, and we will never find an answer, because the question is basically posed in the wrong way. We cannot determine who should die in a self-driving car crash; instead, we should spend all our energy on avoiding the tragedy, exactly as we do now, while still driving our cars ourselves.

Bibliography

  1. Iyad Rahwan, Jean-François Bonnefon, Azim Shariff, Moral Machine, www.moralmachine.net
  2. Philippa R. Foot, “The Problem of Abortion and the Doctrine of the Double Effect”, in Oxford Review, Oxford University Press, 1967
  3. Joshua Greene, Moral Cognition, www.joshua-greene.net

Luisa Simone

I stepped out of my comfort zone, where I was studying Philosophy, to study AI. Now I’m a kind of hybrid, passionate about the Philosophy of Science and Technology.