Should the car kill the Dog or the Grandma? — The Trolley Problem

Abhinav Tripathy
Published in AT Blog
Jun 22, 2019 · 8 min read
Sebastian Thrun TED Talk (Source)

While casually browsing the internet, I came across this TED talk by Sebastian Thrun, in which he talked about losing his best friend to a car accident and how that loss led him to dedicate his life to developing self-driving cars. His story resonated with me, as I lost my grandfather to a car accident. Thinking about my grandfather fueled my enthusiasm to explore the technology of self-driving cars and how “safer” technology could possibly have saved him. As I dug into self-driving cars, I realized that ethics is a big concern, and that it is hard to imagine how we would teach machines a moral code. This sparked my interest in ethics in AI, and in self-driving cars specifically. The deeper I looked, the clearer it became how big the debate around ethics in AI had grown, and it became obvious that I would carry out some exploration of my own. My research journey started with one fundamental question:

Should a car drive on its own?

This question would have been a clear “no” a decade ago, but now it’s a “maybe.” From the current trajectory, it seems that in the next few years it will be a clear “yes.” It is hard to fathom how much technological development has happened in just ten years, largely due to advances in Artificial Intelligence that have fueled the rise of self-driving cars. The progress is visible in the fact that Google’s Waymo has launched a public trial of its self-driving cars in Arizona. But taking a step back, one must ask why the conversation about self-driving cars is happening at all and why it is relevant.

“Self driving cars are the natural extension of active safety and obviously something we should do” — Elon Musk

The idea of self-driving cars becomes crucial in the context of making driving safer, more reliable, convenient, and friendly. Making the car “safe,” however, begs the question of how it should make important decisions on the road, and some decisions always involve trade-offs. For example, would a car hit the pedestrians in order to save the passenger, or the other way around? These ethical questions are beyond the comprehension of a machine, and this is where the question of ethics in AI arises. This paper explores ethics in AI, specifically through the lens of the trolley problem.

The trolley problem is a thought experiment in ethics: a runaway trolley is heading down a track that splits in two. On one track there are five people; on the other there is one. You have the choice to pull a lever and change where the trolley goes. The question is whether you would pull the lever and kill the one person, or make no move and let the trolley kill five people. There are various arguments for either option, but it is a social dilemma with no definite answer. The problem gets even more complex once more variables are introduced, such as age, ethnicity, and relationship. For example, what if the one person on the track were a relative of yours: would you spare your relative and let five people die? Applied to self-driving cars, the trolley problem asks what trade-offs should be made when a self-driving car loses control but can still operate some of its systems. A specific scenario is when the brakes stop working but the car can still steer. If it must either crash into pedestrians or put the passenger at risk, what should it do?

A Visual Map of LiDAR and Cameras (Source)

To fully understand why ethics in AI and self-driving cars is such a difficult problem, one must first understand what technologies power self-driving cars, how they work, and why the element of ethics is hard to integrate with them. Self-driving cars rely on various sensors as input, chiefly cameras and LiDAR (Light Detection and Ranging). The processing is done by an Artificial Intelligence technique called deep learning, which forms the crux of the powerful “mind” behind self-driving cars.

“Deep learning is the new electricity” — Andrew Ng

Through deep learning, the self-driving car is able to understand the locations of other vehicles, choose a path, and work out where it should go and how it should get there. Under the hood, deep learning is heavily mathematical: it is essentially a way of calculating probabilities. Encoding moral code and ethical practices in such a machine is therefore extremely hard, because those are not easily quantifiable, and so adding ethics to self-driving cars and AI becomes a massive challenge.
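To make this concrete, here is a toy Python sketch (with made-up class names and scores, not any real self-driving stack) of the kind of output a deep-learning perception model produces: a set of probabilities. Notice that nothing in this pipeline has a natural slot for moral values.

```python
import numpy as np

# Toy illustration only: a perception network's final layer produces raw
# scores for what an object ahead might be, and a softmax turns them into
# probabilities. Every downstream choice the car makes is driven by
# numbers like these.
def softmax(scores):
    exp = np.exp(scores - np.max(scores))  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical raw scores from a detector for one object in the camera frame.
class_names = ["pedestrian", "cyclist", "vehicle", "dog"]
raw_scores = np.array([2.1, 0.3, 1.5, -0.4])

probabilities = softmax(raw_scores)
for name, p in zip(class_names, probabilities):
    print(f"{name}: {p:.2f}")

# There is no natural place in this pipeline to plug in "moral worth" --
# the model only knows likelihoods, which is why encoding ethics is hard.
```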

A big research project called the Moral Machine, created at MIT and headed by Iyad Rahwan, investigates the trolley problem specifically. In a TED talk, Rahwan speaks eloquently about heading the Moral Machine experiment and why it matters for self-driving cars. The team built an online survey that asked participants whom they would rather save in situations resembling the trolley problem for self-driving cars, varying the age and even the species of the pedestrians to find out what people care about across countries and cultures. He notes that some participants chose to save a dog over a baby in a stroller. He also points out a conflict: people want to do good for the whole human race by saving the maximum number of lives, but they do not want to put themselves in harm’s way, and these two impulses complicate the whole equation. As the situation presents itself, one has to compromise oneself (as a passenger) to save the pedestrians. Though people voted for this option the most, they also said they would never buy a car that would put them, as passengers, in harm’s way.

These observations from the Moral Machine experiment connect to two major ethical theories, inspired by the philosophers Jeremy Bentham and Immanuel Kant. Bentham’s utilitarian approach says the car should minimize total harm, even if that means putting the passenger or a bystander at risk. Kant’s view is that the car should follow “duty-bound principles”: it should never explicitly make a move intended to kill a human being, so it should stay its course. When participants in the Moral Machine experiment were asked whom they agreed with, most sided with Bentham’s argument. Yet the same people, as mentioned before, said they would not buy such a car, because they want to protect themselves at all costs; after all, the whole argument for self-driving cars is “safety.”
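As a rough illustration, the two theories can be written as two different decision rules. Everything in the sketch below (the scenario, the harm counts, the option names) is hypothetical; the point is only that the same brake-failure situation resolves differently under each rule.

```python
# Hypothetical brake-failure scenario: the car can stay on course or swerve.
SCENARIO = {
    "stay_course": {"pedestrians_harmed": 3, "passengers_harmed": 0, "requires_swerve": False},
    "swerve":      {"pedestrians_harmed": 0, "passengers_harmed": 1, "requires_swerve": True},
}

def utilitarian_choice(options):
    # Bentham-style rule: pick the action that minimizes total harm,
    # regardless of who bears it.
    return min(options, key=lambda o: options[o]["pedestrians_harmed"]
                                      + options[o]["passengers_harmed"])

def duty_bound_choice(options):
    # Kant-style rule (as described above): never take a deliberate action
    # aimed at a person; if every alternative requires one, stay the course.
    for name, outcome in options.items():
        if not outcome["requires_swerve"]:
            return name
    return next(iter(options))

print("Utilitarian:", utilitarian_choice(SCENARIO))  # -> swerve (1 harmed vs 3)
print("Duty-bound: ", duty_bound_choice(SCENARIO))   # -> stay_course
```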

To further understand how ethics might apply to machines, one needs to revisit how rules of machine behavior were first laid out. Isaac Asimov, often considered one of the fathers of robotics, laid out his three laws of robotics, which describe what the main rules of robot and machine behavior should look like. His three laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Decades later, having seen the ethical dilemmas these laws could create, Asimov added a zeroth law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Saving humanity perhaps implies a utilitarian approach, but it is hard to know what he truly meant by the zeroth law and how it would apply to today’s self-driving cars. While Asimov may not have left a definitive explanation of his laws, the change does hint that ethics was becoming an integral part of how machines should function.

Ethics, at its heart, is about weighing risks and minimizing them. While the Moral Machine offers an early picture of societal opinion, the debate is far from over. Considering what should probably be done, the utilitarian approach may be versatile for many laws in society, but in this case it may not be viable: with a purely utilitarian car, people will simply not adopt the technology, which defeats the purpose of having self-driving cars. These cars are built for the masses and for their benefit, so utilitarianism creates a bigger problem in the form of slower adoption of safer technology, which is what matters most.

Though the question of ethics in AI is an ongoing debate, perhaps the answer lies in Immanuel Kant’s approach, though not entirely. Building a perspective on an ethical dilemma is extremely difficult, but one must first consider the stakeholders. The important point is that each stakeholder, the car manufacturer, society as a whole, and the passenger, should have an equal say. Note that the passenger is of course part of society, but once a person is inside the car as a passenger the dimension changes: they may see things differently and must be considered independently. Keeping this in mind, an algorithm needs to be designed that takes each of these stakeholders into account when making a decision. The best option could therefore be a tweaked form of Kant’s approach: by default, let the car allow the situation to take its course, but tweak it so that each stakeholder has some say, specifically when deciding the probabilities of whom to save.
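A minimal sketch of what such a stakeholder-aware rule might look like is given below. The equal weights, the acceptability scores, and the threshold are purely illustrative assumptions, not a proposal for real deployment.

```python
# Illustrative "tweaked Kant" rule: default to staying the course, but let
# each stakeholder (manufacturer, society, passenger) weigh in before the
# default is overridden. All numbers here are hypothetical.
STAKEHOLDER_WEIGHTS = {"manufacturer": 1 / 3, "society": 1 / 3, "passenger": 1 / 3}

def decide(intervention_scores, threshold=0.5):
    # Each stakeholder scores how acceptable an intervention (swerving)
    # would be in this scenario, on a 0-1 scale.
    support = sum(STAKEHOLDER_WEIGHTS[s] * intervention_scores[s]
                  for s in STAKEHOLDER_WEIGHTS)
    # Kantian default: do not intervene unless weighted support clears the bar.
    return "swerve" if support > threshold else "stay_course"

print(decide({"manufacturer": 0.4, "society": 0.9, "passenger": 0.2}))  # stay_course
print(decide({"manufacturer": 0.7, "society": 0.9, "passenger": 0.6}))  # swerve
```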

To conclude, we have gone from calling machines “it” to calling them “them.” We have made huge strides in “humanizing” machines, but machines still have a long way to go when it comes to incorporating ethics and morality. Even so, ethics in self-driving cars is an important topic, because a machine is deciding between life and death in a fraction of a second. To begin solving it, we need to come together as a society and contribute to the conversation about ethics in self-driving cars, since policies and government regulations are just a reflection of societal values.

Originally written as a research paper in December 2018 (Source).
