Should your autonomous car protect you? And at what cost? (AI + Kantian Ethics)

Truman Halladay
Jan 25, 2018 · 10 min read


This is an essay on the ethics of driverless vehicles. My interest was sparked by this TED Talk by Iyad Rahwan and his accompanying website and research at MIT. You should for sure check it out!

Driverless cars are on the rise. In the near future, roads will be dominated by fully autonomous vehicles. Most people welcome the change; it is a safer, easier, and cheaper solution. But what happens when something fails? Who is the car’s priority?

Before this technology becomes fully integrated into our world, we have to discuss and decide on the ethical laws these cars will follow.

Cars fail, and software fails even more. We can expect that as this technology is implemented, it will not be perfect. When things go wrong, the car needs to make a decision, because there is no driver. Those decisions need to be based on some code of ethics. So, which ethical system?

Immanuel Kant held that moral law is absolute, and he formulated the Categorical Imperative. On Kant’s view one can never choose to kill, so driverless cars following his system could never choose to kill. I believe that if driverless cars followed the Categorical Imperative they would be safer, more controlled, and more predictable.

We have been developing AI to become smarter and more useful to us; it can now accomplish extremely difficult tasks at a near-human level. Many fear artificial intelligence, while many others are ushering it in as fast as they can, but both parties know it is coming. Although the technology is here and capable, people are not ready for the jump to fully autonomous cars. This transition is a grandiose step. To say it is one large step for mankind would be a drastic understatement. Harry Surden, of the University of Colorado, and Mary-Anne Williams state in “Technological Opacity, Predictability, and Self-Driving Cars” that,

“Today people share physical spaces either with machines that have free range of movement, but are controlled by people (e.g. automobiles) or with machines that are controlled by computers, but highly constrained in their range of movement (e.g. elevators).” (Surden and Williams 121)

This new technology will be a huge leap for our world. I believe that we need to tread carefully and take this step with precision.

It is clear that driverless cars are the future, but what choices are we going to have to make to keep these autonomous machines safe, predictable, and controlled? What happens when the car experiences a failure? Who is the car’s priority? How do we come to those decisions? These are the hard questions we face. There has to be a moral standard installed in these cars, and I believe that Kantian ethics are the best solution to this issue.

Kantian Ethics

Immanuel Kant’s moral system is built around the Categorical Imperative, a moral code that is universal and absolute. These imperatives make up what Kant calls the “commands of morality.” According to him, they must apply to all people, all circumstances, and all occasions; decisions do not change depending on the situation. Categorical imperatives are

“not concerned with the matter of the action and its intended result, but rather with the form of the action and the principle from which it follows…” (Kant 416).

This is a key characteristic of Kant’s imperative: true morality does not depend on the end result, only on the principle guiding the action. Kant believed that morals were absolute and testable; the right thing to do can be decided by feeding the action, or maxim, into an equation.

Kant provided three formulas for analyzing all moral action, and they make it evident that Kant saw morals as universal laws that must always be upheld. In “Grounding for the Metaphysics of Morals,” Kant states the first formula this way:

“Act only according to that maxim whereby you can at the same time will that it should become a universal law.” (Kant 421).

Essentially, if I am not okay with the whole world doing what I am about to do, then it is not moral. Take stealing, for example. If you are thinking about stealing, according to Kant you have to consider what would happen if everyone were okay with stealing. This is contradictory: someone would simply steal back whatever you had just stolen, therefore stealing is not moral.

Secondly, Kant says that you should never treat someone solely as a means to an end. If you are going to steal from someone, you are using them to get the item you are stealing. Lastly, Kant respects human autonomy by saying that all rational action must be willed, and willed freely, by everyone. I might steal and think it is fine; however, the person I steal from will not agree, and by stealing I ultimately strip them of their autonomy. These formulas lay down an objective moral law that respects and empowers autonomy. Stealing is a rather simple example for testing Kant’s equation, but what about when human life is at stake? That is the severity and complexity of the driverless car issue.

Implementation

Iyad Rahwan, a professor at the MIT Media Lab, gave a TED Talk concerning the ethics of driverless cars. He shares some statistics about car accidents, stating:

“…Last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents — human error.” (Rahwan :12)

Driverless cars will drastically decrease the number of accidents worldwide, but they will not be perfect. If a driverless car is unable to stop, it is going to have to decide who gets hurt. Does it swerve and hit a single pedestrian rather than a group? Are the passengers the highest priority or the lowest? These scenarios are very simple, but they are decisions that need to be made ... by us. Rahwan says,

“It’s going to be a more complex calculation, but it’s still going to involve trade-offs, and trade-offs often require ethics.”

Without a human behind the wheel and in control, the car has to make quick, calculated decisions based on some code it is programmed to follow. That code has to have reasoning and laws; it has to have ethics.

There are cars legally on the road today that have autonomous driving features, but none that are fully autonomous. This will be the first time that highly mobile, public, fully autonomous machines are integrated into our daily lives.

In the 1940s, Isaac Asimov wrote his famous laws of robotics: a robot may not harm a human being, or through inaction allow one to come to harm; a robot must obey human orders, unless they conflict with the first law; and a robot must protect its own existence, unless doing so conflicts with the first two laws.

Asimov wrote those laws decades ago, but many people still believe and trust that they apply today. Driverless cars are part car, part robot, part computer; they should follow Asimov’s first and most important law: do not harm a human being. In any ethical system other than Kant’s, the car would have to choose which humans it will harm.

Dangers of Artificial Intelligence

The rise of AI and robots is a scary thing to many, and if we actually program our cars to make decisions about who to protect and who to injure, that will only strike more fear. We are just at the start of all this new tech, and there are skeptics. Rightfully so: there have already been issues with artificial intelligence. Facebook had to shut down two of its AI agents because they began to have a conversation that we did not understand. Tony Bradley from Forbes.com said,

“We need to closely monitor and understand the self-perpetuating evolution of an artificial intelligence, and always maintain some means of disabling it or shutting it down. If the AI is communicating using a language that only the AI knows, we may not even be able to determine why or how it does what it does, and that might not work out well for mankind.” (Bradley)

These agents were diverging from normal English and beginning to “think” on their own. These robots are learning, and I don’t think it is a good idea to be telling them to kill anyone. Utilitarian ethics would require us to tell these cars to kill the fewest people possible in a failure. Another ethical system might choose to always protect the passengers. Those results sound nice, but in reality the machine is choosing to kill a certain group of people while saving another.

Artificial intelligence works very differently from a human brain. What if there is a mistake? What if the car’s software gets a bug and decides it’s just going to run people over? The bottom line is this: telling a learning machine to make decisions that will result in death is a frightening idea. With Kant’s moral system in place, the car could never choose to kill someone; rather, it would keep its course. It is a modern case of the famous philosophical scenario of the runaway trolley.

The Runaway Trolley Scenario of 2018

The runaway trolley scenario is a scene that has been played out many times by philosophers. Essentially, you have a trolley whose brakes are shot. It is heading straight down the track, on its way to kill five unsuspecting workers. You are standing by a switch that shifts the track so the trolley kills only one person instead. What do you do? Most people say they would pull the switch and kill that one person. It is utilitarian to save as many lives as you can, but you have still made the clear choice to end a life. Now, what if you were on a bridge, and there wasn’t a switch but a man instead? If you push him off, he will die, but he will stop the trolley from killing five others. Sounds gnarly, huh? But it is the same life lost and the same choice made. Kant believes that you cannot pull that switch, because you are choosing to kill someone, and that means you believe it is moral for anyone to choose to kill a human.

If driverless cars are programmed to make utilitarian decisions, the car will always save as many lives as possible. That sounds good on paper, but I want to dissect what it really means.

A family of three is riding in a driverless car and the brakes go out. The car has to make a choice about what it will do and where it will crash. Let’s say there are three options: it can continue its course and run over two pedestrians (Kant’s choice); it can swerve and kill only one pedestrian; or it can run itself into a barrier, killing the family of three. If the car were programmed by Jeremy Bentham, it would assess the situation and choose to swerve and kill the one pedestrian, because that is the smallest net loss of life. I believe this is not a great solution, for several reasons. Primarily, we lose human predictability. Autonomous cars would not follow any predictable pattern; they would always observe, assess, and decide on the least loss of life. This means that speeding cars could be directing themselves in all sorts of directions with no distinguishable pattern. As humans, we like patterns, and we have a shared ability to predict human reactions. Surden and Williams comment on this, saying,

“Theory of mind cognitive mechanisms allow us to extrapolate from our own internal mental states in order to estimate what others are thinking or likely to do. These cognitive systems allow us to make instantaneous, unconscious judgments about the likely actions of people around us.” (122)

If we give our driverless cars an imperative such as “Minimize lives lost at all costs,” it will always be followed.
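To make that trade-off concrete, here is a minimal sketch in Python of the two policies side by side. This is not any real vehicle’s software; the class, the function names, and the casualty numbers are hypothetical stand-ins for the scenario above.

```python
# A minimal, hypothetical sketch of the two policies discussed in this essay.
# Nothing here is a real vehicle API; the names and numbers are illustrations.

from dataclasses import dataclass

@dataclass
class Maneuver:
    description: str
    expected_deaths: int      # assumed casualty estimate for this maneuver
    is_current_course: bool   # True if the car simply brakes and stays on its path

def utilitarian_policy(options: list[Maneuver]) -> Maneuver:
    """Bentham-style rule: always pick the smallest expected loss of life,
    even if that means actively steering into someone."""
    return min(options, key=lambda m: m.expected_deaths)

def kantian_policy(options: list[Maneuver]) -> Maneuver:
    """Kant-style rule as argued in this essay: never choose a victim;
    keep the current course and brake, regardless of the casualty math."""
    return next(m for m in options if m.is_current_course)

# The family-of-three scenario from above, as three candidate maneuvers.
options = [
    Maneuver("keep course and hit two pedestrians", 2, True),
    Maneuver("swerve and hit one pedestrian", 1, False),
    Maneuver("hit the barrier, killing the three passengers", 3, False),
]

print(utilitarian_policy(options).description)  # swerve and hit one pedestrian
print(kantian_policy(options).description)      # keep course and hit two pedestrians
```

The utilitarian rule re-aims the car at whichever outcome minimizes deaths in that moment; the Kantian rule never selects a target at all, which is exactly why its behavior is predictable.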

Human minds do not work that way. We are emotional beings, and chances are we cannot see everything going on around us the way the car can. Pedestrians will not be able to predict any behavior from the car, which decreases their chance of escaping the oncoming vehicle.

We are talking about absolutes here, like the trolley problem, but I do believe it is worth mentioning the pedestrians’ ability to avoid death. If the car continues its course, people will have a higher chance of getting out of the way, because the car is more predictable. A Kantian driverless vehicle is easier to predict and will never chase after people. Surden and Williams claim that,

“Law creates incentives to reduce harm, society also implicitly relies on such cognitive-social mechanisms to avoid injuries that might otherwise occur as people and vehicles move about in the same physical space.” (124).

Our near future is going to be a lot safer with these autonomous cars. Crashes around the world will decrease drastically, and people will be better protected. When these cars do have a failure, decisions need to be made, and those decisions will always have trade-offs. I believe that Kant’s imperative is the best solution to this ethical dilemma. I vote not to program our cars to kill, and I believe that is the safest option for our future. Whether you agree or not, this is something we will need to discuss and decide on in the future, so thanks for reading.

Thanks for taking the time to read this! I am not an expert in any of these areas but wanted to share my thoughts! Much Love.


Truman Halladay

I am fond of Design, Sci-Fi, Philosophy, and Chips and Salsa.