Vehicles with morality?

Giancarlo Colmenares
Published in Bullshit.IST
Sep 19, 2016 · 4 min read
Stanford Professor Chris Gerdes explains the complexities of programming moral decisions in self-driving cars [1]

Machines will always be machines, or will they? The technical problems of building an autonomous vehicle are being resolved rapidly; right now, teaching a computer how to turn the steering wheel or how to apply the brakes is an “easy” task. However, we all know that computers don’t always behave as we want: sometimes they go crazy, sometimes they freeze, or they simply ask us to stop everything and install a critical update. What will happen to an autonomous vehicle in such cases? Moreover, who is responsible in a crash involving an AV? The person in the other car, or the AV? What if two AVs crash into each other? Simple questions before going deeper into that… it looks like this will be a post full of questions!

Until manufacturers and their engineers produce infallible autonomous vehicles, we must find a way to give them two characteristics that have always been associated only with humans: common sense and morality. Today I want to write about the second one.

In an ideal world, every car stays in its lane, brakes when it should brake, uses its turn signals and keeps below the speed limit. Sadly, we are not in an ideal world, and the problem is no longer how a vehicle can drive by itself, but how it should react to unforeseen events on the street, especially those that could affect the lives of the occupants or of pedestrians. So, what should an autonomous car do when a child suddenly appears in front of it? Swerve around the child and crash into the car in the next lane? Crash into the wall on the other side? Brake as hard as possible, minimizing the harm to the child? I wouldn’t want to be the one to answer those questions. Any avoidance maneuver could create even more danger!

The point is, we as humans act on instinct; it’s just a reaction, and we would probably steer away from the kid without considering what else could happen. But what if it were not a kid but an adult? Or several people? Or an elderly person? What if what appears in front of you is a hole in the street, and the only way to save yourself is to swerve into the next lane, crashing into other cars on your way? Since computers do whatever we program them to do, we must come up with an idea of how an AV should behave in such cases.

Certainly, morality cannot be programmed (or can it?), since it’s not just a matter of following some rules and executing one action or another depending on what’s going on in the surrounding environment. Perhaps machines can learn it! Take a look at this page from MIT, the Moral Machine:

It shows you moral dilemmas and asks which outcome YOU judge most acceptable in each situation; it even lets you design your own psycho-twisted scenarios.

Moving forward, other possibilities should be considered. Maybe the solution is to minimize the number of deaths, or to minimize the probability of death for the occupants, or to minimize the probability of harming others. Such an optimization function would also need priorities or weights in order to reach a decision; a rough sketch of what that might look like follows the Three Laws below. We could think here about the three laws of robotics [2]:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Our robot, the AV, will have a hard time trying to satisfy these rules when facing a dead-end scenario, one in which every possible action harms a human being.
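To make that weighted-objective idea a bit more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the outcome fields, the weights, the candidate maneuvers and their numbers) is a made-up toy model for this post, not how any real autonomous vehicle planner works.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Toy estimate of what a candidate maneuver would cause."""
    expected_deaths: float      # estimated fatalities, anyone involved
    p_occupant_harm: float      # probability the AV's occupants get hurt
    p_third_party_harm: float   # probability pedestrians or other drivers get hurt

def cost(o: Outcome, w_deaths=10.0, w_occupants=3.0, w_others=5.0) -> float:
    """Weighted sum of the harms we want to minimize.
    Choosing these weights IS the moral decision this post is about."""
    return (w_deaths * o.expected_deaths
            + w_occupants * o.p_occupant_harm
            + w_others * o.p_third_party_harm)

# Hypothetical candidate maneuvers for the "child suddenly appears" scenario
candidates = {
    "brake hard":            Outcome(0.2, 0.1, 0.6),
    "swerve into next lane": Outcome(0.1, 0.4, 0.5),
    "swerve into the wall":  Outcome(0.3, 0.9, 0.0),
}

best = min(candidates, key=lambda name: cost(candidates[name]))
print(best)  # the maneuver with the lowest weighted cost, given these made-up numbers
```

The uncomfortable part is not the code; it is that someone has to pick the weights, that is, decide how much each kind of harm counts.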

Perhaps the best approach is to let the human decide what to do in such cases. The problem is that there probably won’t be enough time to make a decision. Or even worse, the human could be undecided and give the car contradictory orders: “go left”, “no, go right”, “no, no, go left”, “stoooop”… This reminds me of the joke about the preacher and his horse:

A preacher trained his horse to go when he said “Thank God” and to stop when he said “Amen”. The preacher mounted the horse and said “Thank God” and went for a ride. When he wanted to stop for lunch, he said “Amen”. He took off again saying “Thank God”.

The horse started going toward the edge of a cliff. The preacher got excited and said “whoa! whoa!” Then he remembered and said “Amen”, and the horse stopped at the edge of the cliff. The preacher was so relieved and grateful that he looked up to heaven and said “Thank God!”

If we as humans are not sure what the best decision is in the scenarios discussed, then we do not yet have the means to teach this behavior to vehicles. Do we really want cars learning from our experience in these cases? Before passing this knowledge on to vehicles, I think we ourselves would probably need a critical update.

[1] “Self-driving car advocates tangle with messy morality”, CNET. https://www.cnet.com/news/self-driving-car-advocates-tangle-with-messy-morality/

[2] Asimov, Isaac (1950). I, Robot.
