Should a car be able to decide if we live or die?

Self-driving cars are an increasingly relevant topic these days, not only because the technology now makes them possible, but also because many questions come up as soon as you think about them in detail.

The idea is simple: the car drives itself. It decides how to act in certain situations, when to overtake another car, when to start and when to stop. But there are decisions more complicated than stopping at a red light: dangerous situations. Analysing data is something computers do far better than humans; large amounts of data can be analysed and processed much faster than any human being could manage. But how do we teach a machine to react to a situation in a manner that is “human”? Should machines even react in a way similar to most humans, or should they rather think in numbers and react based upon the data they have collected?

A common approach is to look at the way humans would decide in such a situation. It’s simple: you ask a group of people what they think is the best reaction in a certain situation, for example an unavoidable crash, and expect this solution to be the most socially accepted and therefore the most successful decision. But automated cars offer us a whole new world of possibilities for enhancing behaviour while driving. They could allow cars to communicate with each other, check each other’s status and check who is sitting in each car. This information could be used, but it is sensitive information. If two cars are facing an unavoidable crash, should it be relevant who is sitting in the individual cars? Should only the number of people be relevant? Should their life expectancy be relevant?

I can still remember a scene from the movie “I, Robot” in which a robot, after an accident, decides to rescue an adult rather than a child, based on its calculation that the adult is more likely to survive. This has an enormous impact on the adult’s life, as he feels guilty for being responsible for the death of a little girl. He doesn’t feel he deserves the right to live; he would rather have died in the accident, even if that had improved the girl’s chances of survival by only a small amount.

Is this the “human” decision he would have made in this situation if he had had a choice? If so, it would mean that if the robot had made the same decision, it would have been the correct one. But I strongly believe that far from all humans would have made such an idealistic decision as the one portrayed in this movie. A lot of people, whether they admit it or not, would have wanted themselves to be rescued, and they should not be judged for this. It is obvious that in a dangerous, life-threatening situation most humans would pick themselves over some other person they have never met. And this is where the opportunities machines give us can be found: they do not consider themselves most important. Isaac Asimov’s laws of robotics state that a robot must protect itself, but only if it does not harm a human by doing so. In these situations a robot can therefore be seen as a neutral entity, and in this position it could be the best judge possible.

As one can easily see, the parameters a car can and should use for computing a solution are still a huge question mark, and they probably won’t be completely resolved in the next couple of years. They are highly dependent on the information that is available to the machine. For example, if less information is available, say the other car is a “regular” car providing no information about the people sitting inside, the self-driving car is restricted in its ability to react. But in the future, with more and more information available, machines should make decisions for us when we are not capable of making the best decision in a specific situation, not only for ourselves but for our fellow citizens as well.