Driverless Cars & Moral Philosophy

Note: 80% of this article was written before someone was killed by an Uber driverless car; that event, however, demonstrates that the issues discussed below are very real.

There is a well-known thought experiment in moral philosophy called the trolley problem. It was intended to challenge the ethical theory known as Utilitarianism, which states that:

“it is the greatest happiness of the greatest number that is the measure of right and wrong” (Jeremy Bentham).

In recent years, with the development of AI, what started as a purely intellectual problem has become a perfect example of precisely why philosophy, ethics, and morals matter more than ever. Furthermore, it highlights that we cannot simply place our faith in machines: at the forefront of developing technologies we need humans, and those humans need to be educated in the ethical and political impact of these developments, with direct accountability to wider society.

The Trolley Problem

Created in its current form by Philippa Foot in 1967, the most basic example of this problem is simple: a tram is racing out of control along a track to which five people are tied; as it stands, the tram is going to crush those five people beneath its wheels. Fortunately, you notice the event unfolding and see that you are able to pull a lever which will divert the tram onto a separate track. Unfortunately, the second track also has a person tied to it, but only one. Would you pull the lever?

The obvious utilitarian answer is that of course you would pull the lever, as the death of one person has to be less bad than the deaths of five. In my experience discussing this, most people agree.
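
To see how blunt that calculus is once it has to be written down in advance, here is a minimal sketch of a purely utilitarian rule in Python; the function and its names are purely illustrative:

    def choose_action(deaths_if_nothing, deaths_if_lever):
        # A purely utilitarian rule: pick whichever option kills fewer people.
        if deaths_if_lever < deaths_if_nothing:
            return "pull the lever"
        return "do nothing"

    # The classic case: five deaths versus one.
    print(choose_action(deaths_if_nothing=5, deaths_if_lever=1))  # pull the lever

Notice that the sketch has no concept of how the deaths come about, only how many there are; the variants below show why that matters.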

Let us re-frame the question slightly. In what is traditionally called the ‘fat man’ variant, it is posited that this time there is only a single track, and you find yourself stood on a bridge next to someone of sufficient mass that, if you pushed them off the bridge, they would block the tram before it killed the five people, but would themselves die from the fall. Would you push them?

In my experience, people are far less likely to agree. Despite the fact that the number of lives saved and lost is the same, people seem more willing to pull a lever that will lead to someone’s death than to directly push someone to their death. This is in itself an alarming conclusion in a world where warfare is becoming more and more remote.

A final variation is known as the ‘fat villain’. The situation is the same as before, except that this time the person stood next to you is, in fact, the villain who tied the five people to the tracks in the first place. Would you push them?

From experience, most people are willing to push the villain. Why? They deserve it. But you don’t agree with the death penalty, surely? No, but this will actually save the victims. But the last person would have saved them and you wouldn’t push them?! Hmmm…..

The problem is only complicated further when you begin to consider who the people are. What if the five on track one are all nonagenarians but the one on the second track is a 10-year-old child? What if the five people were criminals? What if the one is on the verge of discovering a cure for cancer?
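
A hypothetical sketch makes the difficulty concrete. The moment the calculus stops being a simple count, someone has to choose numbers, and every weight below is an arbitrary judgment of mine rather than a fact:

    def weighted_harm(people):
        # Hypothetical weights: every number here is a moral judgment that
        # someone had to make and write down; none of them is a fact.
        weights = {"child": 1.5, "adult": 1.0, "nonagenarian": 0.7, "criminal": 0.5}
        return sum(weights[category] for category in people)

    # Five nonagenarians still outweigh one child under these weights...
    print(weighted_harm(["nonagenarian"] * 5))  # 3.5
    print(weighted_harm(["child"]))             # 1.5
    # ...but raise the child's weight to 4.0 and the decision flips.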

Why should we care about this?

Skip forward to 2018, where we quite literally have cars driven around our streets by computers and armed drones flying over our heads. We can no longer give up in exasperation, safe in the knowledge that we will never have to make such a decision. A computer cannot cross that bridge when it comes to it; it must have its instructions in advance.

The disproportionate media coverage of crimes committed against certain social groups, ‘missing white woman syndrome’, is well established, so it is easy to imagine a direct motive for a Google, Uber or Apple car to make a certain choice if put in a position of favouring one person’s safety over another. We also have examples of algorithms being socially discriminatory by accident, because humans had not fully thought through the effects of their implementation. Many western countries have already applied facial recognition technologies to spot criminals in a crowd. China has recently established a system for socially ranking individuals. All the pieces are there for a car to choose the life of one person over another.

What has changed is the point at which the decision is made. Pre-meditated murder is treated significantly differently in law from manslaughter. In most cases, if someone were to swerve to avoid one car and ended up accidentally killing someone, they would be responsible for that death but would not be found culpable and would not be punished. In truth, morally subjective decisions are made all the time, but the intention is that, above a certain level of seriousness, they are made by judges and juries, within the confines of legislation set by a democratically elected parliament, and that these retrospective judgments will inform the future actions of others.

But now that these decisions are being pre-programmed, there is the very real possibility of making difficult moral choices based on hypothetical scenarios and then applying the results if and when a suitably similar scenario arises. Serious questions follow: who is making these decisions? What system of checks and balances governs the decision-making process? Who is culpable for the results? An argument can be made that changes to the variables of certain algorithms should be passed through third-party committees, potentially even through parliament; a sketch of what that might look like follows below.

This notion of decisions being shifted into algorithms has been referred to as ‘maths washing’, and it has much broader, far-reaching effects. The generalised problem is that removing decision-making powers from humans can easily appear to make a decision impartial, as it is now being taken by a computer, by a codified set of rules. But that set of rules was created by a human, so in reality we are just shifting the decision-making power to a smaller number of people, probably people who are in no way impartial. This has been happening in finance for years; it is happening in our social media streams, which in turn affect election outcomes; it happens in the primacy of Google search results, which shapes our whole system of ideas.
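
As a purely illustrative sketch of that separation, the morally loaded variables could live in a published, version-controlled policy document that a committee reviews and signs off, rather than in the codebase itself. Everything below (the file format, the field names, the committee) is an assumption of mine, not an existing system:

    import json

    # Hypothetical policy file: the morally loaded parameters live in a
    # reviewable, versioned document rather than buried in the codebase.
    POLICY = json.loads("""
    {
      "version": "1.0",
      "approved_by": "independent ethics committee",
      "objective": "minimise expected casualties",
      "personal_features_considered": []
    }
    """)

    def decide(options):
        # 'options' maps each possible action to its expected casualties.
        # The decision rule is the one the published policy names, not a
        # constant hidden somewhere in the code.
        if POLICY["objective"] == "minimise expected casualties":
            return min(options, key=options.get)
        raise ValueError("no decision rule for this policy objective")

    print(decide({"stay_in_lane": 5, "swerve": 1}))  # swerve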

Death by driverless car is an extreme example (I actually believe that the number of vehicular deaths will be significantly reduced by the advent of driverless cars), but it illustrates that the decision-making power of algorithms is real and current. To be clear, I do not believe that this inevitably leads to a dystopian nightmare. There is a wonderful opportunity to consider, codify and regulate moral decisions in a way that has not been possible before; but these decisions should be made as a society, not by a handful of tech companies.

A report by the German Ethics Commission proposes a list of criteria for driverless cars; their solution is as follows:

In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.
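
Taken literally, this rule at least lends itself to codification. In the hypothetical sketch below (mine, not the Commission’s), compliance is built into the data model itself, so personal features are absent by construction:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Outcome:
        # An outcome carries an action and a casualty estimate and nothing
        # else: no field for age, gender or any other personal feature can
        # even exist here, so the decision cannot discriminate on them.
        action: str
        expected_casualties: int

    def decide(outcomes):
        # With personal features absent by construction, only numbers remain.
        return min(outcomes, key=lambda o: o.expected_casualties)

    print(decide([Outcome("brake_in_lane", 5), Outcome("swerve_left", 1)]).action)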

Our society is full of examples of decisions being made based on personal features. OAPs get discounts, as do children; children’s charities receive more donations than homeless charities; men earn more than women; mental constitution is taken into consideration when sentencing for a crime. Ruling out distinguishing between individuals would be a simple solution to a very difficult problem, but not necessarily the best. There are situations in which we may want an algorithm to treat demographics or individuals differently; there are situations in which we could employ an algorithm specifically to reduce discrimination. There may be situations where extremely difficult moral decisions need to be made, but where either option is still better than a random selection.

The point is that we should not shy away from these decisions, or pretend that they are not being made or are not significant. But they must be made out in the open, where everyone can understand the hows and whys of them and have the opportunity for input. This may mean slowing down the rapid development of some tech, or governments investing more money to keep up, but these are more than acceptable costs of avoiding a world where private businesses can co-opt the reins of morality.