Can the driverless car be ethically programmed? Only if the engineers team up with the philosophers.
—Niclas N. Hundahl, MA in Modern Culture and Cultural Communication, University of Copenhagen, Denmark.
During the panel discussion at The New Human Symposium, the question of the driverless car was raised. The general response was negative, for a variety of reasons: surveillance, the loss of human agency, and the potential for the technology to malfunction. The last point raises a whole range of ethical problems, because if a driverless car crashes and injures or kills somebody, who is to blame? Can we accept giving technology such responsibility over our lives?
The interesting question here is why the emphasis is placed on the driverless car potentially malfunctioning, when conventional cars are also susceptible to malfunction. Cars of all sorts can break down and endanger pedestrians or other drivers. And it does not end there, for malfunctioning is not exclusive to the technological domain; human beings can malfunction as well. A variety of factors can impair human judgment behind the wheel: intoxication, emotion, injury, stress, to mention a few, all with potentially fatal results.
According to a report from the World Health Organization (WHO), 1,247,021 people die annually as a result of ‘road use’, and of those deaths, pedestrians account for 274,345. That is, more than a quarter of a million people die every year simply as a result of proximity to the road, and that is not counting the tens of millions of non-fatal injuries. It seems that human beings might not be the best-suited candidates for driving cars. With digital technology, the human driver is being challenged. The driverless car is not actually driverless; it simply has no external driver, but an internal one. This means that while an ordinary car needs a human driver, the ‘driverless’ car can drive itself. And several advantages come with this.
The first advantage is one of attention. A computer needs to concern itself with driving, and driving only. It does not have to worry about the phone ringing, the kids misbehaving in the backseat, or swearing at the driver who cut in front of it. If we can program it in the right way, and have the hardware to support it, the computer will always be the best possible driver. The second advantage is one of information: if every car has an internal computer and is linked to a central database or monitoring system, every car knows the location of every other car and can position itself accordingly. There will be no more surprises, no unexpected cars cutting in on you, and, most importantly, head-on collisions would in theory become nigh impossible.
Of course, this is at the moment merely optimistic speculation. As the panel at The New Human Symposium brought up, we do need to be aware of the questions of surveillance, of giving up human agency, and of not repeating old-fashioned consumerism. But rather than reacting with fear and shying away from the driverless car, both metaphorically and practically, we should instead insert ourselves into the process of designing it. As humanities scholars, we are well suited to tackling some of the ethical problems that arise with driverless cars: for example, how the car should react in certain situations, and what the relationship between the program and the human passengers should be.
One of the most pressing dilemmas is that of algorithmic morality: if forced into a situation where a fatal collision is unavoidable, how should the car’s programming act? Should the car steer into a wall, killing its human occupant but hitting nobody else, or hit another car or a pedestrian, saving its occupant but killing a bystander? What if there are five people in a car set on a course to hit a car with only one person in it? Should the car ever consider the age, health, education, criminal record, or life expectancy of the people involved in the impending crash? These and many other variables may or may not be programmed into the algorithm, but somebody has to decide exactly how the car is to evaluate a situation in which the loss of human life is unavoidable. The driverless car must, paradoxical as it may sound, have a license to kill.
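To make concrete what “deciding exactly how the car shall evaluate” such a situation could mean, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from any real vehicle system; the outcome model, the names, and above all the weights are illustrative assumptions. The point is that the ethical judgment does not disappear into the machine: it survives as a pair of numbers somebody must choose.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted human cost (hypothetical model)."""
    description: str
    occupants_harmed: int   # people inside the car
    bystanders_harmed: int  # people outside the car

def evaluate(outcomes, occupant_weight=1.0, bystander_weight=1.0):
    """Pick the maneuver with the lowest weighted harm.

    The weights encode an ethical stance: bystander_weight > occupant_weight
    prioritizes pedestrians over the car's own passengers, and vice versa.
    Choosing these numbers is precisely the kind of question that cannot be
    left to engineers alone.
    """
    return min(
        outcomes,
        key=lambda o: occupant_weight * o.occupants_harmed
                    + bystander_weight * o.bystanders_harmed,
    )

# A stylized dilemma: swerve into a wall or continue into a pedestrian.
choices = [
    Outcome("swerve into wall", occupants_harmed=1, bystanders_harmed=0),
    Outcome("continue ahead", occupants_harmed=0, bystanders_harmed=1),
]

# Weighting bystanders more heavily makes the car sacrifice its occupant;
# weighting occupants more heavily makes it hit the pedestrian.
print(evaluate(choices, bystander_weight=2.0).description)  # swerve into wall
print(evaluate(choices, occupant_weight=3.0).description)   # continue ahead
```

Note that with equal weights the two maneuvers score identically, and the tie is broken arbitrarily by list order: even “no decision” is a decision someone has programmed.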
These paradoxes and problems are not new, at least not to philosophers. What is new is that this time they concern not only human-to-human interaction but human-to-machine interaction, and that this time the discussions will not remain in a theoretical realm but will be implemented in active law-making and algorithmic design, because we may actually be able to control exactly how one of the actors behaves. The discussion of how we program these cars cannot take place only between politicians, engineers, and lawyers; philosophers and cultural scholars have knowledge of and experience with these questions that is too crucial to omit.
When we attempt to think about a new human, we must always consider the new technology as well. As we invent new technology, our societies change alongside it, and we must therefore always engage with the new possibilities, and consider how they are changing our world.
Niclas N. Hundahl holds an MA in Modern Culture and Cultural Communication from the University of Copenhagen, Denmark. His main research interests include posthumanism, technology, prostheses, aesthetics, and the intersection between popular culture and academia.