The problem with solving the driverless “trolley problem”

A trolley is barreling at uncontrollable speed toward a group of patients who have wandered onto the track. The driver must choose between hitting them and diverting the trolley onto a side track where a small baby has taken up residence. What should the driver do?

Many equally fantastic trolley scenarios have been dreamed up, and each one forces the driver to choose which person or people are more deserving of life. Today we are looking at replacing the trolley driver with a computer capable of recognizing situations and making decisions at remarkable speed. So who tells the computer how to make decisions like this? It has become almost impossible not to bring up trolley problems when discussing the future of autonomous vehicles (Autos), yet these discussions almost always ignore the technical realities Autos actually face.

There is not a Google employee teaching a classroom full of cars whom to hit in special circumstances; teaching autonomous cars is nothing like driver's ed. Machine learning is a field of programming focused on getting machines to act without being explicitly told what to do. Programmers use machine learning to teach computers how to interpret sensor data and act in ways we consider safe driving. This doesn't make trolley rules impossible to form, but they would require simulated trolley situations for the car to learn from. The distinction matters: Autos are not magic boxes we can instruct to protect occupants and kill the elderly in one circumstance and do the opposite in another. Instead, an Auto builds up a set of decisions in response to patterns of input. If trolley patterns are taught, the Auto will always be on the lookout for trolley conditions, and that raises the concern of false positives.
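To make the distinction concrete, here is a minimal, purely hypothetical sketch of what "teaching" trolley behavior would mean in practice. The feature names, labels, and data below are invented for illustration; a real Auto's pipeline is vastly more complex, but the shape is the same: the car is fit to examples, not handed rules.

```python
# Hypothetical sketch: a model fit to simulated sensor snapshots.
# Feature names, labels, and data here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row is one simulated snapshot: [speed, gap_ahead, gap_left, gap_right]
X = rng.random((10_000, 4))
# Labels come from the simulation: 0 = brake, 1 = swerve left, 2 = swerve right.
# The model never receives a rule like "protect occupants"; it only sees examples.
y = rng.integers(0, 3, size=10_000)

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# At drive time the model maps raw input to an action, and it will fire on any
# pattern that merely resembles its training scenarios, real or not.
action = model.predict(rng.random((1, 4)))
print(action)
```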

In medicine, a test for any disease has a small likelihood of returning positive results for healthy patients. Because in most populations far more people lack a disease than have it, many of the positive results end up belonging to healthy individuals. Mammograms, for example, leave a woman with roughly a 61 percent chance of at least one false-positive result over a decade of annual screenings (Hubbard et al., 2011). In medicine, the cost of a false positive is additional screening that will very likely come back negative, and a second opinion is always possible. Thus, false positives are not a massive threat to the health industry. False positives for trolley situations, on the other hand, do seem to pose a massive threat.
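A quick back-of-the-envelope calculation shows why low prevalence produces this effect. The prevalence and accuracy figures below are assumptions chosen for illustration, not numbers from the mammography study:

```python
# Illustrative base-rate arithmetic; all numbers are assumptions.
prevalence = 0.01            # 1% of patients have the disease
sensitivity = 0.99           # test catches 99% of true cases
false_positive_rate = 0.05   # test wrongly flags 5% of healthy patients

true_positives = prevalence * sensitivity                  # 0.0099
false_positives = (1 - prevalence) * false_positive_rate   # 0.0495

# Bayes' rule: probability a positive result reflects real disease
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # ~16.7%
```

Even with a highly accurate test, roughly five out of six positives here belong to healthy people, purely because the disease is rare.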

Autos do not have the luxury of a second screening; they must act the moment their algorithms decide a trolley situation is occurring. Computers are not infallible, and strange inputs could lead to trolley conclusions. A false positive will cause an Auto to kill the occupants or pedestrians it has been taught to sacrifice. If we teach the computer to be trigger-happy with such events, it will no longer prioritize the safer outcome. And we can be relatively certain the false positive rate will be high, given that real trolley situations are almost unheard of.

Additionally, heroic situations where a driver swerves to avoid a bus full of children and dooms themselves in the process are very likely the product of unsafe driving, such as speeding or not paying attention. The US Department of Transportation found that 94 percent of accidents were due to human error like reckless or inattentive driving (Singh, 2008); the remainder were due mostly to vehicle malfunction and unsafe conditions like glare and slick roads. The safety report had no comment on accidents involving moral decisions. In the 2 million miles Google's Autos have driven, no lethal accident has occurred, and only one recorded accident can be attributed to a fault in the program. Google still considers its cars to be in their infancy, and they already have a better track record than the average human; its driverless car division, now called Waymo, is only 8 years old.
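The base-rate problem becomes far worse at driving scale. As a sketch (every number here is an assumption for illustration), suppose a genuine trolley situation arises once in 100 million driving decisions, and the detector raises a false alarm only once in 100,000 decisions, an extremely generous specificity:

```python
# Illustrative arithmetic; all numbers are assumptions.
decisions = 100_000_000                     # sensor decisions over many miles

true_events = decisions / 100_000_000       # 1 genuine trolley situation
false_alarms = decisions / 100_000          # 1,000 mistaken detections

print(f"real events:  {true_events:.0f}")   # 1
print(f"false alarms: {false_alarms:.0f}")  # 1000
```

Under these assumptions, for every real trolley situation the car would confront, it would "detect" a thousand imaginary ones, and each detection is an instruction to harm someone.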

By trying to solve the trolley problem for Autos, we will very likely create a system more dangerous than one that simply ignores it. Presented with an unavoidable collision, an Auto today will brake as hard as it can, and that is the best we should hope for, lest cars go on occasional killing sprees when they misunderstand situations that are otherwise safe. One thing is clear: Autos cannot come fast enough. Nearly all wrecks are the fault of the human driving the car and no one else; if Autos can avoid these, we should pursue them as fast as possible.

Works Cited

1. Hubbard, R. A., et al. "Cumulative Probability of False-Positive Recall or Biopsy Recommendation after 10 Years of Screening Mammography: A Cohort Study." Annals of Internal Medicine, vol. 155, no. 8, 2011, pp. 481–92.

2. Singh, Santokh. "National Motor Vehicle Crash Causation Survey: Report to Congress." Crash Stats, National Highway Traffic Safety Administration, July 2008. Web. 12 Mar. 2017.