The Ethics of Autonomous Cars

[Image: © JCT 600 (CC SA 2.0)]
If, as I claim, the Trolley Problem is not a good example of the ethics of autonomous driving, then what is?

A while back, I wrote a story here criticizing the role that “The Trolley Problem” plays in our popular conception of the ethics of driverless cars. (Short form: “The first ethical question you should ask if you’re building a driverless car is not ‘Who should it kill?’”) If, as I suggested, the Trolley Problem is not a good example of the ethics of autonomous driving, then what is? This article will try to suggest some examples.

Killing the Runaway Trolley Problem
The nearly half-century-old "Trolley Problem" is a runaway success, one that itself needs to be brought to a halt. (medium.com)

One of the very first questions is: how autonomous should a vehicle be before it is sold to the public? There are two major lines of thought here. The approach taken by companies like Tesla, Audi, BMW and Mercedes-Benz is incremental: as features such as collision avoidance, lane sensing, lane changing and self-parking are deemed safe and reliable, they are rolled out to the public. The cars still need human supervision, but they become gradually more capable. Google and Ford, on the other hand, say that they won't release their self-driving features until the vehicles are fully autonomous, on the grounds that drivers are too ready to turn over control to the automation before it is ready.

Both approaches are, at their heart, ethical positions. Elon Musk has explicitly said that if an autonomous feature is good enough that it will save lives, then it is ethically wrong to withhold it from the public. Google, on the other hand, bases its position on experience with its own employees, who took foolish risks by over-estimating what the cars could do. Knowing that people will do this, Google argues, means that launching prematurely puts the public at unnecessary risk.

Each approach also brings with it a set of follow-on questions. If you side with Tesla and the other gradualist companies, then you have the very real problem of how to ensure that the vehicles are properly supervised by human drivers. BMW's answer is to hold back releasing new features in order to make sure that customers understand their limitations. Tesla, which is more aggressive about releasing features, is also more aggressive about ensuring that the human is paying attention: its recent version 8 software is stricter than earlier versions about checking whether the driver is still attentive, and it turns off Autopilot features if the driver is not.
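To make that supervision requirement concrete, here is a minimal, purely hypothetical sketch of an attention-enforcement policy: the assist feature warns the driver when an attention check fails and disengages for the rest of the trip after repeated ignored warnings. The class names, thresholds, and behavior are illustrative assumptions, not Tesla's actual logic.

```python
from dataclasses import dataclass

# Purely illustrative policy: escalate warnings when the driver ignores
# attention checks, then lock out the driver-assist feature for the rest
# of the trip. Thresholds and names are hypothetical, not any vendor's.

@dataclass
class AttentionPolicy:
    max_ignored_warnings: int = 3   # hypothetical per-trip limit

class DriverAssist:
    def __init__(self, policy: AttentionPolicy):
        self.policy = policy
        self.ignored_warnings = 0
        self.engaged = True
        self.locked_out = False

    def on_attention_check(self, hands_on_wheel: bool) -> str:
        """Called periodically while the assist feature is engaged."""
        if not self.engaged:
            return "assist off"
        if hands_on_wheel:
            self.ignored_warnings = 0
            return "ok"
        self.ignored_warnings += 1
        if self.ignored_warnings >= self.policy.max_ignored_warnings:
            self.engaged = False
            self.locked_out = True   # stays off until the car is parked
            return "assist disengaged for remainder of trip"
        return "warning issued"

# Example: a driver who ignores three consecutive checks loses the feature.
assist = DriverAssist(AttentionPolicy())
for hands in (True, False, False, False):
    print(assist.on_attention_check(hands))
```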

If, on the other hand, you side with Ford and Google, you are left with the question, "What, exactly, does 'fully capable' mean?" Driving in the milder weather and on the straighter streets of California is quite a different experience from the twisty roads and winter weather of Boston, or of Vermont, where fully half the roads are unpaved. The presence or absence of lane markings, and their visibility in bad weather, can be key to whether a human or automated driver is "fully capable".

Road conditions are not the only issue. There is also the question of whether specific areas are suitable for a given use. This has already come up with navigation systems and applications like Waze, which avoid traffic by finding alternate routes, often through residential areas that were not designed for heavy through traffic. By the time cars become fully autonomous, achieving what the Society of Automotive Engineers (SAE) calls "Level 4" (the first level with no human supervision on the SAE's scale from 0–5), navigation and route-planning will be handled automatically. Several car companies are talking about achieving Level 4 in just 4–5 years; Tesla is forecasting it by the end of 2017.

The trade-off between delaying arrival at your destination and putting children and pets at risk is not a simple choice. If the occasional car cuts through a neighborhood, that's one thing; if many do, that's another. If a great many cars cut through, and they have less-than-human capability to detect and distinguish children, toys, pets, and other unexpected encounters, then it could be quite serious.
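As a rough illustration of how such a trade-off could be expressed in route planning, the sketch below adds a penalty to residential street segments so the planner prefers arterial roads unless the detour is very costly. The graph, travel times, and penalty value are invented for the example; no real navigation system's algorithm is implied.

```python
import heapq

# Illustrative route cost: travel time plus a fixed penalty for each
# residential segment, so routine trips avoid cutting through neighborhoods
# unless the detour is very expensive.

def plan_route(graph, start, goal, residential_penalty=120.0):
    """Dijkstra over (travel_seconds + penalty) edge costs.

    graph: {node: [(neighbor, travel_seconds, is_residential), ...]}
    Returns (total_cost, path), or (inf, []) if the goal is unreachable.
    """
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, seconds, is_residential in graph.get(node, []):
            new_cost = cost + seconds + (residential_penalty if is_residential else 0.0)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

# Toy network: the residential shortcut is faster in raw seconds,
# but the penalty steers the planner back onto the arterial road.
graph = {
    "home":       [("arterial", 300, False), ("sidestreet", 200, True)],
    "arterial":   [("work", 300, False)],
    "sidestreet": [("work", 250, True)],
}
print(plan_route(graph, "home", "work"))                           # stays on the arterial
print(plan_route(graph, "home", "work", residential_penalty=0.0))  # cuts through
```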

Vehicles that are not fully capable need to be supervised, which suggests that the vehicle has an obligation to ensure that it is being properly supervised: that there is a human driver, that the driver is capable of taking control, and that the driver is paying attention. Collaboration in the opposite direction may also need to be established.

Organizations like Mothers Against Drunk Driving have been pushing for vehicles that can detect a drunk driver at the wheel and refuse to let them drive. What of drivers who are incapacitated in other ways? As cars become more and more capable, how would this work, and what would be the limits? Besides refusing to start, are there circumstances where a car should take over from a driver, even against the driver's wishes? How about someone who is incapacitated by something other than drunkenness, such as an injury or a heart attack? Or someone who is distracted by texting or some other activity?

Conversely, are there emergency circumstances when a vehicle should be willing to drive through an area where it would not normally be allowed? The difference between Levels 3 and 4 and Level 5 is that at the earlier levels, autonomous driving is allowed only "in certain driving modes", such as highways versus residential streets. Should a driver be able to say, "Get me to General Hospital. I'm having a heart attack," and have a Level 4 car go through an area it is not rated for, perhaps slowly or with hazard lights flashing?
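One way to picture that kind of exception is as an operational-design-domain check with a narrow emergency override, as in the hypothetical sketch below; the area names, "limp mode" limits, and rules are assumptions made for illustration only, not any manufacturer's policy.

```python
# Illustrative sketch of an operational-design-domain (ODD) check with a
# narrow emergency exception: the vehicle may enter an unrated area only
# in a declared emergency, and only in a degraded "limp" mode with hazard
# lights on. All names and rules here are hypothetical.

RATED_AREAS = {"highway", "arterial"}          # where this car is approved to drive
LIMP_MODE = {"max_speed_mph": 15, "hazard_lights": True}

def may_drive(area: str, emergency_declared: bool = False) -> dict:
    if area in RATED_AREAS:
        return {"allowed": True, "mode": "normal"}
    if emergency_declared:
        # e.g. "Get me to General Hospital. I'm having a heart attack."
        return {"allowed": True, "mode": "limp", **LIMP_MODE}
    return {"allowed": False, "mode": "refuse"}

print(may_drive("residential"))                           # normally refused
print(may_drive("residential", emergency_declared=True))  # permitted, slowly, hazards on
```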

One can imagine a scenario where a car calls 911 and asks for help or an escort, or simply informs police of what it is doing. Would this ever be permitted? Remember, Ford and Google are talking about cars that have no steering wheels in five years or so. What if there is a life-and-death emergency and the only available vehicle is a steering-wheel-less Level 4 that is only allowed on certain kinds of roads? Should there be some override? Alternatively, should Level 4 cars even be allowed to have no human controls, and thus be incapable, even in an emergency, of traveling in some areas or circumstances?

Recently the question of who should be able to make cars autonomous has started to rear its head, as after-market autonomous driving kits and upgrades have begun to make their appearance. Hacker George Hotz reverse-engineered a 2016 Acura and added his own control and AI system, resulting in something like $3M in backing for his start-up Comma.ai. Several other startups, such as Perrone Robotics, Drive.ai and AImotive, are working on after-market products targeted at individuals, researchers or fleet operators. Third-party add-ons, especially those such as Hotz's that rely on reverse engineering, would seem to carry a higher risk than those done with the direct involvement of the vehicle's manufacturer. How should that be controlled?

As you can see, there are many ethical questions raised by the advent of autonomous and semi-autonomous vehicles, and the list is growing. As they become more real, and as we see how they fit in and perform in various parts of the world, more questions are bound to arise. They also bring a number of higher-level questions with them, questions about how we as a society should deal with them. Should we pass laws and legislate the answers? What actions should our regulatory agencies take? Should we follow the example of the biomedical fields, which have established generally accepted ethical principles, and where large organizations employ professional ethicists and ethical review boards?

Given the rapid pace of technology, and how fast the capabilities of both autonomous vehicles and artificial intelligence are expanding, it is probably best not to legislate too much, too quickly; we are bound to be responding to yesterday's problems in tomorrow's world. A combination of government regulations and professional and industrial standards is probably a wiser course, and before that, we should engage in a fair amount of research, thought and public debate.

One possibility that intrigues me personally, as a philosophy major and ethicist, is that by striving to answer complex questions simply enough to implement the solutions in software and machine-learning systems, we may be able to do what the Trolley Problem strives for: shed light on problems that philosophers have debated for centuries. The Trolley Problem is too narrow and artificial to accomplish this, but looking at rich, real-world issues and trying to boil them down to a simpler, more manageable form may be more fruitful.

The need to answer these questions is great. Fully autonomous vehicles are on the foreseeable horizon. Still, we have some time. We don't have to get everything perfect on the first try; but the time is at hand to set aside the artificial toy problems and address the real ones. I hope this article provides a plausible starting point for at least some readers.