Navigating Around The Question Of Crash Optimization
The future is closer than it may appear. The technology for fully autonomous vehicles (AVs) is getting better, and regulators may soon create a path for AVs to operate on the road without a human driver. As the future becomes reality, more questions will be raised as to how AVs will operate without humans in control. In particular, what happens when things go wrong and a crash occurs? How will an AV act differently from a human driver in a crash scenario?
Widespread use of AVs is expected to reduce the total number of accidents, and likely to eliminate tens of thousands of deaths per year due to human error, but there will still be unavoidable crashes (because physics). So how will AVs decide which way to turn, how hard to brake, and whom to endanger when they cannot avoid a crash? Will AVs be programmed with special algorithms for “crash optimization”? Who will decide?
These questions have led to much debate around the ethics of self-driving cars and the need for an open discussion of how crash optimization algorithms will work. Professor Patrick Lin and others have written extensively and thoughtfully on the ethical issues raised by self-driving cars — including analyses of complex dilemmas involving trolleys, baby strollers, narrow bridges and one-lane tunnels. These dilemmas raise more questions than they answer, and some have brushed them aside as unlikely hypotheticals. Others have noted that these are classic ethical dilemmas for a reason: there is no good answer.
Thus, whether these hypotheticals are likely to occur, or morally important, may not ultimately matter, because even if we debate the issues endlessly among ourselves, governments, companies and AV owners are unlikely to reach a consensus as to who should live or die in unavoidable crashes. The answer will always depend: am I in the car, or am I the pedestrian? Is it my child on the sidewalk? We are human, after all.
So how can we move forward to reap the benefits of AVs while also setting public expectations for what will happen when AVs crash? How can we write algorithms for crash optimization if there is no good answer, or at least no answer we can agree on, as to which way the AV should swerve: toward the baby stroller or the old person, the SUV or the motorcycle, off the bridge or into the school bus?
One path forward might be the use of machine learning to allow self-driving cars to develop crash optimization algorithms the same way they learn other aspects of driving. How would this work? Would it still require humans to pick and choose the training data, involving the same need for ethical judgments? Or could we feed in millions of miles of human driving and let the system extrapolate from there? Would using machine learning for crash optimization be different from how AVs already use it to follow roads and navigate obstacles? And how would we test it and evaluate the end results?
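To make that idea concrete, one way to read “feed in millions of miles of human driving and let the system extrapolate” is as imitation learning (sometimes called behavioral cloning): treat each logged emergency maneuver as a labeled example and train a model to reproduce the human response. The sketch below is purely illustrative; the feature names, labels and synthetic data are assumptions made for the example, not a description of any real AV system.

```python
# Illustrative sketch: a "crash response" learned by imitation from logged human
# driving, framed as a simple classification problem. All features, labels and
# data here are hypothetical placeholders, not a real AV pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for each logged emergency event:
# [closing_speed_mps, time_to_collision_s, clearance_left_m, clearance_right_m]
n_events = 5_000
X = np.column_stack([
    rng.uniform(5, 40, n_events),     # closing speed
    rng.uniform(0.2, 3.0, n_events),  # time to collision
    rng.uniform(0.0, 4.0, n_events),  # lateral clearance to the left
    rng.uniform(0.0, 4.0, n_events),  # lateral clearance to the right
])

# Hypothetical human responses: 0 = brake only, 1 = swerve left, 2 = swerve right.
# We fabricate a crude "human policy" just so the example runs end to end.
def synthetic_human_response(event):
    speed, ttc, left, right = event
    if ttc > 1.5 or max(left, right) < 1.0:
        return 0                        # enough time, or nowhere to go: brake
    return 1 if left > right else 2     # otherwise swerve toward more room

y = np.array([synthetic_human_response(e) for e in X])

# Behavioral cloning: fit a classifier that imitates the logged human responses.
policy = LogisticRegression(max_iter=1000)
policy.fit(X, y)

# "Deploying" the learned policy on a new, unseen emergency scenario.
new_event = np.array([[30.0, 0.8, 0.5, 2.5]])   # fast, imminent, room on the right
print(policy.predict(new_event))                # e.g. [2] -> swerve right
print(policy.predict_proba(new_event).round(3))
```

Even this toy framing shows where the ethical judgments hide: someone still chooses which features matter, which logged responses count as good examples, and what it means for the learned policy to pass a test.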
We are willing to live with the status quo of how humans drive today. Would a learned crash response be any worse than that status quo? Presumably it would be better, since the AV will not be distracted, impaired or drowsy. Many have noted that humans don’t make explicit crash decisions, that they “react” rather than “decide” whether to brake or which way to swerve, but whether it’s a decision or not, there is learning there that might be an acceptable baseline.
Ethicists note that AVs can execute explicit ethical decisions with a precision humans cannot, and that they should therefore do “better” than humans can in the same situation. In other words, why not set the bar higher than what can be learned from human driving? But that assumes we could agree on what is “better” and program it in with the precision required. Until then, perhaps a path forward involves a more circuitous route.
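For contrast, “programming it in with the precision required” would look less like learning and more like an explicit cost function: enumerate the feasible maneuvers, score each predicted outcome with hand-chosen weights, and pick the minimum. The sketch below is hypothetical; every number in it is exactly the kind of value we have no agreed way to choose.

```python
# Hypothetical sketch of an explicit, rule-based crash optimization step:
# score each feasible maneuver with hand-picked outcome weights and choose
# the lowest expected cost. The maneuvers, outcomes and weights are
# illustrative placeholders; choosing them is the unsolved ethical problem.

# Hand-chosen "cost" of each predicted outcome (who picks these, and how?)
OUTCOME_COST = {
    "no_collision": 0.0,
    "property_damage": 1.0,
    "occupant_injury": 50.0,
    "pedestrian_injury": 80.0,
}

# Predicted (outcome, probability) pairs per maneuver for one hypothetical
# scenario, assumed to come from upstream perception and prediction.
scenario = {
    "brake_only":   [("occupant_injury", 0.6), ("no_collision", 0.4)],
    "swerve_left":  [("pedestrian_injury", 0.3), ("no_collision", 0.7)],
    "swerve_right": [("property_damage", 0.9), ("no_collision", 0.1)],
}

def expected_cost(outcomes):
    """Expected cost of a maneuver given its (outcome, probability) pairs."""
    return sum(OUTCOME_COST[name] * p for name, p in outcomes)

best = min(scenario, key=lambda m: expected_cost(scenario[m]))
for maneuver, outcomes in scenario.items():
    print(f"{maneuver:12s} expected cost = {expected_cost(outcomes):.1f}")
print("chosen maneuver:", best)
```

The arithmetic here is trivial; the disagreement is entirely about the table of weights.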
Who knows, maybe like DeepMind’s AlphaGo playing a game of Go against a human champion, an AV might think of a move we hadn’t considered, and maybe then it really will be a “better” outcome.