Reading 12: Our Driverless Future

Waymo self-driving car. Source.

More than 37,000 people were killed in car crashes in the United States in 2016, as one of our readings from The Atlantic notes. Traffic is also a massive problem in the US; just ask anyone in LA.

Autonomous vehicles, or AVs, are one proposed technological advance that proponents argue will solve both of these problems.

As the article linked above also mentions, the vast majority of these fatal crashes can be attributed to driver error. It’s reasonable to think that a well-developed AI system could avoid these problems. An AI won’t get tired. An AI doesn’t get distracted by its smartphone. An AI can’t drive intoxicated.

However, we’re definitely not there yet. The tragedies discussed in the articles about the Uber accident in Tempe and the Tesla accident in Mountain View make that clear. It’s difficult to know whether a person would have avoided the fatal collision in Tempe, since it was dark and there was little time to react once Elaine Herzberg stepped onto the road, but the AI system certainly should have been able to, as the Wired article linked above discusses. (The AI wasn’t fully to blame: Uber handicapped the car’s safety features in order to rush testing.) A human driver, however, certainly would have avoided the fatal crash in Mountain View, where the AI system swerved into the highway dividing barrier.

If nothing else, these incidents certainly illustrate that the AI powering self-driving cars is not yet infallible.

However, many of the arguments for driverless cars focus on all of the lives they will save someday and how much safer driving will become. I do not doubt that we may eventually reach the point of having extremely well-refined driving systems that can communicate with one another and avoid nearly all collisions. But we still have a long way to go, and we shouldn’t be cavalier about taking human lives to get there.

As this article from The Atlantic discusses, nobody needs self-driving cars right now. We can get by just fine without them for the foreseeable future. In fact, we would likely be better served in the short term by simply investing in infrastructure and public transportation to solve the problems introduced at the beginning of this post.

We can get to a world of autonomous vehicles, but we shouldn’t disregard safety to get there. In his article, Jacob Silverman describes how James Lentz, the CEO of Toyota North America, essentially claims that the lives already lost are necessary for innovation, an unavoidable cost of progress. This blatant lack of compassion for human life cannot be our best course of action. We can minimize lives lost and still innovate, even if it takes a little longer.

Of course, that opens a whole other can of worms about how self-driving cars should be programmed to handle difficult life-and-death situations. How can a company decide who deserves to live and who deserves to die? In his Atlantic article on the trolley problem, Ian Bogost is right that it may not be the most pressing issue facing autonomous vehicles today, but it is an issue, and one that will have to be addressed one way or another.

Real drivers on the road are faced with decisions like this, so there is no reason to believe that in a world where all vehicles are self-driving, the AI would not have to make similar decisions. Even though self-driving cars may be better at avoiding collisions, poor weather and road conditions, coupled with possible malfunctions, will realistically still cause these situations to develop.

And frankly, there is no clear solution to this problem, and certainly no easy one. Many people advocate a utilitarian approach: when faced with such a situation, save the most lives. However, I’d wager almost no one would ride in a car with an AI programmed to potentially decide to kill them if doing so might save two pedestrians. Personally, I wouldn’t buy such a car. As living things, we are biologically wired for self-preservation and averse to such risks.
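To make the objection concrete, here is a toy sketch of what a purely utilitarian crash policy could look like. Everything in it is hypothetical: the class, the maneuvers, and the numbers are invented for illustration, and no real AV system is known to work this way.

```python
# Toy illustration of a strictly utilitarian crash policy.
# All names and numbers here are hypothetical, invented for this post.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_deaths: float  # estimated fatalities if this maneuver is chosen
    passenger_dies: bool    # whether the car's own passenger is among them

def choose_maneuver(outcomes):
    # A strict utilitarian rule: minimize expected deaths,
    # with no special weight given to the car's own passenger.
    return min(outcomes, key=lambda o: o.expected_deaths)

options = [
    Outcome("swerve into the barrier", expected_deaths=1.0, passenger_dies=True),
    Outcome("continue toward the pedestrians", expected_deaths=2.0, passenger_dies=False),
]

print(choose_maneuver(options).description)
# -> "swerve into the barrier": the policy sacrifices the passenger
```

Written out this way, the problem is obvious: the rule is explicitly allowed to pick the outcome in which the buyer of the car dies, which is exactly why I suspect such a car wouldn’t sell.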

Another tricky aspect of self-driving cars is the legality of a collision. There’s no driver to take the fault for a poor decision that led to a crash. You can’t sue software, but you can certainly blame the company behind it. Volvo has already agreed to take responsibility for crashes involving its self-driving system. But frankly, establishing the precedent for self-driving car collision liability is going to be a very controversial and heated fight. It can’t be nobody’s fault. Someone has to pay for damages, but who?

Politically, there is a lot of catching up to do with current autonomous vehicle development, as well as preparation for a future where self-driving cars are ubiquitous. The government should certainly step in to some extent to prevent avoidable casualties from the real-world testing of self-driving cars. It will be difficult to establish a set of standards for the technology and the industry, simply because development varies so much from company to company. However, baseline regulations can certainly emerge to help protect citizens. For example, the Tempe crash could likely have been avoided had Uber not disabled the emergency braking, removed LIDAR sensors, and failed to put a dedicated safety driver in the car. The government could require emergency braking systems and dedicated safety drivers: two straightforward legislative restrictions that might prevent future loss of life.

Personally, I would be open to owning a self-driving car in the future, but that comes with many caveats. For example, I would not want a self-driving car without the option to drive myself. I would also want to be able to influence the way the car drives; that is, I would want control over how fast it decides to drive under different conditions. Furthermore, I’d want to know that the car would not decide to wreck and put my life in harm’s way when faced with tough decisions. Alternatively, I could accept a vehicle that decides to crash to save others if its safety features were good enough that my risk of serious injury or death was minimized. So basically, I’m not ready to have a self-driving car, but there’s a possibility I’d accept one in the future, depending on its progress.
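As a sketch of what that kind of owner control might look like, here is a hypothetical preferences structure. The field names and values are invented for illustration and are not drawn from any real vehicle’s settings.

```python
# Hypothetical owner-preference settings for a self-driving car.
# Field names and values are invented for illustration only.

from dataclasses import dataclass

@dataclass
class DrivingPreferences:
    manual_override_available: bool  # owner can always take the wheel
    max_speed_dry_mph: int           # speed cap in good conditions
    max_speed_rain_mph: int          # stricter cap in poor weather
    self_sacrifice_allowed: bool     # may the car choose an outcome that harms its passenger?

my_prefs = DrivingPreferences(
    manual_override_available=True,
    max_speed_dry_mph=70,
    max_speed_rain_mph=50,
    self_sacrifice_allowed=False,
)
```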
