THE ETHICAL CONUNDRUMS OF SELF-DRIVING CARS ARE UNAVOIDABLE, BUT PERHAPS THEY DON’T NEED TO BE

In the United States we live with the tragic fact that conventional, manually driven cars and their drivers are responsible for approximately 30,000 accident-related fatalities every year as a consequence of human error and negligence. Every day, many thousands more lose their lives on roads around the world. Yet societies, both here and abroad, have learned to accept and largely live with the risks that come with the widespread use of automobiles.

When it comes to self-driving cars, ethical conundrums are unavoidable. If an unforeseen circumstance occurs on the road and the choice is to hit one person in order to save ten, what do you do? How would a car make such a decision? What criteria would it use? Who gets to select the criteria, and how are those options ranked? Who gets to choose who lives and who does not? When the choice is between hitting one person to minimize overall loss of human life and hitting ten people, how that choice is made can get very complicated very fast.

Look at it this way: if you were to shuffle around who the one person is and who makes up the group of ten, then leave that decision up to humans, they could flip-flop back and forth all day as they try to quantify the value of one human life over another. Conceivably, a machine could make the decision for us without wavering at all. Perhaps a self-driving car would leverage facial recognition and gait analysis to identify the people involved in a possible accident scenario, then take into account their life expectancy, status in society, and criminal background, and decide based on that. The real question is: do we as a society really want that? If not, why not? And if we do, is that really a better way to go? Does any of this really matter to us, or do we simply learn to live with the risks posed by self-driving cars in the same way we have with conventional, manually driven cars?
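
To see how quickly this becomes a question of who chooses the weights, here is a minimal, purely hypothetical sketch in Python of the kind of utilitarian scoring such a car might run. Every field, weight, and discount below is an assumption invented for illustration, not a real or recommended design:

```python
# Hypothetical sketch of a utilitarian "which group to spare" scorer.
# Every criterion and weight below is an invented assumption; the point
# is that a human has to choose and rank them before the car ever drives.
from dataclasses import dataclass

@dataclass
class Person:
    life_expectancy_years: float  # e.g., inferred via face/gait analysis
    social_status: float          # 0.0 to 1.0 -- and who defines this scale?
    criminal_record: bool         # and who supplies this data?

def utilitarian_score(group: list[Person]) -> float:
    """Higher score = the group this scheme 'prefers' to spare."""
    score = 0.0
    for p in group:
        s = p.life_expectancy_years
        s *= 1.0 + p.social_status   # arbitrary weighting choice
        if p.criminal_record:
            s *= 0.5                 # an ethically fraught discount
        score += s
    return score

# One five-year-old versus ten ninety-year-olds:
child = [Person(life_expectancy_years=75.0, social_status=0.5, criminal_record=False)]
elders = [Person(life_expectancy_years=4.0, social_status=0.5, criminal_record=False)] * 10
winner = "spare the child" if utilitarian_score(child) > utilitarian_score(elders) else "spare the elders"
print(winner)  # -> "spare the child" under these particular weights
```

Laid out in code, the discomfort is obvious: every multiplier is a value judgment that some programmer, company, or regulator had to pick and would have to defend.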

Say, for example, that in a possible accident scenario person one is a five-year-old and the group of ten consists of 90-year-old individuals. Would this change how the decision is made, based on life expectancy? What if the group of ten are recently released prisoners, or some other group society deems less valuable; then what? Or what if person one in that same scenario is a dignitary on a diplomatic visit? Does the car hit the group of ex-prisoners or the diplomat? Imagine another scenario: this time person one has stage four cancer, while the group of ten are all preschool children. Now add another variable: what about the passengers of the self-driving car? Should the car protect the passengers’ lives and safety over those of pedestrians and other cars? If you change who the passengers are in the scenario, from children to 90-year-old elders, diplomats, ex-prisoners, preschoolers, or cancer patients, it gets really complicated and disturbing trying to quantify the value of one human life over another.

Just for the sake of argument, let’s make it slightly more complicated for both self-driving cars and their programmers. Say you have multiple self-driving cars networked together as part of the Internet of Things, each sharing real-time analysis about passengers and pedestrians in the milliseconds preceding an accident. The cars would then be required to make both individual and consensus decisions about who should live and who should not; hence the complexity.
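
To make “individual and consensus decisions” concrete, here is a toy sketch of how such a vote might work. The scores, message format, and majority rule are all invented assumptions for illustration, not a real vehicle-to-vehicle protocol:

```python
# Toy sketch of networked cars voting in the moments before impact.
# The scores, message format, and majority rule are invented assumptions,
# not a real vehicle-to-vehicle protocol.
from collections import Counter

def local_decision(scores: dict[str, float]) -> str:
    """Each car independently picks the action its own analysis favors."""
    return max(scores, key=scores.get)

def consensus(votes: list[str]) -> str:
    """Fleet-level decision: simple majority over the individual votes."""
    return Counter(votes).most_common(1)[0][0]

# Three cars share slightly different sensor-derived assessments:
fleet = {
    "car_A": {"swerve_left": 0.7, "brake_straight": 0.3},
    "car_B": {"swerve_left": 0.4, "brake_straight": 0.6},
    "car_C": {"swerve_left": 0.8, "brake_straight": 0.2},
}
votes = [local_decision(scores) for scores in fleet.values()]
print(consensus(votes))  # -> "swerve_left" (two votes to one)
```

Even in this toy version, the hard part is hidden inside the scores: each car must already have answered the ethical question locally before it can vote, and the entire exchange has to finish in milliseconds.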

While these decisions are disturbing and uncomfortable to think about, the implications of the technology must be considered. As a society, we can’t simply avoid this topic because of the difficult ethical challenges it poses. Instead, I propose changing the focus a bit. How about we simply learn to live with the fact that the use of any technology comes with certain advantages, disadvantages, and risks? There is no perfect safety or security, so why not choose to focus on leveraging this technology to simply make the act of driving safer? The auto industry could choose to work toward moving that figure of 30,000 accident-related fatalities down by 3% per year, every year going forward.
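
For a sense of what that compounding goal would mean, here is the back-of-the-envelope arithmetic. The starting figure is the NHTSA statistic cited above; the projection is simply the math implied by the proposal, not a forecast:

```python
# Back-of-the-envelope: a 3% year-over-year reduction compounds.
fatalities = 30_000.0
for year in range(1, 21):
    fatalities *= 0.97  # 3% fewer than the year before
    if year in (5, 10, 20):
        print(f"Year {year:2d}: ~{fatalities:,.0f} fatalities")
# Year  5: ~25,762 fatalities
# Year 10: ~22,123 fatalities
# Year 20: ~16,314 fatalities
```

Modest as 3% sounds, compounding it reduces the toll by almost half within twenty years, without anyone having to solve the trolley problem first.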

We could choose, as Tesla is doing now, to teach self-driving cars to be better drivers and allow cars to learn how to avoid accidents entirely. Humans can teach cars to be better drivers, and self-driving cars could in turn teach and assist humans to drive more safely and avoid accidents. Will the system be perfect? No, but it’s a start.

More Info:

NHTSA statistics on automobile fatalities

http://www-fars.nhtsa.dot.gov/Main/index.aspx

The Trolley Problem

http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/thomsonTROLLEY.pdf

Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?

http://arxiv.org/pdf/1510.03346v1.pdf

How to Help Self-Driving Cars Make Ethical Decisions

http://www.technologyreview.com/news/539731/how-to-help-self-driving-cars-make-ethical-decisions/

Why Self-Driving Cars Must Be Programmed to Kill

http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/

Drivers Push Tesla’s Autopilot Beyond Its Abilities

http://www.technologyreview.com/news/542651/drivers-push-teslas-autopilot-beyond-its-abilities/

Tesla’s new autopilot system is relying on the cutting edge of machine learning, connectivity and mapping data

http://fortune.com/2015/10/16/how-tesla-autopilot-learns/
