This is the solution to the infamous self-driving car ethics paradox

Should a self-driving car kill its occupants in order to save 10 people?

That’s the question many are asking as self-driving cars rapidly become reality. And just last week, an admittedly fascinating article surfaced on MIT Technology Review regarding a popular catch-22 ethics dilemma about self-driving cars. Put another way, should self-driving cars always put the needs of the many above those of the few, or the one? Mr. Spock would certainly agree. But then, as even he admitted, logic is only the beginning of wisdom; a logical answer can still be very wrong in certain situations.

Here’s why this is really a non-issue with a simple, obvious solution already at hand, and why it mustn’t be allowed to slow the development of self-driving cars or the legislation they will require.

First, it’s important to fully flesh out this ethical dilemma, which is more complex — and interesting — than it may appear at first blush. The decoy solution is to suggest that all self-driving cars should obviously drive in a manner that minimizes loss of life. In the proposed hypothetical, then, every self-driving car should take whatever action is necessary to avoid killing n+1 or more people, where n equals the number of occupants in the car, even if that means sacrificing those occupants. But this would be remarkably myopic, not to mention sub-optimal, and just plain wrong.
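To make the decoy concrete, here is a minimal sketch of that strict "minimize loss of life" rule. It's purely illustrative Python; the function and its inputs are hypothetical and don't correspond to any real autonomous-driving system.

```python
# Purely illustrative sketch of the "decoy" rule; none of these names
# come from any real self-driving software.

def utilitarian_choice(occupants: int, others_at_risk: int) -> str:
    """Strict 'minimize loss of life': sacrifice the car's occupants
    whenever that spares a larger number of people."""
    if others_at_risk > occupants:  # i.e. at least n + 1 others, n = occupants
        return "sacrifice occupants"
    return "protect occupants"

print(utilitarian_choice(occupants=1, others_at_risk=10))  # -> sacrifice occupants
```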

The paradox

The problem is the secondary result this solution would produce: if all prospective buyers of self-driving cars know that the car will always be programmed to kill its occupants in order to save a greater number of other people, adoption of self-driving cars will suffer. Things that try to kill you are typically not very popular with prospective buyers.

The problem then is that self-driving cars will never reach mass market appeal, and as a result, the far more lethal human-driven cars will remain forever on our roads. This is a catch-22. By virtue of avoiding a thing which may sometimes try to kill us, we actually increase our likelihood of being killed.

In other words, we’re better off accepting, say, 1-in-25,000* annual odds that our own car will try to kill us instead of others, rather than maintaining the status quo and risking a 1-in-5,000 chance of dying in a crash. But we humans simply aren’t that logical. Seriously.
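A quick back-of-the-envelope check, using the two rough annual estimates above (they come from the footnote at the end, not from measured data), makes the logic explicit:

```python
# The article's own rough annual estimates (see the footnote below).
status_quo_risk = 1 / 5_000      # yearly odds of dying with mostly human drivers
self_driving_risk = 1 / 25_000   # yearly odds if nearly all cars were autonomous

print(f"Status quo is {status_quo_risk / self_driving_risk:.0f}x riskier per year")
# -> Status quo is 5x riskier per year
```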

The study behind the article then surveyed people and let them decide how they wanted the car to behave. The results were not encouraging: in general, people favored autonomous cars that sacrificed their occupants to save others, but only when those cars belonged to other people. Put another way, nobody wanted such a self-driving car for themselves, but everyone was fine with others having one: the paradox.

The “reasonableness” test

Clearly then, we’re at an impasse. Thing is though, we don’t need to be: we should just model the cars after how a reasonable person would drive in emergency situations. Fortunately, there’s plenty of legal precedent for this approach.

When determining negligence, one of the questions is whether a “reasonably prudent person” would have acted the same way. The textbook case on point involved a taxi driver who swerved onto a sidewalk, injuring a pedestrian, because his passenger had threatened him with a gun. The issue was whether an ordinary person might react this way in such emergency circumstances, or whether the defendant had acted unreasonably and thus negligently. And so we have the so-called “emergency doctrine,” which is a valid defense for otherwise negligent acts.

Here, we can probably make some assumptions about society as a whole: reasonable people will typically do whatever it takes to avoid killing themselves or their passengers, provided such self-protection doesn’t require them to swerve into pedestrians or bicyclists, or perhaps even motorcyclists, with a particular emphasis on avoiding children and the elderly. Mowing down bystanders to save yourself is simply not the way reasonable people react in such emergencies.
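As a thought experiment only (a caricature of the idea, not a claim about how any real car is or should be programmed), the "reasonable person" heuristic might be sketched roughly like this:

```python
# Caricature of the "reasonable person" emergency heuristic described above.
# Every name and data structure here is hypothetical; real systems reason over
# sensor data and trajectories, not neat labels.

VULNERABLE = {"pedestrian", "bicyclist", "motorcyclist", "child", "elderly"}

def choose_maneuver(options):
    """Each option is a dict: {"name": str, "harms": set of road-user labels,
    "harms_occupants": bool}. Pick what a reasonably prudent driver might do."""
    # A reasonable driver protects the occupants, but not by swerving into
    # pedestrians, cyclists, or other vulnerable road users.
    acceptable = [o for o in options if not (o["harms"] & VULNERABLE)]
    if acceptable:
        # Among acceptable options, prefer those that also spare the occupants.
        return min(acceptable, key=lambda o: o["harms_occupants"])
    # If every option endangers someone vulnerable, default to braking in lane.
    return {"name": "brake in lane", "harms": set(), "harms_occupants": True}
```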

So hypotheticals such as those presented in the paper — and all over the internet — don’t need to be made into such complicated messes: just program cars to drive the way reasonable people would in emergency situations, apply the emergency doctrine to them, and we have our solution.

If ever there was an example of the simple solution being the right one, this is it. This doesn’t need to be so complicated. We already have ample case law addressing such issues; all we need to do now is program self-driving cars accordingly. And even if they still get things wrong now and then — which they will — we will still have a solution orders of magnitude better than the status quo, which is unacceptably bad.

Let’s get these cars on the road now.

Meanwhile, let us know in the comments what you think is the best solution to this dilemma. Do you agree with the emergency doctrine and the reasonableness test?

Follow me on Twitter @MarcHoag
Follow me on Quora
Check out Twibble.io for easy automation of RSS feeds to Tweets!

__________
* My estimate, based on The Economist’s report in August stating that “if 90% of cars on American roads were autonomous, accidents would fall from 5.5m a year to 1.3m.” So figure accidents falling to roughly a fifth of current levels, which should mean your chances of dying decrease by at least that factor.


Originally published at innovately.wordpress.com on October 27, 2015.