Automorality

Who does your car choose to kill? A thought experiment.

Thor Muller
Submersible
4 min read · May 29, 2016


Image by Kai Friis

This will be your first time riding in a driverless car. It begins with a single life-or-death choice that says much about who you are.

As everyone knows, each vehicle on our astonishingly congestion-free roads drives in a style based on the preferences of its passengers. There is no single algorithm that defines how self-driving cars behave, and no monopolistic company or monolithic government regulation overseeing the ethical rules that govern their choices. We, the passengers, decide how the vehicles will behave each time we enter them.

Most of these settings are about convenience: we can ask our car to operate in sight-seeing mode, which favors the most scenic routes, or we can tell it to take the fastest route, which will have the car chart a course through priority toll roads and those nasty, crime-ridden areas that the safety-first types always avoid.

There’s only one passenger preference that really matters, though, and that’s how you want the car to respond to mortal dangers on the road. This is your Morality Preference Selection, or just Moral Pref. Car accidents were once the single biggest cause of accidental death before we automated all the vehicles, but removing the people from behind the wheel reduced this rate to a shadow of its former self. Still, self-driving cars continue to kill thousands of people a year — no amount of environmental modeling, high-res sensors or network feedback loops can eliminate all unexpected shocks. And when these shocks happen it is up to the car to figure out how to respond.

The most common danger is a pedestrian or bicyclist suddenly appearing in front of the car. In theory this shouldn’t happen — cars can slow down before blind turns, and so on — but in practice people and vehicles still sometimes “appear out of nowhere”, and the car is faced with a choice: collide with the unfortunate people, or take dangerous evasive action such as turning into a building at high speed. Much of the time the decision will be obvious because the risk is asymmetrical — cars know that breaking a pedestrian’s leg is an acceptable injury if it saves a passenger’s life*. However, sometimes the danger to both sides is about even.

The most extreme version of this problem is this: who does your driverless car choose to kill if forced to make a choice — the people inside or the people outside the vehicle?

As programmers realized when they gamed out the scenarios, there is no single approach to this problem that avoids bias. Nobody wanted to be responsible for programming a decision of such ethical weight.

So instead they produced a variety of Moral Pref algorithms to address the problem. After much trial and error, the market whittled them down to the three options you have in front of you:

  1. “Me First”
    Selecting this option ensures that the car will always drive in a way that protects your life, even at the cost of grave injury or death to others. You are authorizing the car to mow down a group of children if that is what it takes to protect you (and your fellow passengers) from harm.
  2. “Others First”
    At the other end of the spectrum, passengers may instruct their vehicles to always avoid putting others at risk, even if it means they themselves are more likely to be hurt or killed. You’re allowing your car to kill you if that spares someone else from dying, even if that someone is a ninety-year-old man with a terminal disease, or a paroled murderer.
  3. “Society First”
    You can, alternatively, allow the car to use its judgment to decide who to put in harm’s way. It will make an on-the-fly calculation about whether protecting its passengers or external parties would be most beneficial for society — or, in other words, have the highest social utility. In most cases our cars are able to discern the identities of the people inside and outside the car, allowing them to factor in many considerations, such as the number of people at risk (and their weighted survival probabilities*), their ages and dependents, and the Moral Prefs set in their own vehicles if the collision involves other cars. The Society First algorithm is open-sourced, meaning that anyone can view the decision tree it employs and can even suggest patches to improve its social utility function (a rough sketch of what such a function might look like follows this list).
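To make the social utility idea concrete, here is a purely hypothetical sketch, in Python, of how such a comparison might be scored. Nothing in it is the actual algorithm: the Person fields, the life_value weighting, and the survival probabilities are all invented for illustration.

    # A purely hypothetical sketch of a "Society First" style comparison.
    # Every name, weight, and survival probability here is invented for
    # illustration; the real decision tree is left unspecified above.
    from dataclasses import dataclass

    @dataclass
    class Person:
        years_remaining: float  # rough expected years of life left
        dependents: int         # people who rely on this person

    def life_value(p: Person) -> float:
        # Invented weighting: remaining years plus a bonus per dependent.
        return p.years_remaining + 5.0 * p.dependents

    def expected_utility(survival: dict, people: dict) -> float:
        # Each person's life value, weighted by their chance of surviving
        # a given maneuver, summed over everyone involved.
        return sum(survival[name] * life_value(person)
                   for name, person in people.items())

    def choose_maneuver(maneuvers: dict, people: dict) -> str:
        # Pick the maneuver with the highest expected social utility.
        return max(maneuvers, key=lambda m: expected_utility(maneuvers[m], people))

    # Example: swerving protects the pedestrian but endangers the passenger.
    people = {"passenger": Person(40.0, 2), "pedestrian": Person(70.0, 0)}
    maneuvers = {
        "brake_straight": {"passenger": 0.95, "pedestrian": 0.30},
        "swerve":         {"passenger": 0.50, "pedestrian": 0.99},
    }
    print(choose_maneuver(maneuvers, people))  # -> "swerve" with these made-up numbers

The real version would of course be far messier, which is exactly why the story makes the decision tree open to public inspection and patching.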

Oh, and Moral Prefs are Public

There’s one more factor that may affect your decision. Whichever setting you choose is public. It is displayed on the exterior of the car, broadcast to adjacent cars, and is automatically presented on all your personal internet profiles. Your friends, family and business colleagues will see which of these options you’ve selected.

You will wear your Moral Pref as a badge.

Which Moral Pref do you select and why?

This is a real problem being explored in practice, as MIT Technology Review detailed last year.

My thought experiment here considers how making this a public choice might affect the evolution of our roads. Based on my reading of game theory and social history, I’d bet we’d end up in a state of equilibrium where most people opt for Others First or Society First, though perhaps only after we rough it through a period in which many or most would choose Me First.

Of course, we could just make our cars “flypaper sticky”, as a new Google patent application proposes.

*Thanks to Vinny Lingham for his input on Risk Adjusted Decision-making
