Trolley Folly

Edwin Olson
Nov 14, 2018 · 3 min read

A favorite debate around self-driving vehicles is the “trolley problem”: a self-driving vehicle finds itself in a pickle and must choose between two terrible outcomes (see this Washington Post article). For example: should a self-driving car crash into a school bus in front of it, or should it drive up onto the curb and take out a group of senior citizens sitting on a park bench?

The trolley problem is an attractive philosophical exercise because the question is neatly formulated — it’s a simple A versus B choice. However, the answers are not neat or simple, which is why they can lead to such engaging conversations.

Trolley problems differ from the real world in three major ways. In the real world…

  1. We don’t fully know the state of the world: the school bus may be going faster than we think; the park bench may be stronger than we think.
  2. The consequences of our choices are far from clear: hitting the curb might deflect our car; the school bus might be empty (and thus no children could be injured).
  3. We don’t know what other people will do: perhaps the school bus driver will swerve out of the way; perhaps the seniors are veterans of Cirque du Soleil, already catapulting each other out of harm’s way. (It could happen!)

The point is that the real world is terribly complex. Unlike a trolley problem, the consequences of an action are incredibly uncertain. As a result, a moral preference for a specific outcome simply isn’t a rational basis for picking an action. It’s not that the moral preference is wrong, it’s that the prediction of the outcome is probably wrong.

So, how should self-driving cars make decisions? It turns out that there’s a simple way of making good choices in highly uncertain settings: pick actions that delay disaster.

Suppose you have two disastrous options, one that results in catastrophe in 100 milliseconds and the other that results in catastrophe in 500 milliseconds. The right thing to do is to “pick” the second one. Why? Because our prediction of catastrophe in 500 milliseconds is far more likely to be wrong than the one only 100 milliseconds in the future. A lot can happen in a short amount of time — we might discover a fantastic third option, or our brake performance might be better than expected, or one of the other road users might find their own way to forestall the catastrophe, buying everyone yet another 500 milliseconds.
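The “buy time” rule above is simple enough to sketch in a few lines of code. This is my own illustration of the idea, not May Mobility’s actual planner; the `Action` type and the `pick_action` helper are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    time_to_collision_s: float  # predicted time until catastrophe, in seconds

def pick_action(actions):
    """Choose the action that defers the predicted catastrophe longest.

    The further out a prediction is, the more likely it is to be wrong --
    and the more time everyone has to avoid the outcome entirely.
    """
    return max(actions, key=lambda a: a.time_to_collision_s)

options = [
    Action("swerve", 0.100),  # catastrophe predicted in 100 ms
    Action("brake", 0.500),   # catastrophe predicted in 500 ms
]
print(pick_action(options).name)  # -> brake
```

Note that the rule never needs a moral ranking of outcomes; it only needs a (rough) estimate of how soon each predicted disaster arrives.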

On a practical note, the most important thing to do in a crisis is to reduce the kinetic energy of the vehicle. Every split second of braking is a huge benefit to vulnerable road users. A reduction in speed from 40 mph to 30 mph, for example, reduces the risk of pedestrian fatality from about 50% to 10% (see page 12 of this NACTO report). Picking actions that defer a predicted collision gives the vehicle more time to reduce speed, and so physics also argues for the strategy of buying time.
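The physics here is just the v² in the kinetic energy formula: slowing from 40 mph to 30 mph sheds nearly half of the car’s energy. A quick check (the 1,500 kg mass is an illustrative figure for a passenger car, not a number from the report):

```python
def kinetic_energy(mass_kg, speed_mps):
    """Kinetic energy in joules: KE = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_mps ** 2

MPH_TO_MPS = 0.44704
mass = 1500.0  # illustrative passenger-car mass, kg

ke_40 = kinetic_energy(mass, 40 * MPH_TO_MPS)
ke_30 = kinetic_energy(mass, 30 * MPH_TO_MPS)

# Fraction of energy shed is 1 - (30/40)^2 = 43.75%, independent of mass.
print(f"energy shed slowing 40 -> 30 mph: {1 - ke_30 / ke_40:.1%}")
```

Because the fraction depends only on the ratio of speeds, every bit of braking before impact pays off quadratically.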

The problem with trolley problems is that they distract from the most important aspect of safe driving. The reality is that even humans generally perform dreadfully in a crisis; good drivers differ from bad drivers not because they’re better at emergency maneuvers, but because they don’t get into as many dangerous situations in the first place (as I wrote about in a previous post). I’d rather buy a car that never has to choose between the school bus and the park bench than one that makes precisely the right moral judgement every time.

May Mobility

May Mobility is transforming the experience of getting you where you want to go. Our vision is to unlock a better life today through self-driving transportation.

Written by Edwin Olson, CEO of May Mobility and Associate Professor of Computer Science at the University of Michigan.
