Autonomous Driving Ethics

David Silver · Published in Self-Driving Cars · Oct 12, 2015

Chris Gerdes is a Stanford engineering professor working on driverless race cars. I imagine he’s doing some pretty neat technological work, but he’s made the press recently for a more philosophical reason — the ethics of driverless cars.

Bloomberg doesn’t do a terrific job raising the different ethical issues that might arise for a robot driver, but it gets the ball rolling and it’s not hard to imagine from there:

Take that double-yellow line problem. It is clear that the car should cross it to avoid the road crew. Less clear is how to go about programming a machine to break the law or to make still more complex ethical calls.

One potential dilemma, for example, is how to program for the famous trolley problem. If a computer has to choose between staying on course and killing five people, or veering off the road and killing one pedestrian, what do we program it to do?

What if it’s a 25% chance of killing five people weighed against the certainty of killing one person?
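The strictly utilitarian answer is to just compare expected deaths. Here's a toy sketch of what that calculation looks like (the harm numbers come from the hypothetical above, but the expected_fatalities helper and the decision rule are purely illustrative, not anything Gerdes or Bloomberg actually propose):

```python
# Hypothetical sketch only: a crude utilitarian comparison of two maneuvers
# by expected fatalities. The numbers and the helper name are illustrative.

def expected_fatalities(p_crash: float, people_at_risk: int) -> float:
    """Expected number of deaths if the maneuver goes wrong with probability p_crash."""
    return p_crash * people_at_risk

# Option A: stay on course, a 25% chance of killing 5 people.
stay_on_course = expected_fatalities(0.25, 5)   # 1.25 expected deaths

# Option B: veer off the road, certainty of killing 1 pedestrian.
veer_off_road = expected_fatalities(1.0, 1)     # 1.0 expected deaths

# A pure expected-value rule just picks the smaller number.
choice = "veer off the road" if veer_off_road < stay_on_course else "stay on course"
print(f"{choice}: {veer_off_road:.2f} vs {stay_on_course:.2f} expected deaths")
```

The arithmetic is trivial; the uncomfortable part is that someone has to decide, explicitly and in advance, whether expected deaths is even the right thing to minimize.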

These are pretty extreme examples, but even the more mundane decisions aren’t entirely clear. Should driverless cars adhere rigidly to the speed limit? Even to the 5 m.p.h. limit in parking garages?

What if another driver motions at the car to proceed out of order through a stop sign?

What about a late merge that requires crossing a solid white line?

I don’t expect these to be insurmountable issues, but they will make explicit the extent to which we implicitly assume that our traffic laws will sometimes be violated.

Originally published at www.davidincalifornia.com on October 12, 2015.
