O.M.G. THE TROLLEY PROBLEM!!!
or: how I learned to stop worrying and love autonomous cars
- Lawyers exist.
- Artificial intelligence does not exist.
- Engineering exists.
- The Trolley Problem is dumb anyway.
For those who haven’t heard of the Trolley Problem, let me briefly summarize. In part one, we have a runaway trolley that we can let crash into the center of town, killing millions of kind, wonderful people. Or we can divert it onto a side track where it kills only one person, who is very old and was a mean person anyway. Clearly no real issue here: we divert the trolley and kill the mean old lady. But in part two of this serious ethical dilemma, having established that n lives are of greater value than n-x (where x>=1) lives, we now have no side track onto which to divert the trolley. What we do have is a really fat dude whom we can throw in front of the trolley, bringing it to a stop and saving countless others. So the question becomes an ethical dilemma: if we can save numerous lives, do we willfully and intentionally kill someone?
This problem has been repeatedly cited in the context of autonomous cars. Somehow, people imagine the poor car stuck in this awful ethical dilemma of having to decide who will live and who will die.
This is ridiculous, and frankly can only be posed as an issue by people who do not understand how our world works. And one of the fundamental parts of our working world, often maligned and overlooked, is lawyers.
Why do we have lawyers? It’s not merely to have a group of people who will always be fair game for horribly offensive jokes (“How many lawyers can you fit in a coffin? ALL OF THEM!!!”). No, lawyers exist to figure out who to blame when crap hits the fan. For example, if an autonomous car kills someone, who do we blame?
The answer will never be the computer. (Or at least will not be for a very, very, VERY long time.) The answer will always be the people who built the computer.
To understand why the lawyer argument is so relevant here, we come to our second point in the overview above: there is no such thing as artificial intelligence. At least, not in the Isaac Asimov, “I, Robot” sense of the phrase. We’re closer to having flying cars than we are to having real sci-fi AI. Yes, we have many clever little algorithms that can do some surprising and impressive things, but when it comes down to it, we can still predict what these algorithms are going to do. Or more specifically, we can design these AI algorithms to do whatever we want.
This matters because when we talk about cars out there on the road making decisions, what we are really talking about are engineers in little cubicles deciding how the car will operate. And after the car has killed someone, and the lawyer steps in to do their job, it is these engineers and their bosses who are going to get sued for what happened. Because ultimately, these are the people who are responsible for what the car did (and this may continue to be true even if we ever have real AI).
Thankfully engineers in cubicles know that they are designing their products in a world full of lawyers. This is what makes lawyers truly valuable in our society. They don’t merely tell us who to sue after someone has died — they also tell us how to make things so that we won’t get sued. Put this way it might sound horrible, but in its most positive spin lawyers help us design things in an ethical manner.
Engineers will not create a system that selects people for death. The legal reasons alone are enough to prevent it. But a full appreciation of the irrelevance of the Trolley Problem comes from an understanding of how engineering works. Engineers design things for the real world, and they go out and measure and test and gather statistics on how the real world operates, and make design decisions based on that. And then they set their design loose in the real world, and measure how it works.
What would that engineering procedure look like for choosing whom to hit in a car accident? Presumably there would be a lot of simulation. And hope and guesswork. Then you’d set your system loose in the real world and test it out. But this is where we hit an engineering wall. The tiny number of autonomous cars, combined with a presumed safety benefit from a machine doing a better job than a person (even in beta testing), would most likely mean that there’d be essentially zero data to feed back into your design. It is irrational engineering to create a complex system that can almost never be tested in the real world. And when it is tested, then what? Each accident is likely so different from the next that coming to a simple conclusion about what we did wrong and what we should do better is probably beyond the scope of the engineering process. Which brings us back to the lawyers, who will sue for creating a ridiculously complicated and unproven system and setting it loose in the real world.
Hit the brakes. Turn away from people. Turn away from other objects. This simple approach can be supported by engineering, and improved over time. Selecting people for death simply cannot be engineered, if for no other reason than that we ourselves may not be able to tell whether the computer’s choice was right, and therefore defensible.
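To make the contrast concrete, here is a toy Python sketch of what “hit the brakes, turn away from people, turn away from other objects” might look like as a fixed priority list. Everything in it (the `emergency_maneuver` function, the obstacle format) is hypothetical and invented for illustration; no real vehicle runs on a dozen lines like this. The point is only that such logic is a short, auditable list of rules, not a life-valuation calculation.

```python
# Hypothetical sketch of the "simple rules" approach described above.
# None of these names come from any real autonomous-vehicle codebase.

def emergency_maneuver(obstacles):
    """Pick a steering direction using fixed priorities: avoid people
    first, then other objects. Braking is always applied."""
    actions = {"brake": True, "steer": "straight"}
    # Directions blocked by a person are avoided with absolute priority.
    hard_blocked = {o["direction"] for o in obstacles if o["kind"] == "person"}
    # Directions blocked by anything else are avoided if possible.
    soft_blocked = {o["direction"] for o in obstacles if o["kind"] != "person"}
    # First choice: a direction clear of both people and objects.
    for direction in ("straight", "left", "right"):
        if direction not in hard_blocked and direction not in soft_blocked:
            actions["steer"] = direction
            return actions
    # Fallback: accept hitting an object rather than a person.
    for direction in ("straight", "left", "right"):
        if direction not in hard_blocked:
            actions["steer"] = direction
            return actions
    # Every path is blocked by a person: just brake and hold course.
    return actions

print(emergency_maneuver([{"kind": "person", "direction": "straight"},
                          {"kind": "debris", "direction": "left"}]))
# → {'brake': True, 'steer': 'right'}
```

Note what is absent: there is no comparison of how many people are in each direction, no weighing of ages or worth. That absence is precisely what makes the behavior testable and defensible.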
There’s an even more fundamental reason why the Trolley Problem isn’t a problem. We all live by a principle in our society that most of us (except perhaps doctors) tend to take for granted: First Do No Harm.
Fortunately, the Trolley Problem is an ideal model for demonstrating this principle. Suppose we push the fat man in front of the trolley. What happens next? We hope that the trolley stops, the fat man dies without suffering, and we save lots of lives. But who knows? Maybe the trolley derails and crashes into a high-rise, which catches fire and burns down, killing even more people? Or the fat man survives, and he seeks his revenge on our families in a vendetta that crosses generations?
This is why the Trolley Problem is ridiculous right out of the gate. Basically, we human beings are complete morons, and we really have no idea what effect our actions will have. More importantly, we know that we don’t know what will happen. In this context, killing the fat man is clearly indefensible. But more importantly, the car is in the same situation. Whatever we simulated as possible outcomes, those simulations would be inaccurate. The car, like us, can’t predict the future with sufficient accuracy to make choices about who will live and die. The black ice, the gust of wind, the unknowable traction off the edge of the road, and a million other variables make it an impossible task.
The car makes plans, and God laughs.
Yes, engineers will have to decide what a car should do in the event of an accident. But it will have to be feasible (no sci-fi artificial intelligence), ethical (or at least lawsuit-proof), and supported by available evidence (engineered according to known methods). When someone is killed by an autonomous car, the legal question will then be the more familiar question of “What went wrong?”, and not the anthropomorphic “Why did the car do that?”
The Trolley Problem is, in a sense, misdirection. The important question isn’t about the car’s decision making, but about how the car got into a situation where it was going to crash. Was it a programming error? Or more likely an error from a human driver of another vehicle? A mechanical failure? A drunk or suicidal pedestrian? By the time this alleged Trolley Problem happens, several other things have already gone wrong, and while none of them are as sexy as the Trolley Problem, they are the problems that demand our attention, and which we might actually be able to solve. Design the trolley so that it’s unlikely to roll out of control.
Someday we may have true artificial intelligence. But that day is so far off that it has nothing to do with any rational current discussion of self-driving cars. These cars will be designed by engineers who are good at solving problems. But engineers also need to be good at recognizing which imagined problems can and should be ignored. In this category, the Trolley Problem tops the list.