An Autonomous Car Might Decide You Should Die

But that prospect isn’t as scary as it sounds

Mitch Turck
Published in Backchannel
8 min read · Mar 10, 2015

The “trolley problem” is an old, familiar thought experiment in ethics. Lately it has been enjoying a rather outlandish level of exposure. Much of the credit goes to journalists applying the problem to autonomous cars. Those discussions end up feeling remote and theoretical. Yet by revising the problem just slightly, it can contribute to both a realistic and urgent debate on self-driving cars… which is precisely what I’m fixin’ to do in this article.

Google Trends chart for “trolley problem”. Apparently the holiday season makes people really, really philosophical.

First, let’s make sure everyone is on the same page. What follows is one iteration of the original trolley problem:

Version 1: There is a runaway trolley barreling towards a group of five people standing on the track, none of whom are cognizant of the trolley’s approach. They will surely be killed unless the trolley is diverted to its alternate track, on which only one person is standing. You are situated at the track switch, and are the only one able to divert the trolley so that it kills one person rather than five. Would you throw the switch?

Version 2, “The Fat Man”: Because most of us would claim that throwing the switch is the obvious answer, an alternate version of the problem was created to instill a more realistic sense of responsibility and challenge the otherwise utilitarian response. In this alternative, there is no second track, nor any switch you can throw to save the five people. The only thing you can do to stop the trolley is to shove an extremely fat man standing beside you onto the track, which will bring the trolley safely to a halt—but obviously kill the fat man in turn. Now, the question becomes less black and white: is it best to intentionally cause harm to one person for the greater good? Or is it best not to let your own judgment of a situation decide anyone’s fate?

Ok, so wait: I must push beloved Family Feud host Louie Anderson into a train, in order to save five assumed non-Family Feud hosts? I can’t get behind that.

I suggest a new version of this debate, one that I think is utterly necessary as we prepare to hand responsibility to the robots.

Consider “The Infinite Trolley.” You are now the conductor of the trolley, steaming down that single track towards a solitary victim-to-be stuck in your path. You can simply hit the brakes to bring the trolley to a stop and save this person.

There is, of course, a caveat.

Your trolley is infinitely long. It’s filled with as many passengers as it takes to make you reconsider stopping the train. Thousands? Millions? Billions? All those people, each of them with their own needs, expectations and responsibilities, all of which will be thrown off by varying degrees should you decide to stop their trip. Now then, what’s your price? How long does your trolley need to be for the convenience of many to outweigh the life of one?

So far, I’ve posed this dilemma to four very intelligent people. While their reactions and conclusions varied, not one of them was willing to consider that this could be a real-world problem.

It is.

Americans take roughly 250 billion trips with their cars annually. In the process, we kill over 30,000 people through traffic accidents—which means that one car-related death is deemed an acceptable price to pay for you to have the convenience of taking 8 million trips. Or, for the sake of this article, for 8 million of us to take one trip in a very large vehicle.
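If you want to check the fuzzy math yourself, the back-of-the-envelope arithmetic is a one-liner; the inputs are just the rough annual figures above, nothing more precise:

```python
# Rough annual U.S. figures cited above; both are approximations, not exact statistics.
annual_trips = 250_000_000_000   # ~250 billion car trips per year
annual_deaths = 30_000           # ~30,000+ traffic deaths per year

# Trips "purchased" per traffic death, under this crude accounting.
trips_per_death = annual_trips / annual_deaths
print(f"{trips_per_death:,.0f} trips per death")  # ~8,333,333, rounded down to 8 million here
```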

So, with just a dash of fuzzy math, the Infinite Trolley problem is solved: you would choose to run down the victim if your trolley had more than 8,000,000 passengers on board. And when I say “you would,” I mean “you do.” Now, don’t tell me you’re refusing to take responsibility on the grounds that running someone over involves intent, and is an entirely different act from merely being aware that someone will have been run over for your benefit. To that, I can only respond with the wisdom of South Park: it sure is nice to have your cake and eat it too.

Those who have read my earlier piece, I Find Your Lack Of Faith In Autonomous Cars Disturbing, probably know where I’m going with all this, but humor me: if you’re among the many who opted to throw the switch or stop the train, why are you not aggressively supporting autonomous fleets?

Virtually all evidence and logic indicate that we would kill far fewer people with an autonomous vehicle grid—even now, with the technology still in its infancy. A bit of reading on the topic, however, reveals that many vested parties, from politicians to car manufacturers to Google-related sources, have all implied some version of the following statement:

Autonomous car technology needs to be perfected before we can bring it to the market. 99% isn’t good enough.

This statement is—in addition to being mathematically misleading to most of the population—effectively a manifestation of the decision not to throw the switch for entirely selfish reasons. By refusing to release a safer technology to the public until it is “perfect,” these parties are telling us that they don’t want to be responsible for having made the decision to kill one person, because the decision to let five die is an outcome society has already learned to accept without judgment. That could be rational if the follow-up to such a statement was, “so let’s pour significant funding into autonomous tech, and fast-track established developments to market.” But for the life of me, I can’t seem to find that blip on the radar of our nation’s transportation strategy.

I suppose I can understand this position coming from a Google or Uber, because they’ve never before been party to any conversation about the cost of lives lost to automotive travel; by sticking their noses into the situation, they run the risk of being seen as causing new deaths rather than preventing all those “regular” deaths we don’t seem to mind. But to the entities who do answer for the 30,000+ annual deaths—the NHTSA, DOT, car manufacturers, the dissenting public—shame on you. You’re making the case to kill thousands of people every year, and getting off scot-free thanks to little more than misinformation. There will very soon come a point where autonomous technology proves itself consistently safer on certain roads or in certain cities, and it would be a tragedy for the powers that be to neglect or hinder such a development simply because it requires owning up to a decision and jarring people loose from the routine that keeps them predictably tranquilized and non-participating.

Good grief. Let me regain my composure, because I haven’t gotten to the point of why the Infinite Trolley problem needs to be addressed.

What the Infinite Trolley does—as illustrated by our little math exercise above—is stage a scenario that forces you to realize there is a currency serving as the building block for the things we value. Loosely put, our example depicted one human life as being worth 8 million units of currency, but that currency’s origins go deeper than the mere convenience of transportation. And even if you disagree entirely with the math and believe one human life is priceless, the logic would follow that two human lives are doubly priceless, and so you too make choices based on currency.

Why is this important? Because as autonomous vehicles have progressed, we’ve come to realize that we can engineer them to solve problems like the trolley dilemma, thanks to the speed and volume with which they can process and act on data. Moreover, the implication of the statement “we need self-driving cars to be perfect” is not only that we can engineer these cars to solve such problems, but that we must.

Trouble is, in order for self-driving cars to solve these problems, they need a grasp of said “currency” to work out the decision logic as situations present themselves. In short: they need a level of awareness about us that we don’t even have about ourselves.

Journalists citing the trolley problem have posed questions such as, “will a self-driving car decide in an emergency that it is best to kill the driver rather than kill a pregnant mother?” This is not the shocking wake-up call we are about to encounter. No, what you might find instead is that the autonomous grid chose to slaughter a raving lunatic dancing in the street because he was holding up traffic, and as such, was mathematically hurting society by way of the fact that his life is worth 500,000 units of currency, whereas the time and productivity lost by the thousands he is delaying amounts to 1,000,000 units of currency. Should we instead decide not to program self-driving vehicles with any logic more complex than, say, Isaac Asimov’s laws of robotics, we would likely find the entire autonomous grid grinding to a useless halt on a daily basis for the sake of saving a life. Such is the consequence of stopping the Infinite Trolley.
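To make that hypothetical concrete, here is a deliberately crude sketch of the kind of “currency” comparison described above. The unit values and the decide_obstruction function are invented purely for illustration; no real autonomous system is known to work this way.

```python
# A deliberately crude, hypothetical sketch of the "currency" comparison above.
# The values and the function are invented for illustration; they describe no real system.
VALUE_OF_A_LIFE = 500_000  # hypothetical units assigned to the obstructing person's life

def decide_obstruction(people_delayed: int, cost_per_person: int) -> str:
    """Weigh the aggregate cost of the delay against the 'value' of the obstructing life."""
    delay_cost = people_delayed * cost_per_person
    return "proceed" if delay_cost > VALUE_OF_A_LIFE else "stop"

# The article's example: thousands of delayed travelers, 1,000,000 units of lost time in total.
print(decide_obstruction(people_delayed=2_000, cost_per_person=500))  # -> "proceed"
```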

On some level, I believe many of us realize we are fast becoming inefficient parties in the face of artificial intelligence. Science fiction movies where the robots have “deemed us expendable” give many people cause for concern, fleeting as the thought might be. However, we may find out all too soon that autonomous cars will have been the first salvo fired off by robots in such a future. We program them to understand our values. They make decisions that reflect our values. The decisions frighten us. What does that mean?

Read Part II here for a practical approach to solving this problem, and chime in with your opinion.


