Autonomous Vehicles Will Choose Who Lives and Dies — What Are We Going To Do About It?

This past weekend, I drove a Tesla for the first time, and man was it awesome: a powerful machine that makes no noise on the outside, with the latest in software replacing a historically clunky user experience on the inside. It felt like an iPad on wheels, and the drive sparked my imagination about what the future of the automobile will be.

Lately, and rather quickly, dialogue around self-driving cars has moved from tech silos to mainstream media. The hype is justified. Cars are a significantly underutilized asset (they are parked 96% of the time), they take up a disproportionate amount of real estate (all the parking lots in the US combined are roughly the size of Connecticut!), humans are bad drivers (accident frequency is going up and to the right), and climate change is real (despite what Washington thinks these days).

Beyond saving lives and protecting the environment (as if those issues weren’t enough), the implications for existing business models will be transformative. Few periods in history have seen such “hockey-stick” moments, and we’re at the cusp of another one. It’s true a significant amount of labor will be displaced, but second-order effects will create a net increase in new jobs. Marc Andreessen explained the logic for why technology and automation will be a net positive for job growth recently at Code 2017:

Improved technology leads to productivity growth which creates more cost-effective production. This increases purchasing power which frees up personal capital. New industries form to take advantage of newly available capital and new jobs come to power these new industries.

We don’t have to look much further than the byproduct of the rise of the auto to see this logic in action. With the birth of Ford Motor Co. came panic and euphoria that technology would replace all transportation jobs and leave everyone stranded. It’s true that the auto did displace pre-auto transportation jobs, but it also paved the way for entirely new industries. Three big beneficiaries come to mind. First, the construction industry: streets, shopping malls and office complexes are all a reality because of the auto. Second, consumer industries: the actual businesses that lived in these newly constructed complexes needed labor, after all. Finally, the car industry itself. In fact, the auto industry became so large that it had to be bailed out in 2008 just to keep the economy afloat. It’s indisputable that the number of new jobs that came as a byproduct of the transportation revolution was at least 100x the number of jobs that were lost.

It’s easy to envision a future in which productivity soars through the roof and creates completely new industries. The way we think about a “car” will undoubtedly change. Many questions come to my mind: (1) Who will capture the real estate in cars? (2) How will the design of cities change? (3) What will the role of the car be? (4) How will we be able to personalize transportation? (5) Will we be incentivized to own cars because they will become an income generator as opposed to a rapidly depreciating asset? (6) If everyone buys a self-driving car to generate side income, what are we going to do about traffic?

The next level benefits are hard to project out, but the economic opportunities promise to be large. It’s the precise reason why every tech company and auto company is strategizing on how they will navigate the space.

I pride myself on being a technology optimist. While recognizing there are always challenges to society that come with massive disruption (e.g., we seriously need to think through how we will handle the immediate job displacement self-driving cars will cause, even if it eventually smooths out), I see the promise of increased quality of life from new technologies over time.

That being said, I see a major philosophical quandary starting to form based on where we are today. Companies at the forefront of autonomous vehicles are entering an uncomfortable domain: the winners in this space will decide who lives and dies in the future and, by extension, are laying the groundwork for a system that will codify how we value life in society.

One of my favorite thought experiments when I studied Philosophy at Duke was the Trolley Problem. The problem goes like this:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
(1) Do nothing, and the trolley kills the five people on the main track.
(2) Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?

This gets interesting with self-driving cars. We can envision the Trolley Problem unfolding, except without a live human actor. The question no longer revolves around the judgment call of a specific person in the moment. Rather, it will reflect the intentional values of a particular company (more practically, a handful of executives at a company). The winners in this space will decide this issue for the rest of us; the decision logic for the Trolley Problem will be codified into every car that comes off the production line.

What decision would you make? How comfortable are you with a self-driving car making a decision for you in the future?

One of my favorite professors from law school, Jonathan Zittrain, advocates looking at the world from both an “inside-out” and an “outside-in” perspective. He defines “inside-out” as being cognizant of how a specific outcome affects the parties that are part of a given situation; meanwhile, “outside-in” is understanding the effect each outcome, in aggregate, has on the world.

In the self-driving car context, I see the “inside-out” as the value criteria that govern how a self-driving car engages with the Trolley Problem; this encompasses which criteria we choose as well as how we weigh them. I see the “outside-in” as the parameters that surround the governance process. Ultimately, both perspectives together synthesize the issues at play.

Let’s start with the inside-out perspective: How do we think about choosing the criteria that should govern how a self-driving car engages with the Trolley Problem?

Option 1 is to keep it simple: the only criterion would be to minimize the number of lives lost. This likely won’t work for two reasons: (1) the majority of the time, this would lead to a death verdict for the single passenger, which would severely depress ridership. Why get into a self-driving car if, when an accident occurs, you bear the majority of the risk? (2) This schema disregards all other relevant context: we would obviously save one cancer researcher over 100 terrorists.

Option 2 is more complex, and it is where we may strive to maximize utility. The challenge comes in defining both “maximize” and “utility.” The choice between cancer researchers and terrorists is easy. But if we dive into more nuanced examples, the decision will be informed by a combination of weighted value criteria, and this is where it gets tricky.

First, what are the value criteria at hand? Second, how do you attribute relative weight to the different criteria? Top of mind, I can think of a number of criteria to consider: (1) age, (2) health, (3) demographics, (4) income. What about (5) cost of the accident, i.e. what taxpayers would have to subsidize in rebuilding roads and other infrastructure? How does the split of where these dollars come from (state, federal or private) affect the decision? How about (6) the dynamics of the actual accident: should we factor in who was wearing a seatbelt? How about the more subjective variables like (7) “current contribution to society” or (8) “future potential”?
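To make the abstraction concrete, here is a toy sketch of what “weighted value criteria” could look like in code. Every criterion, weight, and number below is a made-up assumption for illustration; no automaker has published anything like this, and that opacity is exactly the problem.

```python
# Hypothetical sketch of weighted-utility outcome selection.
# All criteria, weights, and figures are illustrative assumptions,
# not anything a real vehicle maker has disclosed.

CRITERIA_WEIGHTS = {
    "lives_lost": -10.0,           # fewer deaths scores higher
    "expected_years_lost": -1.0,   # a crude age/health proxy
    "infrastructure_cost": -0.001, # taxpayer-subsidized repair dollars
}

def outcome_utility(outcome, weights=CRITERIA_WEIGHTS):
    """Score an outcome: sum of each criterion's value times its weight."""
    return sum(weights[k] * outcome.get(k, 0.0) for k in weights)

def choose_outcome(outcomes):
    """Pick the outcome with the highest (least negative) utility."""
    return max(outcomes, key=outcome_utility)

# Two hypothetical trolley-style outcomes the car must choose between.
swerve = {"lives_lost": 1, "expected_years_lost": 40, "infrastructure_cost": 5000}
stay   = {"lives_lost": 5, "expected_years_lost": 150, "infrastructure_cost": 0}

best = choose_outcome([swerve, stay])  # here, the single-casualty outcome
```

Note how much moral weight hides in those three constants: nudge the numbers and the car’s “choice” flips, which is precisely why who sets the weights matters.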

It gets even more complicated with the “outside-in” implications: there are a number of additional considerations when evaluating the surrounding parameters within which these criteria operate.

Data control puts an inordinate amount of power and leverage in the hands of a few select companies. How does this control affect policy outcomes? Maybe cities eager to bring self-driving cars to their town “turn a blind eye” because of the immediate gains they can reap from reduced auto casualties and increased job growth. On the opposite end of the spectrum, governments that want to be heavily involved in defining these value criteria may enact regulation prohibiting businesses from entering their state unless they are able to influence the algorithms.

And does this mean self-driving cars will operate under different parameters state by state? What about country to country? It seems inconceivable that merely riding in a self-driving car would put you at different levels of risk in Georgia vs. New York, or in the U.S. vs. India. What does this mean for the future of travel, consumer protection and insurance?

So what’s the net-net of all of this? The technology is clearly there; self-driving cars will become a reality over the next decade and promise massive benefits in the form of aggregate casualty reduction, job growth and optimized land use. What is unclear is what we will tolerate for these gains.

This will be the space race of the next decade. The stakes are high; the countries and companies that figure this out are going to have a massive systemic advantage over the rest of the world.

Going forward, as we engage with AI, we face a tricky balance: we want to bake our value system into AI, but we also have to figure out how AI can rise above our shortcomings. The last thing we want is to carelessly codify a system that assigns relative value to human life.