If an automated car had to choose between killing its three female passengers by crashing into a barrier and running over one child in the street, which call should it make?

When three U.S.-based researchers started thinking about the future of self-driving cars, they wondered how these vehicles should make the tough ethical decisions that humans usually make instinctively. The idea prompted Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan to design an online quiz called The Moral Machine. Would you run over a man or a woman? An adult or a child? A dog or a human? A group or an individual?

By 2069, autonomous vehicles could be the greatest disruptor to transport since the Model T Ford was launched in Detroit in 1908. Although 62 companies hold permits to test self-driving cars in California, the industry remains ethically unprepared. The Moral Machine was designed to give ordinary people some insight into machine ethics, adding their voices to a debate often limited to policymakers, car manufacturers, and philosophers. Medium spoke to Bonnefon and Shariff about what their results tell us about one of the future’s greatest moral dilemmas.

This interview has been edited and condensed for clarity.

Medium: What’s the difference between the way humans and machines make moral decisions?

Jean-François Bonnefon: Moral decisions made by humans are influenced by so many things. People react on the basis of hormones, and they don’t do that in a reasoned way, not when they make decisions really fast. You and I could sit and spend an hour discussing how we would like to react, but there’s no point; we cannot program ourselves to do what we would like to do, because we act on instinct. But with machines, we can tell them what we want them to do.

Azim Shariff: Machines have the luxury of deliberation that humans don’t have. With that luxury comes the responsibility of deliberation as well.

What did The Moral Machine tell you about how we want machines to act?

Azim Shariff: All countries had a preference for saving young people over old people, and almost every country had a preference for saving women over men. While the preferences all went in the same direction, there were differences in how strongly countries felt about those choices. In Eastern countries, such as Japan, there was less preference for sacrificing the old in order to save the young. But that preference was turned up in the West.

Jean-François Bonnefon: It is perhaps more interesting to look at macroeconomic factors. All countries showed a disturbing preference for saving people of higher status [for example, executives over homeless people]. We found this is quite strongly linked to the level of economic inequality in a country. Where there was greater inequality, there was a greater preference to sacrifice the homeless.

How much attention should self-driving car manufacturers pay to your results?

Azim Shariff: That’s the critical question: how much should we follow what the demos is saying here? If the public wants us to do one thing, is that how cars should be ethically programmed? From a purely democratic standpoint, we should give the public what they want. But there are people who are trained to think through ethical decisions, and maybe we should listen to them instead. I don’t think we should default simply to what the majority says, but I do think it’s useful to know what the public prefers. It suggests what kind of pushback we’re going to get for the moral codes built into the cars.

In an earlier study, you found that people thought an autonomous vehicle should protect the greater number of people, even if that meant sacrificing its passengers. But they also said they wouldn’t buy an autonomous car programmed to act this way. What does this tell us?

Azim Shariff: People recognize it is more ethically responsible to save more lives. But people are self-interested, and it might be a hard sell to do what’s ethical. When Mercedes-Benz said that if they could only save one person, they would save the driver and not the pedestrian, public outrage made them retract that statement. This demonstrates an interesting dilemma for car companies. If you say your car would prioritize the passenger, there will be public outrage. If you decide to treat all life equally but imperil the life of the person who bought the car, you might take a hit to your bottom line, because people might not buy that car. The manufacturers we’ve talked to want the decision taken out of their hands; they want regulation.

Should the U.S. government follow Germany’s example and issue ethical guidelines for self-driving cars?

Azim Shariff: I do think that’s a good idea. The advantage of government involvement is that you don’t leave ethical decisions up to the market. I don’t trust the market to find the most ethical decision. Having each manufacturer create its own guidelines, so consumers can choose between algorithms, would be very chaotic.

Are people’s values aligned closely enough to achieve universal guidelines?

Azim Shariff: I do think that would be possible. People agreed on the general points, such as saving more people rather than fewer. But there’s some variation, for example in how you should treat people who are jaywalking. That is probably due to inconsistencies in how those norms are enforced within different cultures. In Japan, jaywalking is not a normal thing to do, but in Delhi or New York, people jaywalk all the time. That might be a tricky one, and cars might have to be responsive to it somehow.

While it would be possible to achieve universal ethics, I don’t think it’s necessary. When I was bringing a car from Canada to the U.S., I had to modify the headlights. That’s harder with hardware than it is with software. With self-driving car software, it would be possible to switch the algorithm to match local customs. I don’t think it would be a huge problem to have some cultural variations.

An engineer working on Google’s own self-driving car project said your results were not so significant because the real answer in these scenarios would almost always be “slam on the brakes.” Are The Moral Machine’s scenarios actually relevant to real life?

Azim Shariff: These situations are going to be incredibly rare. What is certainly more immediately relevant is how every small decision that the car makes is going to shift risk. If the car gives more space to children, that decision will put the driver at higher risk. This small decision might not end in fatalities, but the question we’re going to have to ask is how the car should distribute risk between people. That question is going to be much more complicated, but also much more urgent.