Advance directives for autonomous car drivers

Lately, I’ve been following the advancement of autonomous cars and the ethical dilemmas associated with them. Specifically, variations of the Trolley Problem where decisions about whose life should be saved in an accident are predetermined by the automaker rather than made by the driver at the moment of the accident. For example, in the case of an unavoidable accident where the car must either run into a crowd of kids or swerve suddenly, potentially killing the driver, some automakers have vowed to take the action that always saves the driver. This makes sense from a business perspective: who would buy a car that chooses to kill you in an accident? But from a social perspective, wouldn’t it be better to react in a way that preserves the most lives? And should the government enforce that principle?

In a paradox of sorts, it’s probably better for governments to let automakers carry on with the driver-first mentality. The reason: autonomous cars are far better drivers than humans, so many more lives will be saved overall once they become mainstream, and selling a lot of them will hasten that change.

Opinions vary greatly as to how long it will take for driverless cars to gain widespread adoption, but the technology is evolving rapidly as regulators and policymakers attempt to keep up. Getting back to the Trolley Problem, I’m going to float an idea I haven’t seen anywhere else: say someone is about to buy an autonomous car and explicitly wants it to save the most lives in the case of an unavoidable accident, even if that means losing their own life. Perhaps that person could, during the purchase contract process, have the option to state just that! Think of it as an advance directive that changes the default programming of the vehicle: instead of always saving the driver no matter what, the car would be programmed to save the highest number of lives possible, no matter what. What do you think?
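To make the idea a bit more concrete, here’s a toy sketch of what such a directive might look like in code. To be clear, everything here is invented for illustration: no automaker exposes a CrashPolicy setting, the names Maneuver and choose_maneuver are hypothetical, and a real system would reason under far more uncertainty than a couple of integers. The point is only that the ethical default becomes an owner-selected parameter rather than a hardcoded constant.

```python
from dataclasses import dataclass
from enum import Enum, auto


class CrashPolicy(Enum):
    """Hypothetical owner-selected directive, set once at purchase."""
    DRIVER_FIRST = auto()         # default: always protect the occupant
    MINIMIZE_TOTAL_HARM = auto()  # advance directive: save the most lives


@dataclass
class Maneuver:
    name: str
    expected_deaths_others: int  # estimated fatalities outside the car
    driver_survives: bool        # whether the occupant is expected to survive

    @property
    def total_expected_deaths(self) -> int:
        # Count the driver as one more expected fatality if they don't survive.
        return self.expected_deaths_others + (0 if self.driver_survives else 1)


def choose_maneuver(options: list[Maneuver], policy: CrashPolicy) -> Maneuver:
    """Pick the maneuver consistent with the owner's directive."""
    if policy is CrashPolicy.DRIVER_FIRST:
        # Consider only maneuvers the occupant survives (if any exist),
        # then minimize harm to everyone else.
        survivable = [m for m in options if m.driver_survives]
        candidates = survivable or options
        return min(candidates, key=lambda m: m.expected_deaths_others)
    # MINIMIZE_TOTAL_HARM: weigh every life equally, including the driver's.
    return min(options, key=lambda m: m.total_expected_deaths)


options = [
    Maneuver("continue into crowd", expected_deaths_others=3, driver_survives=True),
    Maneuver("swerve into barrier", expected_deaths_others=0, driver_survives=False),
]

print(choose_maneuver(options, CrashPolicy.DRIVER_FIRST).name)         # continue into crowd
print(choose_maneuver(options, CrashPolicy.MINIMIZE_TOTAL_HARM).name)  # swerve into barrier
```

Same car, same unavoidable accident, two different outcomes, and the difference is a single flag the owner chose with full knowledge at signing. That’s the whole proposal in miniature.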

Thanks for reading! Please press the like button and leave comments below.