Driverless Cars, The Law and the Trolley Problem

CM30
Published in ART + marketing · May 21, 2016

Note: I am not a lawyer, and the following article is entirely hypothetical. If any lawyers are reading, this could be a really interesting subject to delve into, especially given the rising popularity of autonomous vehicles.

When it comes to any discussion about driverless cars, some version of the old ‘trolley problem’ will probably be brought up. Namely: if the car is travelling too fast to stop, and it has a choice between hitting another car or person (and hence killing them) or hitting a wall (and killing its own ‘driver’), what should it do?

It’s a classic philosophy problem that’s circulated in modified forms for decades, and there are tons of possible answers. One is somehow calculating the ‘value’ of one group of people’s lives versus another’s and choosing between them in about half a second, like a twisted form of the Hedonic Calculus. Another is deciding that the driver/owner/passengers are simply more important than the bystanders. A third is refusing to actively choose at all, on the grounds that deliberately sacrificing anyone violates the Categorical Imperative, so Kant would consider such a choice wrong in any situation.
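To make that first approach concrete, here’s a deliberately crude, entirely hypothetical sketch of what such a ‘value calculation’ might look like in code. Everything in it, the `Outcome` class, the weights, the `choose` function, is invented for illustration; no real vehicle is known to work this way.

```python
from dataclasses import dataclass

# Entirely hypothetical sketch of a "hedonic calculus" style decision.
# The Outcome type, the weights, and choose() are all invented for
# illustration; no real autonomous-vehicle stack is known to work this way.

@dataclass
class Outcome:
    description: str
    expected_deaths: float      # estimated fatalities if this option is taken
    expected_injuries: float    # estimated non-fatal injuries

def harm_score(outcome: Outcome) -> float:
    """Lower is 'better' under this crude utilitarian weighting.

    The 10:1 death-to-injury weighting is an arbitrary placeholder --
    exactly the kind of value judgment that nobody has settled,
    philosophically or legally.
    """
    return outcome.expected_deaths * 10.0 + outcome.expected_injuries

def choose(options: list[Outcome]) -> Outcome:
    # The car has perhaps half a second to make this call.
    return min(options, key=harm_score)

if __name__ == "__main__":
    swerve = Outcome("hit the wall, sacrificing the passenger", 1.0, 0.0)
    stay = Outcome("continue ahead, hitting two pedestrians", 2.0, 0.0)
    print(choose([swerve, stay]).description)
```

Even this toy version exposes the core issue: someone has to pick those weights, and whoever does so is implicitly deciding whose lives count for more. That is exactly where the legal questions come in.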

But we’re not interested in the philosophy here. It’s fascinating, yes, but it’s perhaps not as worrying as the legal issues these situations raise.

What do I mean by legal issues?

Well, imagine this situation plays out. The car does one thing or the other, and either its passengers or the people outside it are killed.

What happens? Well, in today’s sue-happy society, someone will probably sue someone else over the loss of life. Maybe the families of the killed bystanders sue the company because the car ‘chose’ its occupants’ lives over theirs. Or the families of the passengers sue the company because the car chose the lives of the outsiders over theirs. Or heck, both groups sue each other; that’s pretty likely too.

So now what?

Because we now have a massive dilemma. Somehow, a jury has to decide whether the car was right to kill the people it did in order to save the others. Somehow we have to go through weeks or months of hearings and trials to figure out whether, in this circumstance, the car did the right thing. How does product liability factor into this? Is the company at fault for programming the car to do what it did? What about the engineers who wrote the software?

Either way, it’s a minefield. And it doesn’t stop there. Because whether you like it or not, the law is nowhere near identical across different regions of the planet. Every one of the 50 US states might come to a different decision about how a driverless car should act, who would be liable, and so on. As might each of the roughly 196 countries in the world.

What then?

Does the car need to somehow keep track of all this legal information in case of a deadly crash? What if you drive from the UK to France, or from the US to Canada, and the car then does what’s required in the former but not the latter? Are we going to see a whole lot more awkward court cases involving autonomous vehicles?
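To illustrate the scale of that bookkeeping, here’s a hypothetical sketch of what a per-jurisdiction rules lookup might involve. The jurisdiction codes, rule names, and lookup function are all invented; no such standard or database exists. The point is simply that the car would need some table like this, kept current for every place it can physically drive.

```python
# Hypothetical sketch of per-jurisdiction collision-behaviour rules.
# Every name here (the table, the rule strings, rule_for) is invented
# for illustration; no such legal standard or database exists today.

CRASH_RULES = {
    # jurisdiction code -> behaviour required in an unavoidable collision
    "UK":    "minimise_total_casualties",
    "FR":    "protect_pedestrians_first",
    "US-CA": "protect_occupants_first",
    "CA-ON": "minimise_total_casualties",
}

DEFAULT_RULE = "brake_and_hold_lane"  # fallback when no rule is on file

def rule_for(jurisdiction: str) -> str:
    """Return the behaviour required where the car currently is.

    In reality the car would have to geolocate itself, map its position
    to a legal jurisdiction, and keep this table current for roughly 196
    countries and all 50 US states -- continuously, and provably.
    """
    return CRASH_RULES.get(jurisdiction, DEFAULT_RULE)

if __name__ == "__main__":
    # Crossing from the UK into France mid-journey changes the rule:
    for place in ("UK", "FR"):
        print(place, "->", rule_for(place))
```

Note what happens at the border: the same car, running the same software, is suddenly subject to a different legally required behaviour. The awkward court cases practically write themselves.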

And let’s face it, we won’t know what’s going to happen here until something goes horribly wrong. Many laws are brought in as a response to incidents and problems that have already become common, and it’s unlikely that a group of lawmakers is sitting around right now deciding how the legal framework should deal with unavoidable collisions involving driverless cars.

Either way, it’s an interesting problem to consider, and one that’s going to have to be considered very, very carefully by anyone manufacturing these vehicles.

CM30
Gamer, writer and journalist working on Gaming Reinvented.