When an AI Finally Kills Someone, Who Will Be Responsible?

Legal scholars are furiously debating which laws should apply to AI crime

MIT Technology Review

--

Here’s a curious question: imagine it is the year 2023 and self-driving cars are finally navigating our city streets. For the first time, one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply?

Today, we get an answer of sorts thanks to the work of John Kingston at the University of Brighton in the UK, who maps out the landscape in this incipient legal field. His analysis raises some important issues that the automotive, computing, and legal worlds should be wrestling with in earnest, if they are not already.

At the heart of this debate is whether an AI system could be held criminally liable for its actions. Kingston says that Gabriel Hallevy at Ono Academic College in Israel has explored this issue in detail.

Criminal liability usually requires an action and a mental intent (in legalese, an actus reus and mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems.

The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or animal, who is therefore deemed to be…
