Robots kill, and they’re just getting started — Gabriel Hallevy

Brain Bar
May 31, 2016

For Gabriel Hallevy, one of the world’s leading legal thinkers in the emerging field of criminal law as it applies to intelligent machines, it started in a movie theater. The professor at Ono Academic College in Israel had already established himself in areas like criminal law, criminal justice, the laws of evidence, and even corporate law when he sat down to watch I, Robot.

Meet Dr. Hallevy at Brain Bar Budapest

While the movie didn’t do much for Will Smith’s career, the seed it planted in Hallevy’s mind helped advance legal theory surrounding future crimes committed by intelligent machines to a point at which it’s now keeping pace with — if not out ahead of — the technologies themselves.

The book, “When Robots Kill,” might not have won the attention of the Washington Post, the Boston Globe and other mainstream reviewers had Hallevy’s working title prevailed. “A General Theory for the Criminal Liability of Artificial Intelligence Entities” just doesn’t quite have the same zip. But the notions within would have been fascinating — and important — regardless. He’ll be sharing these and other views at Brain Bar Budapest 2016.

Robots have been killers for quite some time. In 1979, a worker at Ford Motor Company earned the unfortunate distinction of being the first person ever killed by a robot when a machine smashed him with its arm at a storage facility. In 2007, a robotic military weapon killed nine and wounded 14 in a training accident in South Africa.

These sorts of catastrophes will become more common as robots increasingly enmesh themselves in our lives. Consider, for example, the autonomous vehicles that the likes of Google, Fiat, Tesla, GM and many others are developing at Grand Prix pace. By all accounts, autonomous vehicles drive better than most people do. But what happens when, for example, such a vehicle is forced to decide between smashing into the back of a car with a “Baby on Board” sign and veering into a cyclist? People are going to get killed, and the legal system’s current mainstay — manufacturer liability, which depends on ineptitude or malfeasance in the production process — is ill-equipped to deal with it.

“Today we are in a vacuum — a legal vacuum,” Hallevy said. “We do not know how to treat these creatures.”

Hallevy suggests filling the vacuum by extending precedent. For a human to commit murder, he says, there must be a) a murderous act and b) awareness of that act. Camera-eyed robots have met the awareness requirement since the 1980s, he argues, pointing to the robots that assist guards in South Korean prisons as an example.

“They’re moving through the center of a prison, and when they see something that moves, they only have to identify if it is a prisoner who is trying to escape,” Hallevy says. If they make that identification and alert human guards, Hallevy says, “This is awareness.”

As artificial intelligence advances, such awareness will increasingly expand and deepen, coming closer to imitating the human mind. And so, Hallevy argues, the best way to deal with robots committing criminal acts is to apply criminal law designed for human transgressors — or, by extension, corporate ones, perhaps a closer analogy. Criminal liability has been imposed on these nonhuman entities since the 17th century, he adds.

“Either we impose criminal liability on AI entities, or we must change the basic definition of criminal liability as it developed over thousands of years, and abandon the traditional understandings of criminal liability,” Hallevy says.

Where there is crime, there must be punishment. Society gains little in the way of deterrence or rehabilitation by putting an autonomous vehicle away for three life sentences. But you can’t imprison a corporation, either. You fine corporations, you ban them from certain business activities, or you shut them down entirely. The same could hold for a robot. Rather than being allowed to go about the activities for which it was designed, a criminal robot might “help out in the community library, help clean the streets or other such things to contribute to the community,” Hallevy says.

As for the argument that taking the robot out of service is an undue burden to the owner, Hallevy responds with an analogy of a person walking his pet tiger down the street.

“It’s my property. Can anyone tell me not to use my property the way I want? Yes!” Hallevy says. “If I am using robots in a manner that may jeopardize the community, the community has the right to defend itself from the robot and the user of the robot.”

Hallevy says he’s looking forward to addressing these and other issues at Brain Bar Budapest.

“The festival is an excellent opportunity to discuss this topic and address our deep concerns about it,” he says. “I am looking forward to hard and thought-provoking questions, and will try to answer them. Besides, I have never been to Budapest before, and the festival will be a great way to take in the city.”
