Will Asimov’s Laws of Robotics Be Applied to Self-Driving Cars?

The self-driving car is perhaps the first practical and useful form of the intelligent personal robot that many people will own, interact with, and use regularly, and that is not a toy. It is also one of the first useful applications of artificial intelligence (AI). Will we require these robots to follow Isaac Asimov’s Laws of Robotics (also known as the Three Laws of Robotics)?

Asimov was a writer, mainly of science fiction, and a professor of biochemistry at Boston University, with a Ph.D. in biochemistry from Columbia University in New York. He wrote about 500 books: mostly science fiction, but also science fact, history, and religion. He died in 1992.

Asimov is one of my favorite science fiction authors, and I have read many of his non-fiction books as well, including one that helped me through engineering school. Asimov’s Laws of Robotics were first introduced in his 1942 short story “Runaround.” I first read them in his novel The Caves of Steel when I was a teenager.

Asimov’s Laws of Robotics are intended to be programmed into every robot for the safety of humans, in a way that cannot be bypassed. Here they are, in order of priority for the robot:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

If these Laws of Robotics were applied to self-driving cars, they would be built into the car’s programming and could not be changed by the owner, short of switching to manual driving mode.
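
As a purely illustrative sketch, and not any real vehicle’s architecture, here is one way such a priority ordering might be encoded. Every name in it (PlannedAction, Constraint, select_action, and so on) is hypothetical; the point is that the laws live in a frozen structure with no interface for the owner to alter them, and that lower-priority laws give way first.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable, List

@dataclass(frozen=True)
class PlannedAction:
    """A candidate maneuver, reduced to the properties the Laws care about."""
    name: str
    harms_human: bool = False      # would this maneuver injure anyone?
    obeys_order: bool = True       # does it follow the current human command?
    damages_vehicle: bool = False  # would it destroy or damage the car?

class Priority(IntEnum):
    FIRST_LAW = 1   # do not harm humans, by action or inaction
    SECOND_LAW = 2  # obey human orders, unless that conflicts with the First Law
    THIRD_LAW = 3   # preserve the vehicle, unless that conflicts with Laws 1 and 2

@dataclass(frozen=True)
class Constraint:
    priority: Priority
    is_satisfied: Callable[[PlannedAction], bool]

# A frozen, module-level tuple: there is deliberately no runtime API for
# the owner to remove or reorder the laws.
LAWS = (
    Constraint(Priority.FIRST_LAW,  lambda a: not a.harms_human),
    Constraint(Priority.SECOND_LAW, lambda a: a.obeys_order),
    Constraint(Priority.THIRD_LAW,  lambda a: not a.damages_vehicle),
)

SAFE_STOP = PlannedAction("minimal-risk stop", obeys_order=False)

def select_action(candidates: List[PlannedAction]) -> PlannedAction:
    """Relax constraints from lowest to highest priority, so the Third
    Law is surrendered first and the First Law is never surrendered."""
    for lowest_kept in (Priority.THIRD_LAW, Priority.SECOND_LAW, Priority.FIRST_LAW):
        active = [c for c in LAWS if c.priority <= lowest_kept]
        for action in candidates:
            if all(c.is_satisfied(action) for c in active):
                return action
    return SAFE_STOP  # nothing satisfies even the First Law: stop the car

options = [
    PlannedAction("swerve into a pedestrian", harms_human=True),
    PlannedAction("scrape the guardrail", damages_vehicle=True),
]
print(select_action(options).name)  # scrape the guardrail
```

Note that the fallback is a full stop rather than “pick anything”: if no maneuver satisfies even the First Law, the sketch refuses to choose, which is itself a design decision the Laws do not dictate.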


I have a few questions about self-driving cars and Asimov’s Laws of Robotics:

* Is the safety of the humans inside the car a higher priority than that of the humans outside the car? The Laws of Robotics say nothing about prioritizing which humans to protect.

* What decision would the self-driving car make if faced with an unavoidable choice between running into a crowd of ten people and driving off a cliff, killing only the one passenger? The Laws have no scale for the number of humans not to harm (see the sketch after this list).

* What would the programming be if the car were not owned by the passenger but were a rental car or taxicab? Would the rental car or taxicab company be allowed to have the programming make such decisions in a way that reduces its legal liability, independent of the moral implications? One can imagine a lawyer helping to define how the Laws of Robotics are implemented in a client’s fleet of self-driving cars, for the business and legal benefit of the client.

* Is speeding, or breaking some other minor traffic law, ever acceptable for a self-driving car? This question matters because there are times when the best way out of a situation is to violate a minor traffic law. Would a self-driving car’s programming allow it to do the same, even though this may momentarily create a small risk for the passengers?
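
None of these questions has a settled answer, and a small thought experiment shows why. The sketch below is invented for illustration (Outcome, expected_harm, and every weight in it are hypothetical); it scores outcomes by a weighted count of people harmed, and the entire moral question from the second bullet collapses into a single tunable number.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    description: str
    occupants_harmed: int
    bystanders_harmed: int

def expected_harm(o: Outcome, occupant_weight: float) -> float:
    """Weighted headcount: occupant_weight > 1 favors people inside
    the car, occupant_weight < 1 favors people outside it."""
    return occupant_weight * o.occupants_harmed + o.bystanders_harmed

def choose(outcomes: List[Outcome], occupant_weight: float) -> Outcome:
    # Pick the outcome with the lowest weighted harm.
    return min(outcomes, key=lambda o: expected_harm(o, occupant_weight))

dilemma = [
    Outcome("swerve off the cliff", occupants_harmed=1, bystanders_harmed=0),
    Outcome("continue into the crowd", occupants_harmed=0, bystanders_harmed=10),
]

print(choose(dilemma, occupant_weight=1.0).description)   # swerve off the cliff
print(choose(dilemma, occupant_weight=20.0).description)  # continue into the crowd
```

The arithmetic is trivial; the problem is that the First Law gives no guidance on choosing occupant_weight, and the rental-car company’s lawyer from the third bullet would presumably argue for a different value than a safety regulator would.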

Asimov’s Laws of Robotics have influenced not only other science fiction writers but also the scientists and engineers who are developing robotics and AI. There is no guarantee that AI robots will follow Asimov’s Laws of Robotics; for that, the laws would have to be programmed in such a way that they cannot be removed or altered. It all comes down to the programming.

The award-winning science fiction writer Robert J. Sawyer wrote on his website in the early 1990s:

“First, remember, Asimov’s ‘Laws’ are hardly laws in the sense that physical laws are laws; rather, they’re cute suggestions that made for some interesting puzzle-oriented stories half a century ago. I honestly don’t think they will be applied to future computers or robots. We have lots of computers and robots today and not one of them has even the rudiments of the Three Laws built-in. It’s extraordinarily easy for ‘equipment failure’ to result in human death, after all, in direct violation of the First Law.” Clearly, Mr. Sawyer does not think that the Three Laws will be used in the real world.

Robin R. Murphy of Texas A&M University and David D. Woods of Ohio State University wrote in “Beyond Asimov: The Three Laws of Responsible Robotics,” published in IEEE Intelligent Systems:

“The three laws have been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society’s expectations about how robots should act around humans. …Even medical doctors have considered robotic surgery in the context of the three laws.” They also propose their own alternative Laws of Robotics, which focus more on human actions.

We already have many robots that do not follow Asimov’s Laws of Robotics, such as the drones used by the military in combat. They are designed to kill people or bomb buildings at the direct instruction of their human controller. But is such a drone really different from a gun? The drone certainly has more technology, but it is still just a gun, or an automated airplane dropping bombs. Is it really an AI robot? It is not what Asimov was thinking about when he wrote his Laws of Robotics. He had in mind an AI robot capable of making decisions independently of humans (see the Second Law). The military drone as used today does not make independent decisions, as far as I know.

Norman Bel Geddes, the 20th-century industrial designer, futurist, and pioneer of streamline design, wrote in his 1940 book Magic Motorways: “These cars of 1960 and the highways on which they drive will have in them devices which will correct the faults of human beings as drivers. They will prevent the driver from committing errors. They will prevent his turning out into traffic except when he should. They will aid him in passing through intersections without slowing down or causing anyone else to do so and without endangering himself or others.”

Looking ahead from 1940 to 1960, Bel Geddes saw a future in which the self-driving car would protect humans from themselves, although he did not fully describe anything like the Laws of Robotics. He was the creator of Futurama, a diorama sponsored by General Motors at the 1939 New York World’s Fair, which forecast the future of automobile transportation and was the most popular exhibit at the fair. His vision was not realized by 1960, but it is being implemented by car technology developers now.

For a self-driving car to implement Asimov’s Laws of Robotics, the robot would need to be super-intelligent, even more so than the computers that have mastered chess and now routinely beat the best human players in the world. Chess is a game with set rules and a limited number of options and outcomes. A self-driving car has significantly more options and decisions to make, especially if the Laws of Robotics are to be followed.

Asimov created the fictional “positronic brain” for his robots, which gave them the ability to think and act in a nearly human way. Such a brain is also necessary to implement Asimov’s Laws of Robotics, because the computational task would otherwise not be possible. The timing of the robot’s decision-making is crucial: latency in making life-and-death decisions is not acceptable, especially in a real-world driving situation.
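
To put a rough number on the latency point (the figures here are invented for illustration), at highway speed every moment spent deliberating is distance traveled before any maneuver can even begin:

```python
# Toy time-budget arithmetic, illustrative numbers only: how much road a
# deliberating car covers before it starts to brake or steer.
def distance_while_deciding(speed_mps: float, latency_s: float) -> float:
    """Meters traveled while the planner is still making up its mind."""
    return speed_mps * latency_s

speed = 30.0  # 30 m/s is roughly 67 mph
for latency_ms in (10, 100, 500):
    meters = distance_while_deciding(speed, latency_ms / 1000.0)
    print(f"{latency_ms:>4} ms of deliberation = {meters:.1f} m traveled")
# Output: 0.3 m, 3.0 m, and 15.0 m -- half a second costs about three car lengths.
```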

Here are some examples of how today’s self-driving cars may mimic Asimov’s Laws of Robotics, along with some possible problems with following these Laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Modern cars have many safety features designed to protect passengers, such as air bags, lane-change assistance, automatic braking, and cameras. But what about the people outside the car? Some of these features may protect them, but others may not. The focus has been on protecting the passengers, and the First Law does not specify which human beings are prioritized for protection.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Which human being’s orders will the self-driving car follow? Let’s assume it will follow orders from the driver. But if the driver is incapacitated, will the car follow the orders of other human beings, such as passengers or people outside the car?
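
One way to picture the difficulty, purely as a hypothetical sketch: the Second Law assumes a single well-defined order-giver, but a real car might hear from several, so the programming would need an explicit, and debatable, ranking of who outranks whom. The Authority ranking and accept_order rule below are invented for illustration.

```python
from enum import IntEnum
from typing import Optional

class Authority(IntEnum):
    """Hypothetical ranking of order sources (lower value = higher rank).
    A real system would also need authentication and a rule for what
    happens when a higher authority is incapacitated."""
    FLEET_OPERATOR = 1  # e.g. a taxi company's remote control center
    DRIVER = 2
    PASSENGER = 3
    BYSTANDER = 4

def accept_order(new_source: Authority,
                 current_source: Optional[Authority]) -> bool:
    """Accept an order if no one has spoken yet, or if the new source
    outranks whoever gave the order currently being followed."""
    return current_source is None or new_source < current_source

# If the driver is incapacitated and silent, a passenger's order stands
# until someone with higher authority overrides it.
assert accept_order(Authority.PASSENGER, None)
assert not accept_order(Authority.BYSTANDER, Authority.PASSENGER)
assert accept_order(Authority.FLEET_OPERATOR, Authority.PASSENGER)
```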

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Modern cars are not yet programmed to protect themselves except as a by-product of protecting the passengers. This will likely be the last of the Laws of Robotics to be implemented because it is quite complex and is not nearly as important as the first two laws.

Conclusion

I do not believe we will have the technology to implement Asimov’s Laws of Robotics in self-driving cars in the foreseeable future. There are, however, certain aspects of the Laws of Robotics that can be implemented through software and hardware design, although the self-driving car will not be aware that it is following Asimov’s Laws of Robotics. It will not be making decisions but merely following its programming and design features.

Maybe Asimov’s Laws of Robotics were never intended to be followed literally, but simply to serve as a design guideline.