Asimov’s laws are flawed and irrelevant

Jakub Mamulski
Jan 15 · 6 min read

Isaac Asimov, although quite an influential figure in pop culture, never came up with anything binding for the world of real robots. He was a sci-fi writer who invented what are generally referred to as the Three Laws of Robotics. Why do we think that, despite being deeply embedded in the lore of his creations, they have never applied to real-life robots?

They aren’t actual laws

The simple reason Asimov’s laws aren’t actual laws is that no state has ever enacted them as binding legislation. Whether through a parliament, a dictator, or a judicial system, postulates that are to become laws have to be accepted by one of those bodies, published, and so on.

This has never happened in the case of Asimov’s laws. They can be considered moral guidelines, but that’s pretty much it. It’s not wrong to follow them, but calling them laws and trying to force others to factor them in simply isn’t right. Just because one set of rules works for a person doesn’t mean it’s going to be universal. This brings us to the second point.

They can hinder creative processes

Let’s say that Asimov’s laws are in force. That would effectively rule out, whether we like it or not, some big contributors to research and development: the military, police forces, and penitentiary institutions. They often harm people for one reason or another, but that’s not the point. We need to look at the bigger picture, so let’s use the nuclear bomb as an example.

Was it used to cause harm and kill people? Yes, it was.

Was the research behind it helpful when it comes to nuclear power plants and using uranium as fuel in submarines? Yes, it was.

A less drastic and even more fitting example of military technology being adapted to the consumer market is the Willys MB, a.k.a. the Jeep. It was originally developed as a military vehicle.

The point is that the military could develop a Terminator-esque combat robot meant to do harm, but the leg technology could later be used to create quick bipedal courier robots that wouldn’t tire and would fail less often than their human counterparts. It’s sad, but war and suffering create massive opportunities for technology researchers, since governments will most likely spend more money on them under such circumstances than in peaceful times.

This fella’s legs are the interesting part

Introducing stuff from sci-fi novels into reality might seem silly

Isaac Asimov was a sci-fi writer. As we all know, it’s fun to read a writer’s works knowing that the lore in them is consistent and that the pieces paint a bigger picture together. Let’s look at Star Wars, for example. We don’t want to express our opinions on Disney’s movies, so we’ll talk about the times before the release of the 7th part. Having been developed since the 70s, the lore of Star Wars grew so big that there were books dedicated to describing it. I have a few books about the technology and the races of sentient and non-sentient creatures in the lore. It’s fun to find out that Wookiees have consistently been portrayed as huge and hairy because Kashyyyk, their home planet, forced them to evolve that way, and that there are plenty of books about the place and the species.

Asimov understood that consistent lore creates reader satisfaction, so all the robots in his works abided by the laws. But taking something straight out of sci-fi novels and trying to make it part of reality, often without knowing what the technology is capable of or what the market demands, might turn out to be a flop or an oddity. Imagine trying to call everyone “muggles” after the Harry Potter series. That just doesn’t feel right. We’re not saying that such works of art aren’t visionary or that they can’t influence the real world, but using their fragments as-is poses many difficulties and simply might not work well.

Manufacturers should have a voice, too

We’ve read an article (here’s the link; give it a read, it’s a well-written opinion) which stated that scientists should have a strong voice when it comes to creating laws and guidelines regarding robots, and that they might expand Asimov’s laws. We’re not saying that the laws will never be implemented in one form or another, but when it comes to legal regulation, theorists don’t always grasp the current state of affairs and technology, so manufacturers should also be included in the lawmaking process if laws are actually going to be developed. Honestly, who’d like all laws to be made exclusively by theorists or, even worse, politicians? Back to the point: the linked article mentions Marc Rotenberg’s (president of EPIC) proposals for a fourth and a fifth law of robotics:

Fourth Law: Robots must always reveal their identity and nature as a robot to humans when asked.

Fifth Law: Robots must always reveal their decision-making process to humans when asked.

What Mr Rotenberg missed is that if those were to become laws, all robots, even cobots or bomb-defusing robots, would need some sort of input channel (voice recognition or a keyboard), a communication module (which could be hardware, software, or both), and a way to express the information. That would not only increase costs, it would also force designers to focus on features that are useless in the field their robots are meant for. Such ideas are often detached from present-day reality and technological possibilities.

We don’t really see the need for Universal Robots’ creations to have speech modules

What about platforms?

We at Turtle Rover wonder how these laws would apply to our rovers. They are, after all, a platform for building robotic projects. This means one person could make a remote-controlled mini snow plow with one, and it would be just as valid a project as another person’s small, autonomous, spy-like robotic assassin. Who is to take responsibility for users’ actions? Is it exclusively the users, or should we also be prosecuted because we provided the platform for building such devices?

The laws don’t answer such questions, and they don’t anticipate platforms being a thing. Thus, Turtle Rover and ROS would be taken out of the equation, wouldn’t they? In their vanilla state, at least, they aren’t devices that could harm human beings, but they surely could be after modifications. This not only creates a massive question of responsibility, but also what I’d call a gray zone of activity that Asimov’s laws wouldn’t cover.

What do Turtle Rover’s Twitter followers think about it?

We have asked our Twitter followers about the relevance of Asimov’s laws, and one of them responded with a short, yet extremely accurate tweet:

We’re not sure why we got so few replies, but that’s OK (maybe it’s a time zone thing?). Jotarun made a point about technology not being advanced enough to be effectively regulated by Asimov’s laws, and we agree with that statement. We’d like to end the post right here. What are your opinions on the topic? Feel free to share them in the comments.

Leo Rover Blog

It's a great knowledge base both for Leo Rover Community and mobile robotics enthusiasts. Learn how to design, what is inverse kinematics, how to connect Turtle to other devices, and much more!

Jakub Mamulski

Written by

Community Manager at Turtle Rover. Cyclist, hipster, amateur musician and a jack of all trades.
