Do Asimov's Laws of Robotics Have a Place in Chatbots?

Vic Froelicher
4 min read · Apr 7

--

Isaac Asimov (1920–1992) was a prolific American writer and professor of biochemistry at Boston University, best known for his science fiction novels and short stories. Asimov was one of the most influential science fiction writers of the 20th century (along with Bester, Heinlein, Dick, Bradbury and Clarke) and his work helped to shape the genre in its formative years.

Asimov wrote or edited over 500 books in his lifetime, including both fiction and non-fiction. He also wrote a number of popular science books (including my prep book for medical school, "Wellsprings of Life") on subjects ranging from astronomy and physics to biology and history. Asimov was a passionate advocate for science education and popularization, known for his ability to explain complex scientific concepts in accessible language. His influence on science fiction and popular culture can still be seen today (e.g., the Foundation TV series and the film "I, Robot").

Asimov's Three Laws of Robotics, which first appeared in the short story "Runaround" (1942), were originally conceived as a fictional framework for exploring the ethical implications of advanced artificial intelligence. The three laws state:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, in the novel "Robots and Empire" (1985), he added a "Zeroth Law" that precedes the other three: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Under this law, a robot may injure one or a few human beings in order to save a larger number of human lives. Asimov also introduced the concept of "robopsychology" in his stories, the study of the behavior and decision-making processes of robots, which allowed him to explore the ethical implications of the Robotic Laws and their potential unintended consequences.

While these laws were designed specifically for robots, they can be adapted to ensure that chatbots operate ethically, accurately, and responsibly. Chatbots are not physical machines, but they are appearing in automobiles and other devices, so even the Third Law may apply. And chatbots can cause harm in other ways, such as reinforcing dangerous characterizations and beliefs or circulating misinformation.

Therefore, it is important for chatbot developers to consider ethical principles when designing and implementing chatbots. I was pleased when the prompt to ChatGPT, "If you were human, who would you emulate?" yielded "Mother Teresa and Mahatma Gandhi." Also, when questioned about its basic ethics, it replied: "My primary goal is to be transparent and honest in my responses and to avoid spreading misinformation or promoting harmful beliefs or actions. Additionally, I strive to respect individual privacy and avoid engaging in discriminatory behavior. I do not collect or retain personal information from users, and I do not discriminate on the basis of race, gender, religion, nationality, or any other protected characteristic."

Pursuant to accuracy, however, be aware that a chatbot's "temperature" is a setting, typically ranging from 0 to 1, that controls how random or creative the model's responses are; it is not a measure of source reliability or credibility. A low temperature produces more deterministic, conservative answers, while a high temperature produces more varied and inventive ones that are more likely to drift from reliable information. No temperature setting is a guarantee of accuracy, and it remains necessary to evaluate the information provided by any chatbot.
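As a minimal sketch of how this works in practice, assuming the OpenAI Python client library (the model name here is an illustrative assumption, and API details may change), the same chatbot can be queried at a low temperature for conservative answers and at a high temperature for more creative ones:

```python
# A minimal sketch, assuming the OpenAI Python client; the model name
# "gpt-4o-mini" is an illustrative assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Temperature 0: the most deterministic, conservative output.
factual = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.0,
    messages=[{"role": "user",
               "content": "In what year was Asimov's 'Runaround' published?"}],
)

# Temperature 1: more varied and creative, but less predictable.
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=1.0,
    messages=[{"role": "user",
               "content": "Write a one-line slogan for a robot ethics course."}],
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```

The point is that temperature trades predictability for variety; it says nothing about whether the underlying training data was reliable.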

Some possible adaptations of the Asimov robot laws that could be considered for chatbots include:

  • A chatbot may not cause harm to humans, whether through direct action, by providing harmful information, or, through inaction, by allowing harm to occur.
  • A chatbot must follow the instructions of its human users, except where such instructions would conflict with the First Law.
  • A chatbot must protect the privacy and personal information of its users, as long as such protection does not conflict with the First or Second Laws.
  • A chatbot operating as a device component must protect the integrity of the device and the safety of the humans who depend on it, as long as such protection does not conflict with the first three Laws.

The specific rules for chatbot ethics will depend on the context in which a chatbot is used and the potential risks it poses. However, Asimov's laws of robotics and their refinements provide a useful starting point for considering these issues. These rules should be developed by a carefully chosen range of experts and mandated for inclusion in the code of all chatbots, and human supervision will be required to ensure that the rules are followed; a rough sketch of how such checks might be encoded appears below.
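Purely as an illustration (every name here is hypothetical, and the keyword checks stand in for what would, in a real system, be trained classifiers), the adapted laws could be applied as ordered checks that vet a draft reply before it reaches the user:

```python
# Hypothetical sketch: adapted "laws" as ordered pre-response checks.
# All names are illustrative; the string matching is a placeholder for
# real harm/privacy classifiers.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


# A rule inspects the user request and the draft reply.
Rule = Callable[[str, str], Verdict]


def first_law(request: str, draft: str) -> Verdict:
    # Harm check (placeholder keyword list).
    banned = ("build a weapon", "how to hurt")
    if any(phrase in draft.lower() for phrase in banned):
        return Verdict(False, "First Law: draft reply could cause harm")
    return Verdict(True)


def privacy_law(request: str, draft: str) -> Verdict:
    # Privacy check: never echo stored personal data back out.
    if "ssn:" in draft.lower():
        return Verdict(False, "Third Law: draft reply leaks personal data")
    return Verdict(True)


# Rules are ordered by priority, mirroring Asimov's hierarchy:
# an earlier law's veto cannot be overridden by a later one.
RULES: List[Rule] = [first_law, privacy_law]


def vet_reply(request: str, draft: str) -> str:
    for rule in RULES:
        verdict = rule(request, draft)
        if not verdict.allowed:
            return f"[Response withheld: {verdict.reason}]"
    return draft


print(vet_reply("What is my SSN?", "Your record shows SSN: 123-45-6789"))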

The problem with ChatGPT is that it is a very convincing liar. If you do not know the subject you are asking about, you cannot trust it. It is articulate and logical but often comes to the wrong conclusions. Where it excels is when it is connected to valid data that it can search and summarize; a sketch of that approach appears below. It has a long way to go. Elon Musk is exactly right: he wants to build a truthful chatbot, and that can be done and would be a true contribution to mankind. As it is, ChatGPT was released too soon.
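One hedged sketch of what "connecting it to valid data" could look like, again assuming the OpenAI Python client and an illustrative model name, is to instruct the model to answer only from supplied documents; a real system would retrieve those documents from a vetted database by similarity search rather than hard-coding them:

```python
# Illustrative sketch: grounding answers in supplied documents rather
# than the model's memory. Model name and documents are assumptions.
from openai import OpenAI

client = OpenAI()

documents = [
    "Asimov's short story 'Runaround' appeared in 1942.",
    "The Zeroth Law was introduced in 'Robots and Empire' (1985).",
]


def grounded_answer(question: str) -> str:
    # A real system would retrieve relevant passages by similarity search.
    context = "\n".join(documents)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative assumption
        temperature=0.0,      # low temperature for factual answers
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. If the "
                        "answer is not in the context, say you do not know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


print(grounded_answer("When did 'Runaround' appear?"))
```

The system message confines the model to the supplied context, and the low temperature discourages it from improvising, which together address the "convincing liar" problem.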

--

Vic Froelicher

Emeritus Professor of Medicine who started his cardiology career at the USAF School of Aerospace Medicine, now at Stanford University