Science Fiction Frames: Calibrating AI’s Moral Compass

Human and Robot. Two utterly alien beings, a creator and a creation, trying to bridge the seemingly irresolvable gulf between them.

Artificial intelligence (AI) is popping up all around us. Siri and Alexa are learning by interacting with us to carefully curate our experience of cyberspace, and driverless cars are beginning to hit the streets alongside their human counterparts. Heck, Google has just created AI that can keep secrets from humans. But as AI begins to take on humanlike roles and personas, scientists and technologists face a whole new crop of thorny ethical challenges.

Let’s take a helper-bot, for example. It’s designed to help a person keep active and maintain cleanliness and order in the home. On the surface this seems pretty straightforward. Use GPS to navigate the neighborhood, understand how to use household appliances, and so forth. But beneath the surface, what keeps an individual organized, healthy, and happy can be complicated. This is humorously depicted in the 2012 movie Robot & Frank.

Robot (like Frankenstein’s creature, he doesn’t have a proper name) is hired to assist Frank in his daily endeavors. But it turns out that Frank’s leisure activity of choice is burglary — it’s seemingly the only thing that keeps him motivated toward any physical or intellectual challenge. So in this case, providing aid requires conspiring in an epic jewel heist. To win the heart of a girl, no less.

Suddenly, Robot is forced to operate between two seemingly disparate mindsets. The first is optimizing patient health and well-being, which validates Robot’s programmed purpose. The second is obeying moral norms and the legal system, which define Frank’s behavior as deviant. The conflict between these two paths suggests that some ethical process must be in place to determine whether serving the first can ever justify violating the second.

This internal struggle — to weigh designed purpose against moral reasoning — is something that AI will need to understand. Should a driverless car protect the lives of its few passengers at the expense of a crowd gathered on a street corner? Should Siri automatically contact your doctor or spouse if your health data becomes abnormal? These are big questions that AI creators and implementers must carefully ponder.

An even bigger question is whether scientists and technologists ought to program these moral decisions or leave AI to think, learn, and decide on its own. But perhaps we should save the question of whether humans or robots are best fit to make moral judgments for another day. In the meantime, watch Robot & Frank to see how Robot handles the ethical gymnastics of caring for an inveterate larcenist.


This piece is part of Science Fiction Frames: a series of incisive analyses, thoughtful meditations, wild theories, close readings, and speculative leaps jumping off from a single frame of a science fiction film or television show. If you would like to contribute to the series or learn more, email us at imagination@asu.edu.