May We Be Cruel to Robots?

Rabbi Shmuly Yanklowitz
May 2, 2018 · 5 min read



“What is your itinerary?”
“To meet my maker.”

This exchange is featured in the climactic scene of the first episode of Westworld, HBO’s riveting series based on the Michael Crichton film. Since its premiere in 2016, Westworld has used its provocative backdrop of an Old West theme park populated by lifelike robots to explore how humans treat synthetic humans. In Westworld, a guest may murder, rape, or abuse any robot in the park without consequence. After all, the “hosts” are only simulations of flesh and sinew. But the series is premised on a larger meta-narrative: what does treating humanlike beings this way mean for the future of human morality?

A scene from HBO’s Westworld (CC BY-NC-SA 2.0)

The answer is not as straightforward as it should be. For generations, philosophers and writers have portrayed robots and artificial intelligence in various guises. In the popular imagination, robots have ranged from the good (Robby the Robot in Forbidden Planet) to the malevolent (HAL 9000 in Kubrick’s 2001: A Space Odyssey) to the comic rogue (Futurama’s Bender). Yet while these figments of entertainment reflect the cultural zeitgeist, robots are now approaching real ubiquity. By 2023, consumer robots are projected to be a $15 billion industry, with companies like Amazon reportedly working on machines that will perform domestic service tasks. Indeed, we are already on the verge of more complex systems, such as self-driving cars, which this year caused the technology’s first pedestrian fatality in my home state of Arizona.


Leaders in the field have begun to envision and predict the moral issues of normalized human/robot interaction. Though details are not well known, Google maintains an ethics board to work on these questions. Jonathan Zittrain, Harvard Law School professor of internet law, has indicated that the coming challenges are less science fiction than conscientious oversight: “Our work is less to worry about a science fiction robot takeover and more to see how technology can be used to help with human reflection and decision-making, rather than to entirely substitute for it.” Peter Norvig, Google’s director of research, sees the primary challenge as ensuring that artificial intelligence (AI) benefits everyone in society. No one can predict exactly how AI systems will evolve, and that uncertainty speaks volumes about the new territory humanity is driving toward daily.

I’d suggest five reasons why we should not act cruelly toward robots:

First, we assume, without proof, that these robots lack the capacity for feelings, sentience, and consciousness. Presumably, we would all agree that with those capacities they would deserve full rights. But it remains unclear how or where consciousness and sentience emerge, or whether they can be reproduced technologically. The psychologist Paul Bloom and the neuroscientist Sam Harris write:

If we did create conscious beings, conventional morality tells us that it would be wrong to harm them — precisely to the degree that they are conscious, and can suffer or be deprived of happiness. Just as it would be wrong to breed animals for the sake of torturing them, or to have children only to enslave them, it would be wrong to mistreat the conscious machines of the future.

Second, according to Aristotle, humans are creatures of habit: the more we engage in an act, the more likely we are to repeat it. The concern is not only about building character but also about the increased likelihood of acts of cruelty toward others. Some suggest that we should vent aggression through harmless outlets. But it is quite likely that expressing an emotion does not release it; it instills it more deeply within oneself. Immanuel Kant argued: “For he who is cruel to animals becomes hard also in his dealings with men.” Do we really want a culture of male aggression, violence, harassment, and rape to be sanctioned and normalized to a new degree?

Third, at some point in the near future it may become unclear who is a robot and who is a human, blurring basic ethics. Warfare, for example, could be conducted on the premise that we are not truly killing human beings.

Fourth, robots will learn how to behave from what we teach them (and even model for them). If we expect the most advanced artificial intelligences not to harm us, then we should not act cruelly toward them.

Lastly: simple virtue ethics. In addition to weighing uncertain consequences, we must consider how our actions shape our inner life of virtue. The character we cultivate each day inevitably emerges with clarity. While a robot may feel no real pain, a society that permits the rehearsal of cruelty enables destructive behavior. True, one may hit boxing equipment without legal consequence. But once a being responds to pain, normative attitudes change, and “rights” (however we decide to define them) must take hold.

Robots are no longer the future. They exist. Beyond their direct effect on humans, there is no question that our interactions with robots will reflect the values of our society. The human/robot relationship is the most expansive branch of futurology, and ascribing an ethical dimension to it is this generation’s challenge and obligation: are we worthy makers? That question lingers.

Rabbi Dr. Shmuly Yanklowitz is the author of thirteen books on Jewish ethics.

(The opinions expressed here are the author’s and do not represent any organization with which he is affiliated.)
