Sharing Our Lives with Robots:
Will We Remain in Control?
What decisions should artificially intelligent
robots be allowed to make?
This is not a technical question for engineers and computer scientists, but one for society at large.
Amidst the growing debate around artificial “Superintelligence”, the Singularity, and robotics, I frequently hear worries that humans will become obsolete. Working as a researcher in robotics, I am often personally confronted with this concern about robot autonomy, one that reaches well beyond the academic circles I frequent.
In particular, one question that keeps coming up is whether we humans will remain in control of the decisions governing our lives, or if eventually robots will “take over” and “act on their own”.
However, the question of control between man and machine is not a binary one, but a gradient between having full control and having none at all. As a result, human societies need to think deeply and decide — for each particular case of artificial intelligence and robotics — what we want to retain control of, and how much.
In fact many roboticists, myself included, don’t see the future as the classic literary narrative of “humans against machines”, but instead envision a world in which humans and machines work together. To achieve this, we study the mechanisms of teamwork and companionship between humans and robots, both in the workplace and at home.
We often talk about future human-robot teams in professional settings. This could be a factory employee working shoulder-to-shoulder with a robot to manufacture or assemble a part, or a human nurse in a hospital supervising a crew of medicine delivery robots. It could be a school teacher with a robotic assistant helping students when they get stuck, or an office worker interacting with a messenger and meeting-coordinator robot.
We also envision domestic robotic companions; people sharing their personal lives with robots. Here, too, the applications are broad. Home robots could help humans with house chores, like cooking or cleaning; they could entertain them, or encourage them to exercise. Robots could assist people with hobbies such as carpentry or jewelry making, or help children with their homework and music lessons.
In both cases, that of work teammates and that of home companions, our research is inspired by the prospect of robots engaging people in long-term, tightly-coupled, and personal relationships.
We call this field of study “Human-Robot Interaction”, an exciting interdisciplinary research area investigating the interface between machines and people. After more than a decade in this field, I find myself personally most interested in the role that body language and timing play in human-machine relationships. I believe these concepts to be crucial to high-quality shared activities, and to have an immense emotional impact on humans. This is based on the understanding that people are extremely sensitive to timing and nonverbal communication.
People want a robot to take initiative
and not just be a “dumb tool” that
does exactly what you tell it to do
In my experiments, I find that people prefer a robot that predicts what they want and acts a little ahead of time, even when that means the robot might make mistakes because it tried to guess what the human was about to do. Following that finding, we explore robot improvisation as a model for their interaction with people. According to my research, a robot can be seen as more intelligent, more committed, and more effective when it takes calculated chances, i.e. behaves in a less controlled, more spur-of-the-moment way.
This is especially true of repetitive tasks. In our laboratory experiments we see that in these situations, people want the robot to take more initiative and not just be a “dumb tool” that does exactly what you tell it to do. We found this surprising, having expected that people would want robots to be as precise and predictable as possible and simply fulfill human commands. Instead, we found that there are times when giving up control to a machine actually enhances people’s experience of their teamwork with a robot, as well as their opinion of the machine.
In other experiments, we see the psychological reach of a robot’s body language and gestures. We find that a robot that seems to be enjoying music (by dancing to it and tapping its foot) causes people to rate a song more highly than the same song heard alongside a robot whose movements are unrelated to the music. In other words, people’s opinion of the song was influenced by the robot’s perceived enjoyment. In another set of experiments, we have people talk to a robot about an emotionally difficult, personal event. We see that our participants feel more positive about the robot when it reinforces their storytelling with subtle body gestures and short sentences suggesting that it cares about them and their experience.
In some cases a robot’s mere presence can change people’s behavior. In a recent study, we put people in a situation where they could make more money by not complying with our instructions on a repetitive, boring task. We find that a robot in the room, looking at a person every once in a while, causes the person to be more compliant with our instructions. In fact, we compared this effect to having another human in the room, also occasionally looking at the study participants. The robot was as effective at changing participants’ behavior as the other human was.
People project social and interpersonal beliefs
into the robot and think about it similarly
to the way they do about other humans
What we learn from these studies is that humans are willing to suspend their disbelief to some extent when it comes to robots. They project social and interpersonal beliefs “into the robot”, and in some ways think about it similarly to the way they do about other humans. This could be why, when they do something repeatedly with the same robot, they expect the machine to take initiative, and to make its own judgment about the situation, instead of just waiting for their commands. In fact, I believe that this is an inherent part of our expectations from artificial intelligence.
This brings us back to the original question: How much control are we willing to give up, and how much decision-making should a robot be allowed to do? As I mentioned above, I often get asked this question as a roboticist. But, in my opinion, this is not a technical question for engineers and computer scientists, but one for society at large.
First, we have to understand that there is no clear borderline between mechanics and free will. An automatic door opening when someone comes near is — in some sense — a robot making a decision to let the person in. Now imagine this robot enhanced by a camera and face-recognition software and programmed to prevent the entry of recognized shoplifters. Taking this idea one step further, the door could prevent the entry of people who are classified by a machine learning and pattern recognition algorithm as having a high likelihood of being shoplifters, or even of just having bad credit.
While this kind of profiling could be connected to overtly racist or sexist stereotypes, it could also be based on a set of perceptual features that do not readily correspond to human-readable statistical properties. In both cases, the door’s “decision” is still just a mathematical mechanism, but it has suddenly become a discriminatory one.
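To make the point concrete, the door’s “decision” can be sketched in a few lines of code. This is a purely hypothetical illustration: the feature names, weights, and threshold are invented, and no real system or dataset is implied. The point is that a gatekeeping decision reduces to comparing a learned score against a number.

```python
# Hypothetical sketch of the "smart door" described above.
# Feature names, weights, and threshold are invented for illustration.

def risk_score(features: dict) -> float:
    """Toy linear model: in a real system these weights would be
    learned from past data, not written by hand."""
    weights = {"prior_flags": 0.6, "visit_frequency": -0.1}
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

def door_opens(features: dict, threshold: float = 0.5) -> bool:
    # The entire "decision" is a comparison against a number,
    # yet it determines who may enter.
    return risk_score(features) < threshold

# Someone whose features resemble past offenders is refused entry,
# even though no human ever wrote a rule naming them.
print(door_opens({"prior_flags": 1.0, "visit_frequency": 0.0}))  # → False
print(door_opens({"prior_flags": 0.0, "visit_frequency": 3.0}))  # → True
```

Nothing in this mechanism is malicious; the discrimination emerges from which features happen to correlate with past outcomes.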
People will probably not hand over
the decision to end life support for a family
member to a machine or an algorithm
Therefore, humans have to decide how much of the decision-making process they are willing to forgo, and in what situations. Today, with GPS navigation, users are giving “intelligent” software the power to decide which route to take on their drive home. This is a decision they could make, but choose not to. However, I do not think people will hand over the decision of whether to end life support for a family member to a machine or an algorithm.
In any case, control is not a yes-no question, it is a gradient, and we have to decide, on a case-by-case basis, what we want to retain control of, and what we are comfortable giving up.
To look at these questions more seriously, we have started an initiative in Israel called “Robots in Human Society”, together with Professor Dan Halperin and Lior Zalmanson from Tel Aviv University. There are similar initiatives in other countries, for example the RoboEthics database, the We Robot conference, or the open letter on Research Priorities for Robust and Beneficial Artificial Intelligence. Ours is an annual forum where we discuss questions of automation, robotics, and control with experts from a variety of fields from literature and ethics to law and social sciences.
The forum reflects my strong belief that we are facing questions in which every facet of society needs to engage. In our meetings, we encounter legal questions of robots’ responsibility for their actions; we discuss cultural questions of the relationship between AI and gender; and we raise economic questions about the future labor market in an increasingly automated society. For example, economists expect that the introduction of robotic team members will increase income inequality, which will be felt most acutely by lower-income socioeconomic groups, who are not part of this conversation today.
Some of the biggest questions related to AI
and robotics are not technology or
engineering related, but societal and cultural
In sum, the question of control between man and machine is not black and white. Giving up some control can make robots better teammates and better companions, and could free us up to make other, more meaningful, decisions. That said, some of the biggest questions related to AI and robotics, including how much control we want to give up, are not technology or engineering questions, but societal, cultural, and personal, and should therefore not be left solely to the robotics departments.
I would like to thank Noa Morag, Heather Knight, Julia Fermentto,
and Lior Zalmanson for valuable comments to improve this text.