In Robots We Trust?

Published in ACM TiiS · Feb 25, 2019

By Henry Lieberman
MIT Computer Science and Artificial Intelligence Lab (CSAIL)

Photo credit: Cindy Mason

In 2015, AI researcher Cindy Mason staged a Robot Fashion Show at the upscale Stanford Shopping Center, probably the first time such an event was ever held [Mason 15]. She dressed five identical robots as a superhero, a businessman, a housewife, a bride, and a religious monk. People’s reactions to them varied enormously, as they projected their expectations of the costumed roles onto the robots, in the absence of any real knowledge of how the robots were programmed. Mason proposed what she called the Fourth Law of Robotics: “A robot’s external appearance should be consistent with its decisions and actions”. Spectators’ inclination to trust the robots was correlated with their perceived trust in the human roles.

Should you trust a robot? As robots become more and more common in society, we will increasingly have to face that question. And there’s no one answer — just as with humans, sometimes it makes sense to trust, sometimes it doesn’t. The simplest answer is the one you would give for people: if the robot is trustworthy, then you should trust it. If not, not. But how do we decide if a robot is trustworthy?

Relative to AI programs that you might interact with on a screen, the question of trust for robots is much more strongly linked to our feelings about trusting people. As Reeves and Nass showed [Reeves & Nass 96], we can’t help but use our social reasoning and social interaction skills, even when we are dealing with machines. And the more anthropomorphized the AI is, the more we will treat it like we treat people.

Relative to humans, there are factors that tend to make robots less trustworthy, and also factors that make them more trustworthy. Some people find robots scary. They have been frightened by apocalyptic science fiction movies, a distressing number of which rehash the Frankenstein theme: robots go berserk and kill people.

Others are put off by behavior that seems, well, “robotic”. They don’t like stiff, unemotional behavior. They don’t like the rigid programming of robots, which limits interaction with them. You can’t ask a robot why it did something. You can’t give it feedback, however constructive. You can’t even give it a request, however politely, that wasn’t anticipated by its programming.

On the other hand, there are some factors that might actually make a robot more trustworthy than a human in some situations. The robot may be programmed with knowledge, or computational capabilities, that you don’t have. The robot won’t have a personal agenda. It can’t be swayed by biases like racism and sexism.

The articles in this issue of ACM TiiS give you a nice snapshot of the current state of research on robotics and trust. Because robots are only now becoming practical for many situations, and are still not common in public life, we can expect that people’s perceptions of them will change rapidly as they proliferate. But the success of robotics is greatly dependent on getting the trust issue right early on, so now is the time to have this debate.

And, beyond the scope of this issue, trusting robots slides into the more general questions of trusting AI, trusting computers, and trusting technology in general. This problem is starting to be recognized in AI, and a growing movement around “Explainable AI” has arisen to address it. A key concept is transparency: if you understand how something works, you trust it more. In a Harry Potter book [Rowling 98], Mr. Weasley warns, “Never trust anything that thinks for itself unless you can see where it keeps its brain”. Where does AI keep its brain? Especially with current machine learning methods, which aggregate large numbers of small bits of evidence, there may be no explanation of their behavior that is simple to humans. But this TiiS issue also illuminates other paths: assessing risk, experience in collaboration, and the psychological and cultural dimensions of trust.

The Wagner et al. article sets the stage by giving us a framework for thinking about the trust tradeoff in terms of risk management. That’s a useful framework, because it gets us out of the sharply binary trust-or-not dilemma, which may not have a universal answer. They use some of the vocabulary of game theory as a representation, though they’re quick to put distance between themselves and traditional game theory analysis. Perhaps a little bit too quick, I think, because the game theory notions of the Prisoner’s Dilemma and the Ultimatum Game do hold many lessons for the robot trust issue. The choice between “cooperation” and “defection” in the Prisoner’s Dilemma corresponds to the question of whether or not to trust a partner about whom one does not have certainty.
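To make that correspondence concrete, here is a toy sketch of my own (an illustration, not code from the Wagner et al. article): the row player’s conventional Prisoner’s Dilemma payoffs, with “cooperate” read as “trust the partner” and “defect” as “withhold trust”, and a simple expected-value check against an estimate of how likely the partner is to cooperate.

```python
# A toy illustration (my own sketch, not from the Wagner et al. article):
# conventional Prisoner's Dilemma payoffs for the row player, with
# "cooperate" read as "trust the partner" and "defect" as "withhold trust".
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # R: mutual trust rewarded
    ("cooperate", "defect"):    0,  # S: trusted, and was betrayed
    ("defect",    "cooperate"): 5,  # T: exploited a trusting partner
    ("defect",    "defect"):    1,  # P: mutual distrust
}

def expected_payoff(my_move, p_partner_cooperates):
    """Expected payoff of my_move, given an estimate of the partner's reliability."""
    return (p_partner_cooperates * PAYOFF[(my_move, "cooperate")]
            + (1.0 - p_partner_cooperates) * PAYOFF[(my_move, "defect")])

def should_trust(p_partner_cooperates):
    """Trust (cooperate) only if its expected payoff beats withholding trust."""
    return (expected_payoff("cooperate", p_partner_cooperates)
            > expected_payoff("defect", p_partner_cooperates))

if __name__ == "__main__":
    for p in (0.2, 0.5, 0.9):
        print(f"partner reliability {p:.1f}: trust? {should_trust(p)}")
```

With these one-shot payoffs, defecting has the higher expected value no matter what reliability you assign to the partner, which is exactly the dilemma: a single-encounter analysis says never trust. The calculation changes once interactions repeat and reputations accumulate, which is roughly the territory a risk-management framing opens up.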

Several of the articles try to shed light on the robot trust issue through psychology experiments that ask a user whether to trust a robot in various situations. Akash et al. try to find physical correlates of trust, that is, physiological signals that correlate with human judgments of trust. They talk about “sensing human trust”, but of course this is really about signals that correlate with a person’s feelings of trust at a given moment. It may bear only an indirect relation to long-term or situational judgments of trust. Still, it’s an important area that had not previously been studied.

Holbrook tracks whether “priming” people with violent images decreases trust in robots, and it does. Violent images in the news, in video games, and elsewhere can have an effect on our default mental model of behavior, creating paranoia and distrust. I only wish they had looked at the positive side as well. Anything we can do to increase positivity in our mental environment might be conducive to trust, and the benefits it confers. One reason to treat robots with kindness and compassion, even if they can’t “appreciate” it, is that the mental tendencies might “spill over” into our interactions with people.

Culture is also an important factor. Chien et al. establish that the views of trust held by different national cultures affect people’s willingness to trust robots. It is said that the Japanese are less frightened by, and more welcoming of, robots than Americans. That’s consistent with what I have observed in trips to Japan and interactions with Japanese people. Some say it may be because the traditional Japanese Shinto religion ascribes souls to inanimate objects. Japan doesn’t figure into this article, but they examine cultural views about face, honor, and dignity in the US, Taiwan, and Turkey.

A lot of modern work in robotics has moved on from simply trying to reproduce human behavior to having humans and robots work together on a task. Wang et al. explore such a mixed-initiative interaction in robot motion planning. Co-workers have to have considerable trust in each other. But it’s also useful in collaborative situations to have less than perfect trust, so that co-workers can check each other’s work for mistakes, prevent “groupthink”, and preserve diversity of viewpoints.

Finally, since nobody, human or robot, is perfect, trust will occasionally go awry, despite the best of intentions. Given that realization, it’s prudent to have strategies for repairing trust when it breaks down; in the long term, good repair strategies beat insisting on perfection. Baker et al. explore this important issue. To err is robotic; to forgive, divine.

As with all computer technology, trust in a robot is a proxy for trust in the people who created it. If the programmers and organizations that produce a robot act in the interests of its users, then the users will be justified in trusting it. The danger comes when the robot becomes a vehicle for companies or governments to wrest money or power from users by exploiting their willingness to trust it. It’s especially important that early applications of robots bend over backwards to earn, and to deserve, that trust. Otherwise, robotics will get a bad reputation that will be hard to shake. It’s up to us to ensure that robots cooperate with people, and don’t compete with them.

And maybe in that, there’s an opportunity that might not appear obvious at first. If we’re thoughtful, careful, and empathetic, we might be able to use what we learn from establishing trust between people and robots, to shed light on the age-old problem of how to establish trust between people. And wouldn’t that be great? Trust me on this one.

ACM Transactions on Interactive Intelligent Systems (TiiS) is a journal dedicated to publishing original research combining AI and human-computer interaction.