On AI Anthropomorphism: Commentary by Pattie Maes

by Pattie Maes (MIT Media Lab, US)

Chenhao Tan
Human-Centered AI
Apr 10, 2023

Editor’s note: This article is a commentary on “On AI Anthropomorphism,” written by Ben Shneiderman (University of Maryland, US) and Michael Muller (IBM Research, US). We have reproduced the commentary in its original form, sent as an email to Ben Shneiderman, and added a references section at the end.

I agree with Ben that we need to tread very conservatively with anthropomorphism of systems, and, for the record, if you look at our historical debates, I have always held that position! I believe in the potential of smart systems that aid users by making suggestions, recommendations, and, in certain cases, even performing tasks on their behalf. However, that does not mean that those systems need to be personified, and I believe it is better not to do so in the majority of cases, so as not to mislead users. As an example of highly successful smart systems, recommendation systems use AI to support users, but they are not typically personified.

I believe that by default, we should err on the side of NOT personifying, because it only worsens users’ natural tendency to see computer systems as being intelligent in the same way as people, even when they are not. I believe the approach taken by Bing Chat, for example, and some other chat systems to emphasize personification by having the AI system use I/me pronouns, refer to its own desires and feelings, and use affect emojis is totally wrong-headed. I do not buy Muller’s argument, which essentially comes down to “people naturally personify computer systems, animals, etc., so therefore it is fine to play into this.” I also do not buy his second argument that it is justified because we need a metaphor we can use to help people relate to AI systems, and this is the only applicable one. As Ben mentioned, it is entirely possible for a conversational system not to use first-person pronouns and other elements of personification.

My reasons for pushing for a more conservative approach to personification are, first of all, that I am concerned about people developing wrong models of these AI systems, assigning them deep understanding, judgment, intention, goals, and other uniquely human aspects that they actually do not possess, and interpreting the AI’s responses in light of those beliefs. Second, I am very concerned about the emotional effects that personified chat agents have on users, as users naturally start developing feelings for smart systems, especially when anthropomorphism is maximized. Witness, for example, the recent uproar when the company Replika AI changed the behavior of their chat systems, with people reporting depression, outrage, etc., because they felt their (AI) friend was suddenly behaving less romantically toward them.

I am not going as far as to say we should ban all personification of AI systems. Instead, I believe we have to understand its effects on people more deeply, and we have to think more carefully about the use cases where it may be justified and positively impact desired outcomes. For example, when agents are used to help people develop personal skills, e.g., conversation skills like turn-taking, empathy, facial expressions, eye gaze, etc., it may be justified. Increasingly, whether we like it or not, people are talking to agents for friendship and companionship or just as a release for their feelings and frustrations, in a similar way to a diary. My colleague Prof. Cynthia Breazeal showed through a survey that many people prefer to talk to an AI system about very personal issues, rather than to a human, because the AI system will not judge them, while at the same time giving them human-like comfort. Discussions of the first-ever, extremely simple chatbot “Eliza,” introduced in 1966, reported similar sentiments from users: sometimes having a conversation with an AI about personal problems and issues can bring relief and insight.

I do agree with Michael that we need to study the effects of personification more deeply. It is one thing to have a wise elder like Ben tell people that there are better design possibilities, or to have the two or three of us debate these issues, but it is even better if we can show through quantitative and qualitative user studies what the actual effects are. For example, we just published an article at FIE 2022 showing that personification using the likeness of people a learner relates to (liked or admired individuals) makes a difference in outcomes in the context of teaching systems (Pataranutaporn et al., 2022).

We also presented an extended abstract at the International Conference on Computational Social Science (IC2S2) 2022 (Danry et al., 2022) showing that people too readily accept AI advice, especially when the AI gives a believable explanation. The consequence is that people become less accurate at discerning true from false statements when assisted by a malicious AI that misleads them with bogus but believable explanations.

This response to conversational AI advice is reminiscent of psychologist Prof. Ellen Langer’s studies on people accepting unfair behavior from other people as long as those people gave a believable-sounding excuse (even if that excuse was bogus, e.g., people accepted a person cutting into a long line to make a photocopy when they simply said “my apologies, but I really have to make a photocopy”). The more human-like and believable an AI’s explanations are, the less likely people are to actually think deeply about the AI’s recommendation and its explanation. AI experts today are so focused on finding ways for AI to provide explanations, but the risk is that people too readily adopt the AI’s advice when given an explanation, rather than thinking for themselves. I have not read Natale, whom Ben mentions, but he may present a similar argument. All of this relates to Tversky and Kahneman’s notion of biases in human behavior, which can now (purposefully or unknowingly) be exploited by conversational systems. In a full paper to be presented at CHI ’23 (Danry et al., 2022), we argue that one way to prevent users from relying too much on an AI’s advice is to have the AI system engage them in thinking and talking about the problem at hand, rather than simply and mindlessly accepting the AI’s recommendation.

With respect to Ben’s quote “By elevating machines to human capabilities, we diminish the specialness of people”, I would say that indeed, machines are not people, and should not be presented as such. However, I disagree with Ben’s choice of the words “elevate” and “specialness.” I sadly believe that it is already the case that, while not intelligent in the same way as people, AI systems are surpassing us in performance in many domains. I disagree with Michael and Ben on where I would put AI systems. They are not in between us and dogs in terms of intelligence. They are an entirely different type of “smarts” that is already superior in performance to people in many of the most human tasks. Whether they are intelligent or not in the same way as people is not the important question. I personally do not understand why everyone is so focused on that. What is clear is that they are surpassing us in many tasks that require people to think hard and to have a lot of experience, knowledge, skills, etc. Maybe this is a topic for another debate, but I believe people will soon no longer be in a dominant position: AI is already getting better than us at hard tasks like influencing and manipulating people, or discovering new scientific insights, even though it cannot explain its methods and discoveries in human-understandable terms. It is one reason why I signed the open letter demanding a pause on new LLMs and large-scale deployments. I am concerned.

References

  • Danry, V., Pataranutaporn, P., Epstein, Z., Groh, M., & Maes, P. (2022). Deceptive AI systems that give explanations are just as convincing as honest AI systems in human-machine decision making. Extended Abstract. Presented at the International Conference on Computational Social Science (IC2S2) 2022.
  • Pataranutaporn, P., Leong, J., Danry, V., Lawson, A. P., Maes, P., & Sra, M. (2022, October). AI-Generated Virtual Instructors Based on Liked or Admired People Can Improve Motivation and Foster Positive Emotions for Learning. In 2022 IEEE Frontiers in Education Conference (FIE) (pp. 1–9). IEEE.
