The Impact of Robot Group Presentation Strategies on Mental Model Formation
This post is a summary of “You Had Me at Hello: The Impact of Robot Group Presentation Strategies on Mental Model Formation,” which was presented at HRI 2022 earlier this year and was coauthored by myself (Alexandra Bejarano), Samantha Reig, Priyanka Senapati, and Tom Williams. For more details, check out the video presentation shown at HRI.
From a user’s perspective, interactions with individual robots typically involve humanlike mind-body-identity associations, in which one mind controls one body and presents one identity. Interactions with groups of robots, however, are more complex: more minds, bodies, and identities may be involved, and the associations between them can break down and stray from humanlike configurations.
Specifically, a robot group may use identity performance strategies to present itself in ways that evoke certain user mental models and perceptions. The distinction between how identity is presented under different strategies is critical because the number of bodies and identities involved in a user’s mental model dictates where and how the user places trust (see Tom’s 2021 HRI paper on Deconstructed Trustee Theory).
Thus, we sought to understand 1) how different presentation strategies used by robot groups might impact user mental models of robot groups and their constituent minds, bodies, and identities and 2) how those strategies and the mental models they evoked might impact entitativity (a key dimension of group perception that substantively mediates the quality of interaction).
In this work, we identified five key group identity observables (design cues) that may lead users to infer relationships between the minds, bodies, and identities of a group and to construct corresponding mental models: Speaking, Self-Reference, Other-Reference, Naming, and Name and Voice Distinctiveness. Then, through an online study, we explored how changes in those observables might impact the mental models people develop of robot groups during initial human-robot introductions.
Our findings demonstrated 1) how different observables lead observers to develop different mental models of the Intelligence Distributions and Social Relationships of robot groups and 2) how observable variations, and the mental models they evoked, influenced entitativity.