Tradeoffs Between Effectiveness and Social Perception When Using Mixed Reality to Enhance Gesturally Limited Robots

Thao Phung
Mines Robotics
Apr 27, 2021
Categories of mixed-reality gestures

This post summarizes our research paper “What’s The Point? Tradeoffs Between Effectiveness and Social Perception When Using Mixed Reality to Enhance Gesturally Limited Robots” by Jared Hamilton*, Thao Phung*, Nhan Tran, and Tom Williams. This work was published and presented at HRI 2021. *Equal Contribution

For robots to communicate effectively with humans, they must be capable of natural, human-like human-robot dialogue. And, in contrast to dialogue agents and chatbots, interactive robots must be able to communicate with sensitivity to situated context.

Deixis is a key component of human-human communication. Humans begin pointing while speaking even in infancy. Deictic gestures help us express our thoughts, especially in environments where verbal communication is difficult, such as noisy factories. And widespread evidence has been found in the human-robot interaction (HRI) literature for the effectiveness of robots' use of deictic gestures such as pointing.

However, many robotic platforms lack the arms, heads, and eyes needed to generate expressive cues or deictic gestures. This is especially true for mobile bases such as those used in warehouses, and free-flying drone platforms.

While these types of robots may not be designed to be sociable, they still need gaze and gestural capabilities for situated communication. In recent work, HRI researchers have begun to explore how mixed reality gestures can be used to provide nonverbal capabilities to these sorts of robots.

In previous work, our lab has started to explore the space of deictic gestures that robots can use in mixed reality environments, including physical gestures, MR gestures generated through head-mounted displays (HMDs), and MR gestures generated through projectors.

In our recent work we explored two different categories of MR deictic gestures for armless robots: a virtual arrow positioned over a target referent and a virtual arm positioned over the gesturing robot.

Virtual arrow pointing to a holographic sphere
Virtual robot arm gesturing to holographic spheres

The central idea of this work is that we would expect these two gestures to differ in important ways. First, we predicted that virtual arrows should simply be more task-effective, because they directly and unambiguously pick out the robot's target. There's no need for the user to estimate where the robot is pointing.

But second, we predicted that virtual arms might have a competing set of benefits. Specifically, the use of the virtual arms gives the robot a more anthropomorphic morphology. And moreover, the virtual arms keep drawing the user’s eyes back to the robot to see where it’s pointing, whereas with virtual arrows the user doesn’t actually need to look at the robot at all, so we’d also expect the robot using virtual arms to have increased social presence.

To test these hypotheses, we ran an experiment with a 2x2 within-subjects design in which two independent variables were manipulated: Gesture Type and Referent Distance.
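As a concrete illustration of this design, the four cells of a 2x2 within-subjects experiment can be produced by crossing the two factors, with condition order randomized per participant to control order effects. This is a minimal sketch; the level labels below are illustrative assumptions, not the paper's exact values, and a real study might use a balanced Latin square instead of a plain shuffle.

```python
from itertools import product
from random import Random

# The two manipulated factors; level labels are illustrative assumptions.
GESTURE_TYPES = ["virtual_arrow", "virtual_arm"]
REFERENT_DISTANCES = ["near", "far"]

def make_conditions():
    """Cross the two factors into the four within-subjects conditions."""
    return list(product(GESTURE_TYPES, REFERENT_DISTANCES))

def order_for_participant(participant_id, seed=0):
    """Return a reproducible per-participant ordering of all four conditions.

    Each participant experiences every cell (within-subjects), but in a
    shuffled order to mitigate order and fatigue effects.
    """
    conditions = make_conditions()
    Random(f"{seed}-{participant_id}").shuffle(conditions)
    return conditions
```

Because every participant sees all four conditions, each serves as their own control, which is why within-subjects designs need fewer participants than between-subjects ones.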

On the one hand, non-ego-sensitive gestures (virtual arrows) led to higher accuracy and faster reaction times.

On the other hand, ego-sensitive gestures (virtual arms) led to greater perceived anthropomorphism, likability, social presence, warmth, and competence.

In summary, our results suggest that gestures like virtual arrows (what we term “non-ego-sensitive allocentric gestures”) enable faster reaction time and higher accuracy, while gestures made with virtual arms (what we term “ego-sensitive allocentric gestures”) enable higher perceived social presence, anthropomorphism, and likability. This presents a clear design trade-off: our results suggest the need for different mixed reality gestures to be used in different application domains depending on the nature of the task and the intended relationship the designer seeks to establish between human and robot.

Full Paper: What's The Point? Tradeoffs Between Effectiveness and Social Perception When Using Mixed Reality to Enhance Gesturally Limited Robots. Jared Hamilton*, Thao Phung*, Nhan Tran, and Tom Williams. ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2021.
