You, We and I, Robot: How Social Cognitive Theory Explains How Humans Learn from Machines

UF J-School
CJC Insights

--

In 1950, Alan Turing posed perhaps the most provocative question of the 20th century: Can machines think?

Turing’s now-famous “imitation game” laid the groundwork for what we now know as the Turing Test, a benchmark for artificial intelligence. But Turing’s vision extended beyond mere mimicry. He grasped the potential for machines to not just imitate human behavior, but also to engage in complex, intelligent interaction.

More than half a century after Turing’s provocative question, Kun Xu, assistant professor in emerging media at the University of Florida’s College of Journalism and Communications, explores a new frontier in his recent study.

Xu’s research focuses on human-computer and human-robot interaction and his latest work investigates whether humans can learn from machines as AI becomes increasingly sophisticated.

To investigate this question, Xu designed an experiment in which 128 participants interacted with a humanoid robot that demonstrated recycling behaviors. The robot, which stands just under two feet tall, is capable of vision recognition, speech recognition, and humanlike behavior.

Xu programmed the robot to sort items into recycling bins, but he also varied the outcomes it presented. In some cases, the robot highlighted positive environmental impacts after its recycling behavior. In others, it emphasized potential negative consequences, like poor working conditions in recycling plants. And sometimes, the robot offered no commentary at all. Participants were not told whether the robot’s recycling behavior was appropriate.

The study also cast the robot in different social roles. In some interactions, the robot was presented as an expert “instructor” on recycling. In others, it was framed as a collaborative “fellow,” learning alongside participants.

The results shed light on human preferences in human-robot interaction. Participants proved significantly more likely to imitate the robot’s choices when it demonstrated positive outcomes or no outcome information, compared to negative scenarios. In other words, the humans learned from the robot in much the same way we learn from fellow humans.

Moreover, participants were more apt to follow the robot’s lead when it was positioned as an authoritative instructor rather than a peer learner. This finding suggests that the power of perceived expertise extends to both human and machine teachers.

Xu’s research breaks new ground in our understanding of human-machine interaction and underscores the increasingly complex relationship between human and artificial cognition. It implies that as AI evolves, robots won’t merely be passive recipients of programming, but active agents shaping human knowledge and behavior. Because robots can be programmed to perform sophisticated pro-social behaviors, such as environmental protection, more diversified methods of learning could benefit technology users.

The original paper, “A mini imitation game: How individuals model social robots via behavioral outcomes and social roles,” was published in Telematics and Informatics, Volume 78, March 2023.

Author: Kun Xu.

This summary was written by Gigi Marino.

--


News and insights from the College of Journalism and Communications at the University of Florida (@UF).