The Charisma of a Robot

Sophie Burke
Published in wpihci · Apr 29, 2020


I still remember my best teachers and my worst teachers. My worst teacher was the law professor who simply paced in front of the chalkboard, spouting his knowledge of constitutional law, without making eye contact. I remember thinking: this is bizarre. He’s acting like we’re not even here.

I grew to resent my time in the classroom. I’d watch him pacing, pontificating, while stewing: if I am doing him the favor of being in his class, shouldn’t he at least acknowledge me?

I would have been better served by a robot.

But how much better served? Scientists from the University of Wisconsin attempted to answer that question.

Here’s how they did it. In 2012, the research team of Daniel Szafir and Bilge Mutlu paired 30 participants with a cute yellow robot that attempted to teach them a Japanese story, “My Lord Bag of Rice.”

Right away, this whole scenario reminds me of a Devo video from 1980 — anachronistic and sci-fi at the same time.

[Image: the robot used by the University of Wisconsin researchers]

But wait — it gets nerdy.

Let’s take a look at how real human instructors work. What the law professor should have been doing is providing immediacy cues. What are immediacy cues? They are what speakers do to increase the perceived closeness between themselves and their audience. (Remember how I said it seemed like my law professor wasn’t even aware we were in his classroom? He was psychologically far away.)

So what do humans do when they are cueing immediacy? They can speak louder, stand closer, make eye contact, tell a joke; there’s a bevy of actions human teachers can employ to increase student attention. And why does this work? Two theories: when students perceive closeness, their arousal rises, which increases learning, and their motivation rises, which also increases learning.

So Szafir and Mutlu (the University of Wisconsin scientists) had a couple of questions. If they used a robot teacher, how could they get that robot to understand whether its student was engaged? And let’s say they figured out how to program the robot to detect engagement: how then would they program the robot to choose a response?

The rest of this blog post details their attempt to make a robot a better teacher than my law professor.

Szafir and Mutlu needed a way for their robot to receive data about participants’ engagement levels. The team chose EEG, specifically a NeuroSky MindSet headset, to send data from a participant to the robot. They used a standard formula for task engagement: divide the power of the beta band by the sum of the alpha and theta bands.
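
In code, the calculation is tiny. Here’s a minimal sketch; the function name and the sample band-power values are my own stand-ins, not anything from the paper:

```python
def engagement_index(alpha: float, beta: float, theta: float) -> float:
    """Task-engagement index: beta power over the sum of alpha and theta.

    Relatively strong beta activity suggests an alert, engaged state;
    strong alpha and theta suggest relaxation or drowsiness.
    """
    return beta / (alpha + theta)

# Made-up band-power readings, just to show the shape of the calculation
print(engagement_index(alpha=42.0, beta=55.0, theta=31.0))  # ~0.75
```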

So engagement is measured by Epsilon, the output of this formula. We’ve got an Epsilon level; now what? The team’s next goal was to find the level below which a participant could be judged to be losing attention.

Every 15 seconds, the system would sample the signal to check the student’s attention level, and if it dropped, the robot would spring into action. Well, not exactly spring, because these robots don’t spring, but it would display an immediacy cue like a gesture or a change in volume.

The team implemented this as a pair of thresholds: a derivative threshold (below which you shall not drop!) that caught a sharp dip as it happened, and a least-squares regression threshold that suggested whether a precipitous drop was likely to occur in your future. This was how the system monitored and responded to the user’s brain states.
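
To make that concrete, here’s a rough sketch of what such a monitor could look like. This is my own reconstruction, not the team’s code: the window size and threshold values are invented, and the real system’s math surely differs in detail.

```python
import statistics

WINDOW = 8                # recent 15-second samples to keep (assumed value)
DERIV_THRESHOLD = -0.05   # flags a sharp drop between consecutive samples (assumed)
SLOPE_THRESHOLD = -0.02   # flags a sustained downward trend (assumed)

def should_cue(samples: list[float]) -> bool:
    """Decide whether the robot should fire an immediacy cue.

    samples: engagement readings, oldest first, one every 15 seconds.
    """
    if len(samples) < 2:
        return False
    # Derivative-style check: did engagement just fall off a cliff?
    if samples[-1] - samples[-2] < DERIV_THRESHOLD:
        return True
    # Regression-style check: fit a least-squares line over the recent
    # window; a steep negative slope suggests a drop is on its way.
    window = samples[-WINDOW:]
    slope = statistics.linear_regression(range(len(window)), window).slope
    return slope < SLOPE_THRESHOLD
```

When should_cue returns True, the robot’s behavior layer would pick an immediacy cue to perform: raise its volume, gesture, shift its gaze.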

I can’t even imagine how my law professor would have reacted if he received the engagement levels of the first year students in his class. He might have fallen on the floor in a clump of tweed. Then again, he might not have noticed.

So how was the system tested? Thirty participants were divided into three groups of ten and split evenly by gender. The researchers created three conditions, kind of like how Goldilocks had three bowls of porridge to choose from — which would be the most delicious?

Anyway, in the first group the robot was programmed to occasionally tilt its head and gaze at the participant while ignoring their engagement levels. This was called the low-immediacy condition. In the second group, participants were taught by a robot running the monitor-and-respond program written by Szafir and Mutlu. This was called the adaptive response condition. The third condition featured a robot that, when teaching the participants, displayed immediacy cues at random times.

It turns out the most delicious bowl of porridge came from the adaptive response condition, where participants were taught by a robot programmed to detect and respond to their EEG signals. The lowest scores on a post-test came from the group that received minimal immediacy cues. At this point, I imagine Szafir and Mutlu sitting back and heaving a sigh of relief: they had done what they said they were going to do. They programmed a robot to display immediacy cues based on participants’ EEG data, and these cues improved learning and boosted arousal and motivation compared with the other groups.

But did everybody react in the same way to the robot? No! It turns out that female participants felt more rapport with the adaptive instructor. Why? Should we go there? To that place where women are more touchy-feely and empathetic? Well, that conversation is beyond the scope of this post.

All I know is that if I’d had a cute yellow robot teaching me constitutional law all those many years ago, I might have stayed in law school.

Source: Szafir, D., & Mutlu, B. (2012). Pay attention! Designing adaptive agents that monitor and improve user engagement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), pp. 11–20. ACM.
