Don’t Let Your Kids Give In to Robot Peer Pressure

Softbank’s Nao Robot. (Source: University of Plymouth)

Peer pressure can make us do all sorts of questionable things, from buying cars we can’t afford to getting downright violent, and kids are especially susceptible. That’s why a new study showing that robots can exert peer pressure on children is so unnerving.

In the experiment, described in the August 15, 2018 issue of Science Robotics, a team of researchers from Germany and the UK showed that children aged 7–9 were far more likely than adults to go against their better judgement and give the wrong answers on a test when robots sitting with them did so first. This has implications for how we integrate robots into classrooms, use them in child care, therapy, and health care settings, and hand them to kids as toys.

Before we freak out about robot pied pipers leading our children towards destruction, let’s look at the science: 60 adults and 43 children were given a visual test in which they were shown a straight line and then asked which of three other lines looked to be the same length as the original. Both groups got the same robot peers: three of Softbank’s 58cm-tall Nao humanoid robots, bearing the decidedly non-human names Snap, Crackle, and Pop. The robots could swivel their heads to look at the computer screen showing the lines, at the researcher taking notes at the table, at one another, and at the subject. They were also programmed to make eye contact with the subjects, a behavior that could plausibly pressure someone into conforming with the robots’ answers. The goal was simple: apply that pressure to children and adults alike, and see whether it worked.

A subject takes the vision test with Snap, Crackle, and Pop. (Source: University of Plymouth)

The adults were tested in the presence of both human and robot “peers,” and while they sometimes gave the wrong answers after human peers did the same, they were not swayed by the robot “peer” group. The children, who were told they were taking an eye test, were tested only with robots, and were far more likely to give an incorrect answer if a robot did so first. Afterward, they were told that the robots had been trying to trick them. So at least there are now 43 children who know robots can be tricky. Let’s hope they spread the word.

So, children can be peer-pressured into giving incorrect answers on an eye test by humanoid robots. Any broader conclusions we draw from this research are just that: assumptions, extrapolations from the data at hand. And that’s how science works. We see the headlines, but rarely the research.

There’s no information given on what the subjects thought was at stake in the test, but it’s interesting to wonder whether the children would have given in to robot pressure even if it meant being labeled with a vision problem that might need follow-up. It is, of course, difficult and unethical to manipulate children in high-stakes situations for the sake of research, so we’ll have to see how many further experiments can be ethically conducted to back up this initial finding. Until we have enough evidence showing otherwise, we should assume that robots can influence children, and we should take precautions when developing interactive humanoid robots.

So what does it mean to have this information and do something about it? So often, articles on tech ethics leave us with the message that we should be doing more, but what does that look like in this case? Here are some ways to break it down:

- Talk to kids about what peer pressure is and how it can be so strong that even a robot can make kids do or say the wrong thing. Be careful not to imply that robots themselves are bad, just that they can be programmed to make people behave a certain way.

- There is evidence that robots can do a lot of good for children under the right circumstances and that sometimes peer pressure can be positive and encourage kids to do the right thing. But more research is needed.

- Keep in mind that technology is always moving forward and that trying to eliminate interactive robots is not a viable solution. Instead, we should try to learn as much as we can about what robots interacting with children are programmed to do and maintain a watchful eye and good communication with both our children and the robots’ owners and programmers.

- If a child in your care will be interacting with a robot (or even a computer program designed for interaction), ask about the goals of the project, how the data will be used, and how your child will be monitored while using the system, and ask up front for resources on handling any problems that might arise.

- Understand that we are in uncharted territory and that we don’t have many long-term studies about behavioral issues that may develop if children socialize with robots. But remember that we’re always employing new technology in the classroom and this isn’t something to fear, but rather to keep an eye on and ask questions about.

- Ask which company developed the robot, what other financial ties it has to the school, how much the robot cost, and whether it markets any goods to users.

- Don’t let technology freak you out. Robots aren’t replacing teachers any time soon; they’re merely a tool to augment a student’s education.

- There are plenty of interactive robots for sale. If you choose to buy one, look into whether it can be easily hacked, set parental controls (just as you would for social media), and check in with your child to see how they’re using it.

- Some kids benefit from forming an emotional bond with these machines, but at the end of the day it’s up to guardians to do a little research and a gut check to make sure they feel comfortable with how their children are responding.

We’ve long known that humans can be influenced by robots and that people attribute human qualities to humanoid robots; we can even be racist and sexist in the presence of brown or female humanoid robots. People form bonds with robots and feel bad for them when they are mistreated (even though the robots feel no pain). Take a look at this video from Boston Dynamics of an engineer hitting a robot with a hockey stick and try not to feel something (the poking begins at 1:27, but the whole video is worth marveling at). If we’re going to keep incorporating robots into our homes, workplaces, and schools, it’s critical to talk about the effects they have on us.