Curious Robots Will Teach Us About Ourselves

Machines programmed to explore could lead to smarter AI — and show us how to unlock capacities lying dormant in our own brains.

Illustration by Dominic Kesterton

Two robotic puppies sit on a baby play mat. One extends a paw and hits a dangling toy. It pauses and tries out its other limb a few times. After a few more experimental movements, it turns its head and lets out a cry like a plaintive jungle bird.

Nursery toys aside, these robotic pooches don’t seem to have much in common with human newborns. But in one important way, they are like people: They learn because they want to. And that is offering an unprecedented peek into what drives humanity’s fundamental inquisitiveness about the world.

Born fuzzy-eyed and barely in control of their limbs, human infants soon transform into mobile, chattering little creatures who play and experiment and invent. No one forces them to do this; given a basic level of stimulation and care, they just do. And neuroscientists have no idea how.

The robots might have the answer. Researchers hope that by creating algorithms that result in a robotic version of curiosity, they can get new ideas about how the human brain pulls off the same trick. An obvious goal is to build smarter, more flexible artificial intelligence, leading to machines that can extrapolate and learn without any outside prompting. But this work might also crack the code of brain processes that drive fundamental human behaviors: When and why are we curious? How do the interactions between our body and the environment drive our curiosity? And is there any way to manipulate curiosity, to make people more inquisitive or to make certain subjects pop?

“This is potentially a very transformative thing for the field of artificial intelligence, and even a little bit scary, because it raises the prospect of having machines that not only do something that you program them to do, but that can discover new patterns,” says Jacqueline Gottlieb, a neuroscientist at Columbia University’s Kavli Institute for Brain Science. Gottlieb is collaborating with the developer of the puppy robots, Pierre-Yves Oudeyer of the French Institute for Research in Computer Science and Automation, on a project that will take hypotheses about curiosity generated in robotics experiments and test them in humans.


When robotics and cognitive science meet, it is often the latter informing the former: Researchers in artificial intelligence frequently look to human abilities and cognitive processes to try to mimic them in artificial systems. Oudeyer and his colleagues are interested in building robots that learn more like people do, too. But the main push of their Neurocuriosity Project, which launched in 2013 and was just funded for its second phase by an international project called the Human Frontier Science Program, is to turn that robotics-neuroscience relationship on its head. “The goal is to help us understand better what’s happening in the human brain,” Oudeyer says.

Robots are useful for this work because, unlike computer simulations, they have bodies. They perceive. They interact. Perhaps most importantly, they can be programmed to develop over time. And that process may be indispensable to intelligence. “We don’t have an example of an intelligent system [in nature] that did not go through a developmental period,” says Alexander Stoytchev, an associate professor of electrical and computer engineering at Iowa State University who isn’t involved in Gottlieb and Oudeyer’s work.

Robots explore a baby mat in an experiment led by Pierre-Yves Oudeyer.

A paper published in 2016 in the Proceedings of the National Academy of Sciences even posits that intelligence increases with the time it takes an animal to reach maturity: Humans take longer to wean their infants than baboons, which wean later than lemurs, for example. Chimpanzee babies walk earlier than human babies, which means human babies have several more months in the sitting-and-playing stage, mastering things like hand-eye coordination and cause-and-effect.

What piqued the interest of Oudeyer and his colleagues was the fact that human infants develop in idiosyncratic ways. All babies tend to flail around, then roll over, then crawl, then walk. But some walk at 10 months of age and some walk at 14 months. Some crawl on all fours and others scoot around on their bottoms. Whatever system the human brain uses, it’s both organized and open-ended.

It’s also driven by intrinsic curiosity, in very mysterious ways. Typically, Gottlieb says, people who research decision-making frame it as a process of weighing the options and picking the one that seems most valuable. But that doesn’t quite explain basic curiosity, because babies don’t know the value of their choices until long after they make them. “How can we make choices without knowing what the hell is going to happen?” Gottlieb says.

To explore that idea, the researchers started their puppy robots off with very little. The bots could produce simple limb and head movements known as motor primitives. They could imitate the moves or sounds of a fellow robot. And they could keep track of what their movements did to their bodies and the surrounding environment. But that was about it.

The results, first published in 2006 and 2007, were remarkably like human infant development. The robots made movements that seemed arbitrary at first, but they soon showed increasing purposefulness. Like human infants, they tended to follow a general path of development, from simple tasks to more complex activities (grasping toys, then batting them around, then focusing on vocalizing, another “motor primitive” the researchers installed to study the development of language).


Crucially, the researchers found, this human-like development emerged when the robots were programmed to be motivated by their own progress: The more information the robots were likely to gain by trying out a new motion or vocalization, the more likely they were to give it a shot. This led them to move on from trying overly simple motions (no new information to be gleaned there) as well as overly complex ones (there’s no point in trying to waltz if you can’t even crawl). In other words, the intrinsic reward for learning — for having curiosity, basically — is the jolt that comes from unlocking just the right amount of information about the world with your actions.
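The selection rule described above can be sketched in a few lines of Python. This is an illustrative toy, not the code Oudeyer’s team used: the agent, the activity names, and the error curves are all invented for the example. Each activity has a prediction error that may or may not shrink with practice, and the agent always picks the activity whose error has been dropping fastest, i.e. the one where it is currently making the most learning progress.

```python
class LearningProgressAgent:
    """Toy curiosity mechanism: at each step, practice whichever activity
    currently shows the fastest drop in prediction error (learning progress)."""

    def __init__(self, activities, window=5):
        self.activities = activities                  # name -> error as a function of practice count
        self.practice = {name: 0 for name in activities}
        self.errors = {name: [] for name in activities}
        self.window = window

    def progress(self, name):
        errs = self.errors[name]
        if len(errs) < 2 * self.window:
            return float("inf")                       # sample every activity a few times first
        old = sum(errs[-2 * self.window:-self.window]) / self.window
        new = sum(errs[-self.window:]) / self.window
        return old - new                              # positive while error is falling

    def step(self):
        name = max(self.activities, key=self.progress)
        self.practice[name] += 1
        self.errors[name].append(self.activities[name](self.practice[name]))
        return name

# Three hypothetical activities with different learnability profiles:
activities = {
    "too_simple":  lambda n: 0.0,                       # already mastered: no error, no progress
    "just_right":  lambda n: max(0.0, 1.0 - 0.05 * n),  # error shrinks steadily with practice
    "too_complex": lambda n: 1.0,                       # error never improves
}

agent = LearningProgressAgent(activities)
counts = {name: 0 for name in activities}
for _ in range(60):
    counts[agent.step()] += 1

# The "just right" activity ends up with the most practice; the agent samples
# the hopeless and the trivial activities briefly, then leaves them alone
# while there is still progress to be made elsewhere.
```

The key design choice is that the reward is the *change* in error, not the error itself: an activity that is already mastered and one that is unlearnable both yield flat error curves, so both score zero progress, which is exactly the behavior the article describes.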

There is evidence that humans, too, display a sort of “just-right” Goldilocks preference for stimuli. A 2012 paper in the journal PLoS ONE, for example, found that 7- and 8-month-olds preferred to look at stimuli that were of midrange complexity.

However, it’s plausible that a mechanism that mimics human development in robots actually has nothing to do with how humans really learn. That’s why Gottlieb and her team are now trying to test in humans whether curiosity is indeed fueled by making progress at learning. The researchers are designing games of varying difficulty that people will explore on their own. They’ll then test the extent to which people’s choices in the games are dependent on their learning progress.

It’s not easy to test something as nebulous as curiosity in the lab, but the researchers are taking their own baby steps, driven forward by the lure of understanding just a little bit more. “It hasn’t been easy. We’ve been at it for a while,” Gottlieb says. “I think we’re learning from our mistakes, and we’re refining, and we’re getting somewhere.”

By exploring various movements, this robotic dog eventually learns how to crawl forward.