Google researchers teach a robot how to walk
The team at Google Robotics built on an improved version of a reinforcement learning technique it first tested a year ago
It takes an average human about 12 months to learn to walk, and walking is an equally arduous task for robots. But thanks to a new technique from researchers at Google Robotics, the concept of autonomously learning robots may be closer to reality. The method builds on research from a year earlier, in which the team figured out how to get a robot to learn in the real world.
The method employs reinforcement learning, a type of machine learning that borrows concepts from psychology: the robot improves through trial and error. The catch is that existing reinforcement learning algorithms require constant human intervention; someone has to step in every time the robot falls down or wanders out of its training environment.
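The trial-and-error idea at the heart of reinforcement learning can be illustrated with a minimal sketch. This is a generic tabular Q-learning toy on a one-dimensional track, not the Google team's algorithm; every name and number here is illustrative:

```python
import random

# Toy environment: a 1-D track; the agent starts at 0 and is rewarded
# only for reaching the far end (position 4).
TRACK_LEN = 5
ACTIONS = [-1, +1]  # step backward / step forward

def step(pos, action):
    new_pos = max(0, min(TRACK_LEN - 1, pos + action))
    reward = 1.0 if new_pos == TRACK_LEN - 1 else 0.0
    done = new_pos == TRACK_LEN - 1
    return new_pos, reward, done

# Tabular Q-learning: the agent learns purely from its own trials.
random.seed(0)
q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    pos, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit current estimates.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(pos, act)])
        new_pos, r, done = step(pos, a)
        best_next = max(q[(new_pos, act)] for act in ACTIONS)
        q[(pos, a)] += alpha * (r + gamma * best_next - q[(pos, a)])
        pos = new_pos

# After training, the greedy policy steps forward everywhere it matters.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK_LEN)}
print(policy)
```

Note that nothing in the loop tells the agent how to reach the goal; the behavior emerges from rewarded trials, which is exactly why a physical robot running such an algorithm needs so many attempts and, traditionally, so many human-assisted resets.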
The new study aims to address this shortcoming. The researchers' innovation allows the robot to train without any external help: using tweaked state-of-the-art algorithms, a four-legged robot learned to walk forward, backward, and sideways on its own.
“I think this work is quite exciting. Removing the person from the process is really hard. By allowing robots to learn more autonomously, robots are closer to being able to learn in the real world that we live in, rather than in a lab.”
~ Chelsea Finn, an assistant professor at Stanford
The more efficient algorithm learns from fewer trials and therefore makes fewer errors. Training the robot to walk directly in a real-world environment sidestepped the challenge of modeling that environment, and the robot took barely a couple of hours to start walking. The real world also provided natural variation in terrain (inclines, steps, and flat ground with obstacles), giving the robot a chance to adapt to similar environments it may encounter later.
The researchers used two techniques to train the robot. First, it was confined to the terrain it was exploring while training on multiple maneuvers: if it reached the edge of the bounded area walking forward, it reversed direction and practiced walking backward instead. Second, its trial movements were constrained to minimize damage from repeated falls. When it fell anyway, a separate hard-coded algorithm enabled it to stand back up.
These improvements to the reinforcement learning algorithm enabled the robot to walk autonomously across several different surfaces in the test runs: flat ground, a memory-foam mattress, and a doormat with crevices. The work could eventually be useful for applications that require robots to navigate unknown terrain without any help.
Finn, who is also affiliated with the search engine giant, is excited about the research but critiques the setup: it relies on a motion-capture system mounted above the robot to determine its location, something that would not be available in the real world.
The team hopes to adapt the algorithm to different, and multiple, robots, enabling them to learn at the same time in the same environment. The complete results of the research were published on arXiv.