OpenAI takes the robotic imitation of human behavior to a whole new level

OpenAI, the Elon Musk-backed nonprofit artificial intelligence research lab, has announced a new milestone in training robots. The team is working with a new algorithm called one-shot imitation learning, which lets a human train a robot by first demonstrating a task in virtual reality.


In the video below, a person teaches a robotic arm to stack a series of colored cube-shaped blocks by first performing the task manually in a VR environment. The whole system is powered by two neural networks. The first, a vision network, takes a camera image and infers the spatial positions of the objects relative to the robot. Notably, this network was trained only on simulated images, which means it could operate in the real world before it had ever actually seen it. The second, an imitation network, reproduces whatever task the demonstrator shows it by scanning through the recorded actions and observation frames to decide what to do next.
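The idea of "scanning through recorded actions and observation frames" can be sketched as a simple attention mechanism: compare the robot's current observation to every frame of the demonstration, and blend the recorded actions according to similarity. The function name, observation encoding, and similarity measure below are illustrative assumptions, not OpenAI's actual architecture:

```python
import numpy as np

def soft_attention_policy(demo_obs, demo_actions, current_obs, temp=0.1):
    """Toy sketch of an imitation policy attending over a demonstration.

    demo_obs:     (T, D) array of observation vectors from the demo
    demo_actions: (T, A) array of the actions taken at those frames
    current_obs:  (D,) the robot's current observation
    temp:         softness of the attention (hypothetical parameter)
    """
    # Distance from the current observation to each demonstration frame.
    dists = np.linalg.norm(demo_obs - current_obs, axis=1)
    # Softmax-style weights: closer frames get more influence.
    weights = np.exp(-dists / temp)
    weights /= weights.sum()
    # Output the similarity-weighted blend of the demonstrated actions.
    return weights @ demo_actions

# Usage: a 3-frame demo in a 1-D observation space.
demo_obs = np.array([[0.0], [1.0], [2.0]])
demo_actions = np.array([[10.0], [20.0], [30.0]])
action = soft_attention_policy(demo_obs, demo_actions, np.array([1.0]))
```

Because the policy conditions on the current observation rather than replaying the demonstration verbatim, its movements can differ from the demonstrator's while still completing the same task.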

“Our robot has now learned to perform the task even though its movements have to be different than the ones in the demonstration,” explains Josh Tobin, a member of OpenAI’s technical staff. “With a single demonstration of a task, we can replicate it in a number of different initial conditions. Teaching the robot how to build a different block arrangement requires only a single additional demonstration.”

The model is currently a prototype, but the concept could help researchers in the long run: it could be used to teach robots more complex tasks in the future without involving any physical elements at all. OpenAI's long-term plan is to give the AI the ability to adapt to unpredictable changes in its environment.
