Fei-Fei Li’s Stanford Team Is Crowdsourcing Robot Training

Synced · Published in SyncedReview · Nov 2, 2018 · 4 min read

Sorting a bunch of differently coloured toy trucks and action figures seems like child’s play, right? Unfortunately, this remains a challenging task in the world of machine learning. So why not have humans simply show the machines how to do it?

This is the inspiration behind a new research project led by Stanford Artificial Intelligence Lab Director Fei-Fei Li and her husband, Stanford Associate Professor Silvio Savarese. The project introduces two new global platforms — RoboTurk and Surreal — designed to provide high-quality task demonstration data to help researchers working in robotic manipulation.

RoboTurk is a crowdsourcing platform that collects human demonstrations of tasks such as “picking” and “assembly,” while Surreal is an open-source reinforcement learning framework that accelerates the machines’ learning process.

Research showed how humans can control the robot simulators

The “humans teaching robots” concept itself is not a new one. Recent advances in imitation learning have demonstrated its potential for robotic manipulation tasks. Last year OpenAI created a robotics system that can learn behaviors and actions from a single human demonstration in a virtual reality environment and then replicate them in the real world. Berkeley Artificial Intelligence Research (BAIR) meanwhile recently presented One-Shot Imitation from Watching Videos, a training process that enables robots to learn skills from a human example video and integrate what they have learned with their previous understanding of the target objects.

OpenAI robotics system

Collecting relevant, high-quality human demonstration data, however, remains a challenge. BAIR researchers noted that “imitation learning of vision-based skills usually requires a huge number of demonstrations of an expert performing a skill.” An earlier BAIR study on Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation found that effectively training a robot to reach for a single fixed object using raw pixel input could require up to 200 human demonstrations.
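
To see why the data appetite is so large, it helps to recall that the simplest form of imitation learning, behavioral cloning, is just supervised learning on the observation-action pairs recorded from a human, so the learned policy only covers situations the demonstrator actually visited. The sketch below is illustrative only (it is not BAIR's pipeline); the network architecture, observation and action dimensions, and loss are all assumptions.

```python
# A minimal behavioral-cloning sketch (illustrative only, not BAIR's actual pipeline):
# the policy is trained with supervised learning on (observation, action) pairs taken
# from human demonstrations, which is why data volume matters so much.
import torch
import torch.nn as nn

class PixelPolicy(nn.Module):
    def __init__(self, obs_dim=64 * 64 * 3, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                      # flatten raw pixels
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),        # predict a continuous arm command
        )

    def forward(self, obs):
        return self.net(obs)

def behavioral_cloning(policy, demos, epochs=10, lr=1e-3):
    """demos: iterable of (obs, action) tensor batches from human teleoperation.
    Assumed shapes: obs (batch, 3, 64, 64), action (batch, 7)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, action in demos:
            loss = loss_fn(policy(obs), action)   # regress demonstrated actions
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```

Real vision-based policies are convolutional and far larger than this toy network, which is exactly why hundreds of demonstrations per task add up quickly.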

Visible Hand — RoboTurk

Li’s team created RoboTurk as a crowdsourcing platform to obtain high-quality human demonstrations. Users can access RoboTurk through a smartphone or browser and remotely control robot simulations in real time with prompt feedback, an accessibility feature designed to expand the platform’s global user base. During a 22-hour pilot test of the system, over 2,220 successful demonstrations were collected on two tasks: Bin Picking and Nut-and-Peg Assembly. Moreover, the test showed that users could effectively control robot simulations running at the Stanford lab in California even from the other side of the planet.

Researcher Animesh Garg controls RoboTurk from atop the Swiss Alps
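
For a sense of the overall shape of such a teleoperation loop, here is a deliberately toy sketch: a user device streams small pose corrections, the server applies them to a simulated end-effector, and the resulting state-action trajectory is logged as a demonstration. Everything here (the 6-DoF delta format, the update rate, the data structure) is an assumption for illustration, not the actual RoboTurk protocol.

```python
# Toy sketch of a RoboTurk-style teleoperation loop (an assumption, not RoboTurk's code):
# a user device streams small 6-DoF pose deltas, the server applies them to a simulated
# end-effector, and the trajectory is saved as a demonstration.
import random
import time
from dataclasses import dataclass, field

@dataclass
class Demonstration:
    states: list = field(default_factory=list)
    actions: list = field(default_factory=list)

def read_device_pose_delta():
    """Stand-in for the real-time stream from a user's smartphone or browser."""
    return [random.uniform(-0.01, 0.01) for _ in range(6)]  # dx, dy, dz, droll, dpitch, dyaw

def teleoperate(steps=100):
    ee_pose = [0.0] * 6              # simulated end-effector pose
    demo = Demonstration()
    for _ in range(steps):
        action = read_device_pose_delta()
        demo.states.append(list(ee_pose))
        demo.actions.append(action)
        ee_pose = [p + a for p, a in zip(ee_pose, action)]  # stand-in for a simulator step
        time.sleep(0.01)             # the real loop runs at interactive rates
    return demo

if __name__ == "__main__":
    demo = teleoperate()
    print(f"Collected a demonstration with {len(demo.actions)} state-action pairs")
```

The real system additionally has to cope with network latency and stream rendered camera views back to the user’s phone or browser in real time.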

Invisible Hand — Surreal

The second important framework behind the project is Surreal: a scalable, open-source distributed reinforcement learning framework built with reproducibility in mind. To support continuous control, Li and her team implemented highly scalable distributed versions of PPO (Proximal Policy Optimization) and DDPG (Deep Deterministic Policy Gradient).
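
For reference, the core of PPO is a clipped surrogate objective that keeps each policy update close to the policy that collected the data, which is what makes large distributed batches workable. Below is a minimal, simplified rendering of that loss; it is not Surreal's code and omits the value-function and entropy terms a full implementation would include.

```python
# A minimal sketch of PPO's clipped surrogate loss (simplified; a full implementation
# adds value-function and entropy terms, advantage estimation, minibatching, etc.).
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """All arguments are 1-D tensors over a batch of sampled actions."""
    ratio = torch.exp(log_probs_new - log_probs_old)           # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()                # negate to maximize the surrogate
```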

Surreal consists of four distributed components: Actors, a Buffer, a Learner, and a Parameter Server. Actors generate experiences, which are stored in the Buffer; the Learner samples experiences from the Buffer to update the policy parameters, and the Parameter Server stores the updated parameters and serves them back to the Actors. This eliminates the need for global synchronization, and the decoupling of data generation from learning improves scalability.
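
A single-machine toy analogue of that decoupling might look like the following, with threads standing in for distributed processes and a shared dict standing in for the parameter server; this is purely a conceptual sketch, not Surreal's implementation.

```python
# Conceptual sketch of Surreal's decoupling of data generation and learning
# (single-process toy version; the real system distributes these components across
# machines, and a real buffer samples batches rather than draining a FIFO queue).
import queue
import random
import threading
import time

replay_buffer = queue.Queue(maxsize=10_000)   # "Buffer": stores experiences
parameters = {"weights": 0.0, "version": 0}   # "Parameter Server": latest policy params
stop = threading.Event()

def actor(actor_id):
    """Actors roll out the current policy and push experiences; they never block on learning."""
    while not stop.is_set():
        experience = (actor_id, parameters["version"], random.random())  # fake transition
        replay_buffer.put(experience)
        time.sleep(0.01)

def learner():
    """The Learner consumes experiences, updates parameters, and publishes a new version."""
    while not stop.is_set():
        batch = [replay_buffer.get() for _ in range(8)]
        parameters["weights"] += 0.001 * len(batch)   # stand-in for a gradient step
        parameters["version"] += 1

threads = [threading.Thread(target=actor, args=(i,), daemon=True) for i in range(4)]
threads.append(threading.Thread(target=learner, daemon=True))
for t in threads:
    t.start()
time.sleep(1.0)
stop.set()
print(f"Parameter version after 1 second: {parameters['version']}")
```

Because actors only write to the buffer and read the latest parameters, adding more actors scales data generation without forcing the learner to wait, which is the scalability property described above.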

More importantly, Surreal provides umbrella support for both on-policy and off-policy reinforcement learning algorithms. A four-layer computing infrastructure makes RL experiments easy to set up and run, and developers can deploy the Surreal system on any commercial cloud provider or on personal computers.

The “ImageNet” of Robotics?

The Stanford research has inspired some in the AI community to describe RoboTurk as an “ImageNet for robotics.” Together, RoboTurk and Surreal feed high-quality demonstration datasets into advanced reinforcement learning. It is hoped that in the future the platforms will be able to collect data on a far wider range of tasks, and that RoboTurk can be extended to support remote teleoperation of real robot arms. The research team also believes more sophisticated algorithms can be developed to leverage larger datasets for policy learning.

RoboTurk and Surreal could well emerge as an important integrated platform for reproducible research in robotic manipulation.

Journalist: Fangyu Cai | Editor: Michael Sarazen

Follow us on Twitter @Synced_Global for daily AI news

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
