Learn How End-to-End Deep Learning Steers a Self-Driving Car
As part of a complete software stack for autonomous driving, NVIDIA has created a deep-learning-based system, known as PilotNet, that learns to emulate the behavior of human drivers and can be deployed as a self-driving car controller. PilotNet is trained on road images paired with the steering angles a human driver produced while driving a data-collection car; it derives the necessary domain knowledge directly from this data. This eliminates the need for engineers to anticipate what is important in an image and to hand-code all the rules required for safe driving. Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, with or without lane markings.
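The supervised setup described above can be sketched in a few lines. This is a toy illustration only: a single linear layer stands in for PilotNet's convolutional network, and the image size, data, and learning rate are all illustrative assumptions, not NVIDIA's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data-collection drives: 256 flattened
# grayscale "road images" (16x32 pixels) paired with the steering
# angle a human driver applied for each frame.
images = rng.normal(size=(256, 16 * 32))
true_w = rng.normal(size=(16 * 32,))
angles = images @ true_w + 0.01 * rng.normal(size=256)

# End-to-end regression: predicted_angle = image . w
# (PilotNet's CNN would replace this linear map).
w = np.zeros(16 * 32)
lr = 0.1
mse_before = np.mean(angles ** 2)  # error of the untrained model
for step in range(500):
    pred = images @ w
    grad = images.T @ (pred - angles) / len(angles)  # MSE gradient
    w -= lr * grad

mse = np.mean((images @ w - angles) ** 2)
print(f"MSE before: {mse_before:.2f}, after: {mse:.4f}")
```

The point of the sketch is the training signal: no lane-detection or path-planning rules appear anywhere, only pixels in and a human-chosen steering angle out.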
In order to understand which objects determine PilotNet’s driving decisions, we built visualization tools that highlight the pixels most influential to those decisions. The visualizations show that PilotNet learns to recognize lane markings, road edges, and other vehicles, even though it was never explicitly told to recognize these objects. Remarkably, PilotNet learns which road features matter for driving simply by observing how human drivers respond to a view of the road.
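The paper's visualization method works on the network's own feature maps; as a simpler stand-in, the idea of "pixels most influential to the decision" can be illustrated with gradient-based saliency: rank each pixel by how much the predicted steering angle changes when that pixel changes. The linear model and the "influential rows" below are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "trained" linear steering model: only two rows of pixels (a
# stand-in for rows containing lane markings) influence the output.
h, w_px = 16, 32
weights = np.zeros((h, w_px))
weights[10:12, :] = rng.normal(size=(2, w_px))

def steering(image):
    return float(np.sum(image * weights))

# Saliency = d(angle)/d(pixel), computed by finite differences here
# to mimic what backpropagation provides in a real network.
image = rng.normal(size=(h, w_px))
eps = 1e-6
base = steering(image)
saliency = np.zeros_like(image)
for i in range(h):
    for j in range(w_px):
        bumped = image.copy()
        bumped[i, j] += eps
        saliency[i, j] = (steering(bumped) - base) / eps

# Rows the saliency map identifies as most influential:
row_influence = np.abs(saliency).sum(axis=1)
top_rows = sorted(np.argsort(row_influence)[-2:])
print("most influential rows:", top_rows)  # rows 10 and 11
```

For this linear model the saliency map simply recovers the weight map, but the same gradient ranking applied to a deep network highlights learned structures such as lane markings and vehicle outlines.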
In addition to obvious features such as lane markings, PilotNet learns subtler cues that would be hard for engineers to anticipate and program, for example, bushes lining the edge of the road and atypical vehicle classes.
This blog post is based on the NVIDIA paper Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car. Please see the original paper for full details.
By Mariusz Bojarski, Larry Jackel, Ben Firner, and Urs Muller