We need well-designed methods to measure progress in both Academia and industry. For example, the recent Deep Learning trend started in Academia as an approach to improve Machine Learning performance on well-known tasks. After years of Academic validation, Deep Learning graduated to a systems problem and is now widely adopted and developed in industry. Even at that stage, there are still huge efforts to make Deep Learning research and development more democratic and accessible.
On the other hand, Machine Learning as used for self-driving cars is a piece in an incredibly complex system with many moving parts. We propose breaking that problem into pieces to make it easier to measure progress.
A simplified version of the Scientific Method can be presented in six steps:
- Define a question
- Gather information
- Form a hypothesis
- Test the hypothesis with reproducible experiments
- Analyze results
- Draw conclusions
Current self-driving car research, when compared against this general scientific method, lacks clarity. First, the question asked in Step 1, "how to autonomously drive a car from A to B," is too broad and unfriendly to Academia: equipping a vehicle with the necessary hardware and enabling algorithmic sensing and control is difficult and cost prohibitive. Cost is also a major factor in information gathering (Step 2).
Most critical is the lack of well-defined standards for Step 4. Without such standards, progress is difficult to measure, confusion spreads, and overly defensive laws follow.
The contribution we make here is a set of improved driving tasks. The tasks do not focus explicitly on image segmentation, odometry estimation, road sign classification, etc., as those problems are already well maintained by the Robotics, Machine Learning, and Computer Vision communities; see, for example, the KITTI and Cityscapes datasets.
Towards this goal we discuss a standardized driver's school for self-driving cars, on both real and simulated roads, that breaks the problem of self-driving down into several smaller tasks. Specific implementations of such a driver's school can follow the approach developed by OpenAI in their Gym and Roboschool environments, where simulated environments and public performance leaderboards are used to benchmark control algorithms, mostly for Reinforcement Learning research.
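To make the Gym-style approach concrete, here is a minimal sketch of what one driving task could look like behind a `reset`/`step` interface. The class name, dynamics, and reward are illustrative assumptions, not part of any existing benchmark:

```python
# Hypothetical sketch of a Gym-style driving task, following the reset/step
# interface popularized by OpenAI Gym. All names and dynamics are illustrative.
class LaneKeepingEnv:
    """Toy 1-D lane-keeping task: the agent steers to reduce lateral offset."""

    def __init__(self, lane_half_width=1.5):
        self.lane_half_width = lane_half_width
        self.offset = 0.0  # lateral distance from lane center, in meters

    def reset(self, start_offset=1.0):
        self.offset = start_offset
        return self.offset  # observation

    def step(self, steering):
        # Simplified dynamics: the steering command directly shifts the offset.
        self.offset += steering
        reward = -abs(self.offset)  # reward staying centered
        done = abs(self.offset) > self.lane_half_width  # lane departure ends episode
        return self.offset, reward, done, {}
```

An agent would interact with it in the usual loop: `obs = env.reset()`, then repeatedly `obs, reward, done, info = env.step(action)` until `done`. A leaderboard would rank submissions by cumulative reward over a fixed set of episodes.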
Real-world environments can be track circuits, such as the annual selfracingcars.com event, or public streets and road segments where the same number of miles is driven on different days and at different hours of the day. Standard software should be required to measure performance and ensure public credibility.
Below, we list smaller driving tasks that can be used to benchmark self-driving car improvement. We believe these tasks are straightforward enough to be tackled in both Academic and business settings.
Lane keeping
Lane keeping involves keeping the vehicle centered within a selected lane. The problem can be broken down into lane perception and path prediction, which can be tackled with physics models of cars and large-scale lane-annotated datasets.
The problem of lane keeping should also involve comfort measures, to avoid wobbling between lanes, and safety measures, such as taking curves at a proper speed.
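As a sketch of how the comfort requirement could enter a controller, consider a simple PD controller on the lateral offset with a steering-rate limit to suppress wobbling. The gains and limit here are made-up illustrative values, not a real vehicle's:

```python
# Illustrative lane-keeping controller (not from the text): a PD controller on
# the lateral offset, plus a steering-rate limit as a crude comfort measure.
def lane_keeping_steer(offset, offset_rate, prev_steer,
                       kp=0.5, kd=0.1, max_rate=0.05):
    """Return a steering command pulling the car toward the lane center.

    offset:      lateral distance from lane center (m), positive = right
    offset_rate: rate of change of the offset (m/s)
    prev_steer:  previous steering command, used to limit steering changes
    """
    desired = -(kp * offset + kd * offset_rate)  # steer against the offset
    # Comfort: cap the change in steering per control step to avoid wobbling.
    delta = max(-max_rate, min(max_rate, desired - prev_steer))
    return prev_steer + delta
```

A benchmark could then score such a controller on both tracking error (safety) and steering-rate statistics (comfort), rather than on centering accuracy alone.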
Other lane maneuvers include lane changing, overtaking, exit taking, etc.
Road event detection and prediction
Detecting and predicting events on the road can be treated as classification problems. Here we need a large-scale, public dataset of driving footage where the relevant events are labeled. Many relevant events exist, but for the sake of brevity we list the most interesting:
- Lane departure
- Collision prediction
- Cut-in and cut-out prediction
- Pedestrian action recognition
Essentially, a driving version of the ImageNet dataset and the competition around it are needed.
Massively Multiplayer and Multimanufacturer Online Simulator (MMMOS)
We shouldn’t expect all self-driving cars on the road to run exactly the same software, but we should require that they operate well when driving alongside humans and alongside different self-driving algorithms.
A desirable solution to this problem would be a separate party that maintains an online environment where both open-ended worlds (such as World of Warcraft or GTA V) and task-specific worlds (e.g., GTA V taxi missions or League of Legends matches) are inhabited by human players and by AI from different manufacturers.
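The essential interoperability requirement can be sketched in a few lines: the shared world only demands that every participant, whatever its manufacturer, expose a common decision interface. Everything below (class names, the `act` method, the toy dynamics) is an assumption for illustration:

```python
# Sketch of the MMMOS idea (assumed interface): independently written driver
# policies coexist in one shared world; the simulator only requires that each
# expose an act(observation) method. All names and dynamics are illustrative.
class CautiousDriver:
    def act(self, gap_ahead):
        return 0.5 if gap_ahead > 20 else -1.0  # gentle throttle or brake

class AggressiveDriver:
    def act(self, gap_ahead):
        return 1.0 if gap_ahead > 5 else -1.0   # tolerates much smaller gaps

def run_shared_world(drivers, gaps, steps=3):
    """Advance every driver in lockstep; each sees its own gap to the car ahead."""
    log = []
    for _ in range(steps):
        actions = [d.act(g) for d, g in zip(drivers, gaps)]
        # Toy dynamics: accelerating closes the gap ahead, braking opens it.
        gaps = [max(0.0, g - a) for g, a in zip(gaps, actions)]
        log.append(actions)
    return log
```

A human player would slot into the same loop as just another object with an `act` method, which is what lets one world mix manufacturers and people.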
Measuring progress within any field is difficult, but it can be made easier through discussion of well-defined tasks. We hope that our proposal, which makes no claim of being exhaustive, provides a good starting point towards measurable self-driving research progress. Furthermore, we believe the discussed tasks can bring value to both Academic and Industrial settings. The value is twofold: progress is shared between industry and Academia, and it is measured and open for discussion with the public.
Thanks to Norm for carefully reviewing and improving the draft.