The First and Only Dataset Containing the Full Autonomous Vehicle Sensor Suite

A New Full-Sensor-Suite, Large-Scale Autonomous Driving Dataset

Christopher Dossman
AI³ | Theory, Practice, Business
2 min read · Apr 1, 2019


This research summary is just one of many that are distributed weekly on the AI scholar newsletter. To start receiving the weekly newsletter, sign up here.

Autonomous driving is projected to save many lives. Unlike human drivers, self-driving cars are not prone to distracted driving, which is a major cause of car crashes.

But real-time object detection and tracking, which are fundamental to state-of-the-art autonomous vehicle technology, are still far from perfect. These tasks increasingly rely on deep learning, which in turn drives the need for standard image-based benchmark datasets for training and evaluation.

The challenge is a lack of large-scale multimodal datasets. These are critical because no single sensor is sufficient on its own, and the different sensor modalities need to be fused.

nuScenes: A New Multimodal Dataset for Self-Driving Cars

New research presents a large-scale autonomous driving dataset that is the first to feature a full sensor suite: 5 radars, 1 lidar, 6 cameras, IMU, and GPS. nuTonomy scenes (nuScenes) has 7x as many annotations and 100x as many images as the KITTI dataset, covering 23 categories including different types of vehicles, pedestrians, mobility devices, and other objects.
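For readers who want to poke at the data, the team also ships a Python devkit (nuscenes-devkit). The snippet below is a minimal sketch of browsing a scene with it; the version string and data path are assumptions that depend on which release you download.

```python
# Minimal sketch: browsing nuScenes with the official nuscenes-devkit.
# Assumptions: `pip install nuscenes-devkit`, and a split extracted to
# /data/sets/nuscenes (adjust the version string and path to your download).
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

# Each scene is a short driving clip; samples are annotated keyframes.
scene = nusc.scene[0]
sample = nusc.get('sample', scene['first_sample_token'])

# A sample bundles synchronized sensor readings: 6 cameras, 1 lidar, 5 radars.
print(sorted(sample['data'].keys()))

# Annotations are 3D boxes with category, size, orientation, and attributes.
ann = nusc.get('sample_annotation', sample['anns'][0])
print(ann['category_name'], ann['size'], ann['rotation'])
```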

The researchers also propose a new approach that consolidates the different aspects of the 3D detection task, including classification, localization, size, orientation, velocity, and attribute estimation, into a single score. Dataset analysis and baselines for lidar-based and image-based detection methods show that while both achieve promising results, lidar-only networks currently provide superior performance.
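For context on how those aspects are consolidated, the paper's metric (the nuScenes detection score, NDS) blends mean average precision with five true-positive error terms. The function below is a rough sketch of that combination based on my reading of the paper, not an official implementation, and the fixed weighting is my understanding of the formula.

```python
# Rough sketch of the consolidated detection score (NDS), as I understand it:
# mean AP is blended with five true-positive error terms covering translation,
# scale, orientation, velocity, and attribute estimation.
def nuscenes_detection_score(mean_ap, tp_errors):
    """mean_ap in [0, 1]; tp_errors = dict of the five mean TP errors
    (mATE, mASE, mAOE, mAVE, mAAE). Each error is clipped to [0, 1]."""
    tp_score = sum(1.0 - min(1.0, err) for err in tp_errors.values())
    return (5.0 * mean_ap + tp_score) / 10.0

# Example with made-up numbers:
print(nuscenes_detection_score(
    0.30, {'mATE': 0.4, 'mASE': 0.3, 'mAOE': 0.5, 'mAVE': 0.8, 'mAAE': 0.3}))
```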

Potential Uses and Effects

nuScenes should accelerate research and development in object detection and autonomous vehicle technology more broadly, and help bring the technology closer to practical deployment. I also like that the researchers encourage further work on nuScenes that uses all of the sensor data and exploits the semantic maps for even better performance, since each sensor modality provides complementary features for training 3D object detectors.

Additionally, the team has announced the first nuScenes detection challenge, which will be launched in April 2019. Challenge winners and results will be announced at the Workshop on Autonomous Driving.

Further information and code have been made available and can be accessed here.

Thanks for reading. Please comment, share and remember to subscribe! Also, follow me on Twitter and LinkedIn. Remember to 👏 if you enjoyed this article. Cheers!
