The #paperoftheweek 13 is “GS3D: An Efficient 3D Object Detection Framework for Autonomous Driving”

In this paper, the authors present a novel approach to detecting objects in three dimensions without multiple cameras or expensive imaging equipment. Traditionally, LiDAR sensors or multiple camera angles are needed to reach accuracy sufficient for autonomous driving; however, these approaches can be slow and push vehicles out of the price range of many consumers. By combining 2D object detection with orientation prediction, the new system, GS3D, can use simple projective geometry to produce a coarse “guidance” 3D cuboid for each detection. This estimate is then refined using features extracted from the visible surfaces of the cuboid to produce the final 3D bounding box. Evaluated on the KITTI benchmark, this method outperformed, in both speed and quality, state-of-the-art single-image 3D detection systems, including some that require additional information such as stereo images or segmentation maps. This advance is quite promising for the future of camera-based autonomous driving solutions.
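The core geometric idea can be illustrated with a minimal sketch. Assuming a pinhole camera model, a 2D detection, a predicted orientation, and class-average 3D dimensions as a size prior, a coarse cuboid follows from the fact that an object of physical height h spanning h_px pixels lies at depth roughly f·h/h_px. The function name and its inputs are illustrative assumptions, not the paper's actual interface:

```python
import numpy as np

def coarse_cuboid(box2d, yaw, dims, f, cx, cy):
    """Hedged sketch of a GS3D-style "guidance" cuboid estimate.

    box2d: (x1, y1, x2, y2) pixel corners from a 2D detector
    yaw:   predicted object orientation in radians
    dims:  (h, w, l) class-average 3D size in metres (prior)
    f, cx, cy: pinhole focal length and principal point (pixels)
    Returns a 3D centre, the 3D dimensions, and the orientation.
    """
    x1, y1, x2, y2 = box2d
    h2d = y2 - y1                      # 2D box height in pixels
    h3d, w3d, l3d = dims
    # Pinhole relation: depth z such that h3d projects to h2d pixels.
    z = f * h3d / h2d
    # Back-project the 2D box centre to a 3D point at that depth.
    u = 0.5 * (x1 + x2)
    v = 0.5 * (y1 + y2)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z]), np.array([h3d, w3d, l3d]), yaw
```

For example, a 200-pixel-tall detection of a 1.5 m object under a 700-pixel focal length lands at a depth of about 5.25 m; GS3D's actual guidance generation additionally exploits learned statistics relating the 2D box to the 3D box, which this sketch omits.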

Abstract:

“We present an efficient 3D object detection framework based on a single RGB image in the scenario of autonomous driving. Our efforts are put on extracting the underlying 3D information in a 2D image and determining the accurate 3D bounding box of the object without point cloud or stereo data. Leveraging the off-the-shelf 2D object detector, we propose an artful approach to efficiently obtain a coarse cuboid for each predicted 2D box. The coarse cuboid has enough accuracy to guide us to determine the 3D box of the object by refinement. In contrast to previous state-of-the-art methods that only use the features extracted from the 2D bounding box for box refinement, we explore the 3D structure information of the object by employing the visual features of visible surfaces. The new features from surfaces are utilized to eliminate the problem of representation ambiguity brought by only using a 2D bounding box. Moreover, we investigate different methods of 3D box refinement and discover that a classification formulation with quality aware loss has much better performance than regression. Evaluated on the KITTI benchmark, our approach outperforms current state-of-the-art methods for single RGB image based 3D object detection.”
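The abstract's "classification formulation with quality aware loss" can be sketched as cross-entropy against soft targets, where each discrete refinement candidate is weighted by a quality score (e.g. its 3D IoU with the ground-truth box). This is an illustrative formulation under that assumption; the paper's exact weighting may differ, and `candidate_ious` is a hypothetical input:

```python
import numpy as np

def quality_aware_loss(logits, candidate_ious):
    """Cross-entropy against quality-weighted soft targets (sketch).

    logits:         network scores for each discrete refinement candidate
    candidate_ious: quality of each candidate, e.g. 3D IoU with ground truth
    """
    logits = np.asarray(logits, dtype=float)
    ious = np.asarray(candidate_ious, dtype=float)
    # Normalise candidate qualities into a soft target distribution,
    # so better candidates receive proportionally larger targets.
    q = ious / ious.sum()
    # Numerically stable log-softmax of the scores.
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return float(-(q * log_p).sum())
```

Compared with hard one-hot labels, this rewards the network for ranking all good candidates highly rather than for picking a single arbitrary "correct" bin, which is one plausible reading of why the classification formulation outperforms direct regression.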

You can read the full article here.

About the author:

Jonathan Kleinfeld, Data and Software Engineering Intern at Brighter AI.