Hundreds of Computer Vision Researchers Gathered in Munich to Talk Self-Driving
In March 2018, we hosted our first event in Munich: a Computer Vision Meetup. Over two hundred computer vision researchers and engineers gathered to hear the latest from Lyft Level 5, Intel Labs, and TUM professor Daniel Cremers.
Our Munich team is growing! To learn more and view openings, visit lyft.com/level5.
Here’s what you missed
Holger Rapp and Wolfgang Hess from our Munich office talked about SLAM (simultaneous localization and mapping) for self-driving cars. SLAM provides highly detailed maps and accurate localization for autonomous cars, even in GPS-denied environments like dense city streets. They dove into the fast loop-closing algorithm from their Cartographer paper and how it was generalized to 3D.
Matt Vitelli covered current state-of-the-art techniques for processing high-resolution camera imagery and LiDAR point clouds using a combination of classical computer vision techniques, deep learning models, and other heuristics. He then walked through a few experiments Level 5 has run, including training an end-to-end network to predict steering angle from camera data, bounding-box detection in high-resolution imagery, LiDAR point cloud segmentation, and fusing the two sensor modalities.
Alexey Dosovitskiy from Intel Labs talked about how applications of deep learning in autonomous driving are complicated by both logistical and algorithmic difficulties. He introduced us to CARLA, a high-fidelity open urban driving simulator that aims to democratize research on autonomous driving.
Lastly, Daniel Cremers, Chair of Computer Vision & Artificial Intelligence at TUM, shared a number of recent developments in 3D computer vision—in particular, reconstruction from moving cameras. He presented SLAM methods that can accurately localize the camera and recover the observed 3D world.
Big thanks to our partners at the München Computer Vision & Medical Image Analysis meetup group.