How (not) to make a self-driving car with just Lidar: a Seoul Robotics intro

Han Bin Lee
Seoul Robotics
Jun 19, 2019 · 5 min read

“90 percent of what a self-driving car must master is perception… Driving itself is simple compared with understanding the world.”

- Sebastian Thrun, founder of Udacity, the godfather of Stanley

Detection, Classification, Tracking, and Prediction with only Lidar: any Lidar.

Introduction

We are Seoul Robotics, a startup in Korea with a focus on 3D perception. We take Sebastian Thrun's statement to heart: we believe that the last pieces of the puzzle for the true autonomous era still lie in perception.

In this story, we will share Seoul Robotics' take on the autonomous space and how (not) to build self-driving cars with just Lidar sensors.

Vision & Mission

Our vision is “Making Robots Intelligent.”
And our current mission is to help robots understand 3D data with AI.

The Need

Seoul Robotics was founded in 2017 by four experts in 3D data processing, after a Silicon Valley self-driving car competition. We realized that there was much work to be done in the world of 3D perception. 2D computer vision had made impressive progress thanks to the recent development of AI, but when it came to understanding 3D data, the AI world hadn't even scratched the surface. In 2017 there were zero standalone lidar object detection software providers, which meant that everyone was reinventing the wheel from scratch (if they were well funded); more likely, companies and schools were using open-source deep learning models for their half-baked lidar perception software.

What we do

So that's what we set out to do: helping cars understand 3D data better and faster. Our bread and butter is deep-learning-based object detection, classification, tracking, and prediction. We also mix in good ol' if-then statements to ensure reliability, as the sketch below illustrates. We have focused on understanding dynamic objects, and now we are moving toward understanding the static environment with only Lidars.

Lidar Vision Software 3000
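To make that if-then layer concrete, here is a minimal Python sketch. Every class name, field, and threshold below is a hypothetical illustration, not our production code; the point is only how simple physical sanity checks can wrap the output of a deep-learning detector:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One 3D box proposed by a (hypothetical) deep-learning detector."""
    label: str         # e.g. "car", "pedestrian"
    confidence: float  # network score in [0, 1]
    z: float           # box-center height relative to the sensor, in meters
    length: float      # box length, in meters

# Rough physical length limits per class (illustrative values only).
SIZE_LIMITS = {
    "car": (3.0, 6.5),
    "pedestrian": (0.2, 1.2),
}

def rule_based_filter(detections: List[Detection]) -> List[Detection]:
    """The good ol' if-then layer: drop boxes the network scored highly
    but that violate simple physical sanity checks."""
    kept = []
    for det in detections:
        lo, hi = SIZE_LIMITS.get(det.label, (0.1, 25.0))
        if det.confidence < 0.3:
            continue  # too uncertain to act on
        if not lo <= det.length <= hi:
            continue  # implausible size for this class
        if not -3.0 <= det.z <= 5.0:
            continue  # far underground or floating in the air
        kept.append(det)
    return kept

# A car-sized box floating 12 m in the air is rejected despite its score.
detections = [
    Detection("car", 0.92, z=0.5, length=4.5),
    Detection("car", 0.85, z=12.0, length=4.4),
]
print(rule_based_filter(detections))  # only the first box survives
```

The deep network provides the recall; the rules provide a cheap, auditable safety net that never hallucinates.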

Differentiation

Seoul Robotics' biggest differentiation is that we are trying to solve the autonomy problem without an HD map: with just Lidar, any Lidar. We don't claim that you don't need a camera or radar. We also don't claim that you don't need an HD map. Redundancy is key to a truly autonomous system, and we want to create reliable and robust lidar perception with its own layers of safety, acting as a seatbelt for the dominant, map-dependent way of using Lidar. There are times when the map and localization functions fail; if a robust, reliable, map-agnostic AI can still understand the features of the environment (big white trucks that sometimes look like the sky, dividers, sidewalks, lanes, etc.) until the machine figures out where it is, that will bring us one step closer to a true autonomous era.

Many startups have abandoned this approach because, with an HD map, you only have to process the lidar data that is not already part of the map, which is simply easier. Trying to understand all road features with just lidar and no map is very challenging, but isn't that often the best part? It could potentially open the door to LV4+ cars no longer restricted to geo-fenced areas, and to much more intelligent and safer LV2–3 vehicles. We have partnered with the best lidar companies, and they have made impressive progress in recent years. Some Lidar sensors now produce photo-like 3D data, enabling us to be more creative with it, for example lane localization with just lidar data, as the sketch below illustrates.

Lane Localization using Lidar (Hesai Lidar)
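To show why photo-like intensity data matters, here is a toy Python sketch of intensity-based lane extraction. It assumes a flat ground plane at a known height and normalized intensities; a real system would use robust ground segmentation and curve fitting rather than a single line fit:

```python
import numpy as np

def extract_lane_points(points: np.ndarray,
                        ground_z: float = -1.8,
                        z_tol: float = 0.15,
                        intensity_thresh: float = 0.7) -> np.ndarray:
    """points is an (N, 4) array of x, y, z, intensity.
    Painted lane markings are retroreflective, so they return far more
    laser energy than bare asphalt: keep only points that sit near the
    ground plane AND are highly reflective."""
    near_ground = np.abs(points[:, 2] - ground_z) < z_tol
    reflective = points[:, 3] > intensity_thresh
    return points[near_ground & reflective]

def lateral_offset(lane_points: np.ndarray) -> float:
    """Fit y = a*x + b to the marking points (x forward, y left);
    the intercept b is the car's lateral offset from that marking."""
    a, b = np.polyfit(lane_points[:, 0], lane_points[:, 1], deg=1)
    return b
```

Repeating the fit for each marking and tracking the offsets over time gives a map-free estimate of where the car sits in its lane, from lidar alone.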

Monetization (why just Lidar)

The 3D perception space has garnered much attention as people realize that it is one of the crucibles for solving LV5. More companies are hosting competitions and releasing more data (Uber, we are still waiting!) to help researchers create better 3D perception AI. Here is the trend at CVPR over the past few years.

CVPR 3D concentration

So people (a.k.a. a few VCs…) wonder whether we can make enough money in such a competitive market with just a small part of perception; we've been asked to consider an “upgrade” to full sensor fusion. But what we learned from the automotive and cell-phone markets is that they segment to an extreme, and usually the best module or part provider rules. Just like Mobileye, which still rules the computer-vision ADAS market with an astonishing 60 percent share even after the market was flooded with deep learning competitors. Our belief is that 3D perception, if cooked right, requires even more in-depth algorithms. So that's simply what we are going to do: provide the most advanced and reliable 3D Lidar perception module and let the market decide the winner. And when we say the market, it is not just the automotive space: smart cities, smart factories, robotics, farming, grocery stores… wherever a 3D sensor goes, we go.

This saying has been brutally abused by other lidar companies, but I will have to say it again: we are ‘laser’ focused on 3D (Lidar) perception.

Closing

Most would agree that autonomous systems have gotten 90 percent of the way there but still have 90 percent to go. People are still trying to figure out the last pieces of the puzzle, and no one seems to have the right answer yet. Some companies are trying to solve it fully on their own, some are making it open source, some are pursuing massive collaborations… Seoul Robotics will contribute to LV5 by helping cars fully understand Lidar sensor data with AI (without map dependencies!).

Much to Elon's dismay, we are afraid that Lidar sales volumes and price trends indicate that Lidar is here to stay for good. Anyhow, if you would like to do things you didn't even know were possible with 3D sensors, please join our venture in Gangnam, Seoul. Yes, that Gangnam Style Gangnam.

Gangnam Style Autonomous Perception Team

Cheers,

Han Bin

Captain of Seoul Robotics
