Profit Monk
Jul 25, 2017 · 4 min read

Self-driving cars have caught the public's fancy in the last few years. Many industry-led pilot projects aim to build a self-driving, robotic yet intelligent car that takes over driving responsibility entirely. There are several problems with the kind of moonshot projects being driven in this space today.

The first is the approach. Projects branded as moonshots or pioneering efforts almost always fail miserably. They capture our attention for a short time, and at the first sign of failure all the money and focus move on to the next moonshot. This often happens because there is no goal bigger and more important than the project itself. In its absence, the first setback, whether technological, economic or regulatory, leads to a quick and enervating death of the idea. The motivation for a project therefore needs to be rooted in foundations strong enough to re-energize the whole effort at every setback, again and again.

That, in summary, is the second problem with today's self-driving-car moonshots: misplaced motivation. The research is directed toward building a vehicle that takes over completely from the driver, which in turn is supposed to improve either productivity or quality of life for human drivers, depending on whom you believe. While that is technically possible, even if the path is filled with economic difficulties, a far more important goal for the automotive industry and governments should be reducing the number of fatalities in road accidents. That goal is achievable in the near future if a majority of the vehicles on the road are equipped with robust safety technologies. We don't need self-driving cars to save a million lives every year. We need affordable sensor fusion systems that provide safe transportation and stop fatal mistakes on the road.

Sensor fusion is a necessary technique for accurate environment perception, and making sense of the environment is central to automotive ADAS (advanced driver-assistance systems) applications. Sensors like radar and cameras can detect the risk of accidents more robustly across different weather, light and other physical conditions. A sensor can collaborate (i.e., fuse) with sensors of the same type or of different types. Multiple radar, lidar or camera sensors can work together to perceive a wider field of view. On the other hand, a radar/camera or lidar/camera combination can perceive collision risks that are tough to identify using only one of these modalities. See the picture that captures the essence of this argument.

How sensor fusion works to make the commute safer
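To make the fusion idea concrete, here is a minimal sketch of one common approach: combining a radar range estimate with a camera range estimate by weighting each inversely to its noise variance, so the more reliable sensor dominates. The function name and all the numbers are illustrative assumptions, not from the article.

```python
def fuse(radar_range, radar_var, cam_range, cam_var):
    """Inverse-variance weighted fusion of two range estimates (meters).

    The fused estimate always has lower variance than either input,
    which is the statistical payoff of combining modalities.
    """
    w_radar = 1.0 / radar_var
    w_cam = 1.0 / cam_var
    fused = (w_radar * radar_range + w_cam * cam_range) / (w_radar + w_cam)
    fused_var = 1.0 / (w_radar + w_cam)
    return fused, fused_var

# Radar measures range accurately (low variance); the camera less so,
# so the fused value lands close to the radar reading.
r, v = fuse(radar_range=50.2, radar_var=0.1, cam_range=48.5, cam_var=1.0)
```

In a real system the same weighting idea is applied per tracked object and per state dimension, typically inside a Kalman filter rather than as a standalone formula.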

This brings us to the third leg of our argument, which is purely economic. The 2016 sales revenue of the two biggest automakers, VW and Toyota, was in the range of 240–250 billion US dollars. Each company sold about 10 million cars that calendar year, translating to an average sale price of about 25,000 US dollars per car. With this little economic nugget stored at the back of your mind, consider the following: road accidents kill more than a million people worldwide every year. Accidents and fatalities can only be reduced if a majority of the cars sold in the near future are equipped with these technologies. Large-volume installation of safety fusion ADAS technology therefore has to be financially viable for an average-priced car, and it needs to be affordable.
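The arithmetic above can be checked on the back of an envelope. The revenue and volume figures are the article's rounded 2016 numbers; the $500 system cost below is a purely hypothetical placeholder to show the kind of price-share calculation that matters, not a real quote.

```python
# Rough sanity check of the average-car-price figure.
revenue_usd = 245e9   # midpoint of the $240-250B annual revenue range
cars_sold = 10e6      # ~10 million vehicles per automaker per year
avg_price = revenue_usd / cars_sold  # ~ $24,500 per car

# Hypothetical: what share of an average car's price would a
# $500 sensor-fusion safety system represent?
fusion_system_cost = 500.0              # assumed, for illustration only
share = fusion_system_cost / avg_price  # fraction of the sticker price
```

The point of the exercise: whatever the real system cost turns out to be, it must stay a small fraction of a ~$25K sticker price to reach the volumes that actually move the fatality statistics.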

The key cost contributors for these systems are the digital ICs and sensors. Demand for such chips is growing quickly, and many semiconductor vendors, in their quest to enter this market fast, are trying to retrofit chips designed for other industries (mobile, PC, video gaming) into the automotive space. That creates trade-offs the automotive industry cannot and should not accept, because the problem it needs to solve is saving lives by avoiding or mitigating most of these accidents with an affordable safety system. The chips that power safety sensor fusion algorithms must therefore be custom built for the required computations. That is the only way to do this math reliably without breaking the bank on system cost.

The typical frame rate for ADAS safety sensors is 30 to 60 frames per second, which translates to completing one full cycle of computation in roughly 16–33 ms for every sensor. Algorithms such as the fast Fourier transform (FFT), video and image processing, deep neural network inference, matrix inversions and multiplications, Kalman and particle filters, sorting and searching through a jumble of data, and object clustering hold the keys to the multi-keyed lock of sensor fusion. In a series of bi-weekly blogs, I intend to talk about what those chips should look like, and to discuss and debate some of the mystifying algorithms that drive sensor fusion and the clever tricks that engineers and digital designers need to use to make these computations faster and cheaper. Remember, the success of that effort will determine whether the commute becomes safer just for an elite few or for everybody who drives.
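One of the workhorse algorithms named above can be sketched in its simplest form: a one-dimensional Kalman filter smoothing noisy range measurements to a single object. This is a minimal illustration under assumed noise parameters, not a production tracker; all the numbers are invented for the example.

```python
def kalman_1d(measurements, q=0.01, r=0.5):
    """Track a scalar state from noisy readings.

    q = process noise variance (how much the true value can drift
        between frames); r = measurement noise variance.
    """
    x, p = measurements[0], 1.0   # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain: trust in the new reading
        x += k * (z - x)          # update: pull estimate toward reading
        p *= (1.0 - k)            # update: uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy readings scattered around a true range of 40 m settle
# toward 40 as the filter averages out the measurement noise.
est = kalman_1d([41.0, 39.2, 40.5, 39.8, 40.1, 40.3, 39.9])
```

At 30–60 frames per second, one such update per tracked object per sensor has to fit inside the 16–33 ms frame budget alongside the FFTs, image processing and clustering, which is exactly why the silicon running it matters.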
