The Path to True Full Self Driving Begins With Radars

Venkatesh Rao · Published in Predict · 7 min read · Jul 26, 2020

Autonomous vehicles have to continuously perceive their fast-changing surroundings in order to act swiftly. Perception is thus by far the most crucial step in self-driving, as it sets the course for every action that follows. So it's no surprise that the industry spends the bulk of its time improving perception systems.

Sensors: the gateway to perception

The first stage of every perception system is a sensor: typically a single camera, or a handful of cameras working together to create a surround view. We have spent decades trying to teach computers to perceive the way we do. And while we still haven't solved all the problems, these systems are certainly receiving attention from some of the best minds in the industry.

The camera is passive

The camera is a passive sensor. Light reflected off the surroundings reaches the camera's lens, which focuses it onto the image sensor, where it is converted into an image. This implies that if there is no light, or not enough reflected light, the camera is blind to the object. There's an inherent assumption here (nothing wrong with it) that an external light source will always be available: the sun, headlights, street lights and so on.

Active sensors to the rescue

An active sensor has its own illumination source. It transmits pulses or waves of electromagnetic energy, invisible to the eye, that reflect off objects. The strength of the reflected signal, the time elapsed between transmission and reception, the angle of reception and so on help identify the position, speed and other characteristics of objects in the field of view.
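The arithmetic behind this is simple. A minimal sketch, with illustrative numbers (the 77 GHz carrier is typical for automotive radar, but the functions and values here are for illustration only):

```python
# Time-of-flight and Doppler arithmetic for an active sensor (illustrative).
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # typical automotive radar carrier frequency, Hz

def range_from_round_trip(t_seconds):
    """Range to target: the signal travels out and back, so divide by 2."""
    return C * t_seconds / 2

def radial_speed_from_doppler(doppler_shift_hz):
    """Relative radial speed from the Doppler shift of the reflected wave."""
    wavelength = C / F_CARRIER                # ~3.9 mm at 77 GHz
    return doppler_shift_hz * wavelength / 2  # factor 2: two-way propagation

# A reflection arriving 1 microsecond after transmission is ~150 m away;
# a 10 kHz Doppler shift corresponds to roughly 19.5 m/s of closing speed.
print(range_from_round_trip(1e-6))        # ~150.0
print(radial_speed_from_doppler(10_000))  # ~19.5
```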

There are two types of active sensors used in today's self-driving prototype vehicles: lidars and radars. Lidars have attracted enormous media attention, VC funding and talent in recent years. The bar to entry is low, demand is high, there are plenty of problems left to solve, and the race in the lidar market is still on. In this glamorous rush, what we haven't talked about enough are radars.

Radars have been in use in the automotive industry for decades. Their typical use case involves various kinds of collision warning and collision avoidance systems. Radars have been shipping in volume for quite some time and are available from all the major automotive suppliers. For all practical purposes, they are a commodity.

The untapped promise of radars

The radars of today are not very useful when it comes to solving full self-driving. But this is not a fault or a bug; rather, it's by design. We designed radars to accurately detect large metal objects, i.e. other vehicles on the road. Radars are excellent at detecting and warning you about the vehicle in your blind spot, the one right in front of you and the ones you are about to hit while backing up. Although this is all very useful, self-driving demands a more nuanced picture of the world around the vehicle.

And it is my strong belief that radars can help build better perception systems, starting right now.

And here’s why.

1. On par with lidars

Radars are mostly on par with lidars, if not better, when it comes to using the sensor data to build perception systems. Lidars may provide more accurate range estimates, which in turn might improve object classification, but it's certainly not an order-of-magnitude difference.

2. Cost Effective

It's no secret that the automotive industry is cost sensitive. And as it stands today, radars are an order of magnitude cheaper than lidars. That means you can afford more radars per vehicle, which leads to better sensing and thus better perception.

3. Adverse weather

Radar is the only sensor that continues to work reliably in snow, dust, fog and rain.

4. In production

Radars have been shipping in cars for decades. We can tap their potential today and not have to wait for some arbitrary industry cost and technology curves to cross thresholds.

A new radar for full self driving

Radars clearly have a promising future. But to make it a reality, they will have to evolve, and evolve significantly. The best path forward is for the next radars to be designed in close collaboration between perception developers and radar experts.

So, what requirements does the next-generation radar have to satisfy in order to make a meaningful contribution to solving the full self-driving problem?

1. Higher Resolution

To detect large metallic objects, the coarse-grained resolution of today's radars is more than sufficient. But to correctly detect and classify a wide range of objects (cars, bikes, pedestrians, railings and so on), radars have to amp up their resolution. This can be achieved by increasing the number of physical antennas and/or by manipulating the phase of the transmitted signal.
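To see why antenna count matters, here is a back-of-the-envelope sketch using the standard rule of thumb that a uniform linear array's angular resolution is roughly the wavelength divided by the array aperture. The function and numbers are illustrative, not any vendor's spec:

```python
import math

def angular_resolution_deg(num_antennas, spacing_in_wavelengths=0.5):
    """Rough angular resolution (degrees) of a uniform linear array:
    theta ~ wavelength / aperture, with aperture = N * spacing."""
    theta_rad = 1.0 / (num_antennas * spacing_in_wavelengths)
    return math.degrees(theta_rad)

# More antennas -> finer angular resolution.
print(angular_resolution_deg(4))    # ~28.6 deg: fine for large vehicles
print(angular_resolution_deg(64))   # ~1.8 deg: starts separating pedestrians
```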

2. View in Elevation

Radars today mainly see in the azimuth, or horizontal, plane. They aren't configured to view in the vertical plane. Elevation is critical for distinguishing between an obstacle on the road and an overhead bridge. It can be added with antennas that transmit and receive in the vertical plane.
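The bridge-versus-obstacle distinction comes down to simple trigonometry once an elevation angle is available. A hypothetical sketch (the sensor height and angles are made-up example values):

```python
import math

def target_height_m(range_m, elevation_deg, sensor_height_m=0.5):
    """Height of a reflection above the road, from range and elevation angle.
    Assumes a flat road and a radar mounted sensor_height_m above it."""
    return sensor_height_m + range_m * math.sin(math.radians(elevation_deg))

# Same range, very different elevation angles:
bridge = target_height_m(80.0, 3.5)   # ~5.4 m above the road: drive under it
car    = target_height_m(80.0, 0.0)   # ~0.5 m above the road: brake
```

Without the elevation measurement, both reflections look identical, which is exactly why today's azimuth-only radars struggle with overhead structures.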

3. Reduced Radar Noise

Radar data is inherently noisy. This makes it hard to decide whether the vehicle should actually brake for an obstacle or simply drive past it. Besides improving the existing radar signal processing algorithms, we can add context awareness to improve detections. For instance, if the GPS coordinates suggest the presence of a large road sign, the vehicle knows it can drive by safely. Tesla appears to be doing something like this already; they have said as much.
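One hypothetical shape such context awareness could take: look up a stationary detection's position in a map of known static infrastructure before treating it as an obstacle. Everything below (the lookup table, the detection fields, the function name) is an illustrative sketch, not a description of any real system:

```python
# Hypothetical map of known static roadside infrastructure,
# keyed by coordinates rounded to ~10 m precision.
KNOWN_STATIC_OBJECTS = {
    (37.7750, -122.4194): "overhead gantry sign",
}

def should_brake(detection):
    """Suppress braking for stationary detections that match mapped objects."""
    key = (round(detection["lat"], 4), round(detection["lon"], 4))
    if key in KNOWN_STATIC_OBJECTS and not detection["moving"]:
        return False  # a mapped static reflector: safe to drive by
    return True       # unknown or moving: treat as a potential obstacle
```

The real value is in the negative case: an unmapped stationary return in the lane stays a braking candidate, while the mapped gantry sign no longer triggers phantom braking.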

4. Tap into Raw Radar

Traditionally, radars output a list of objects over a CAN bus and/or a list of raw detections (reflections) over Ethernet. Think of these detections as the output of the (n-1)th stage of the radar processing pipeline, while the object list is the output of the last, nth, stage. What could a perception developer do with access to the data at the (n-2)th stage, or any earlier stage? Opening up the traditional radar signal processing flow to new ideas will no doubt lead to a big improvement in overall perception.
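To make the stages concrete, here is a deliberately simplified sketch of a radar processing pipeline (real radars use CFAR detection and tracking; the thresholding and clustering below are trivial stand-ins for illustration):

```python
import numpy as np

def range_doppler_map(adc_samples):
    """Earlier stage: 2D FFT over fast time (range) and slow time (Doppler)."""
    return np.abs(np.fft.fft2(adc_samples))

def detect_peaks(rd_map, threshold):
    """(n-1)th stage: thresholding stands in for CFAR; yields raw detections."""
    return list(zip(*np.nonzero(rd_map > threshold)))

def cluster_to_objects(detections):
    """Last stage: group detections into an object list (trivial stand-in)."""
    return [{"cells": detections}] if detections else []

# Traditionally only the last two outputs leave the sensor; exposing the
# range-Doppler map (or even the raw ADC samples) is what this point argues for.
rd = range_doppler_map(np.ones((8, 8)))   # toy input with one strong component
objects = cluster_to_objects(detect_peaks(rd, threshold=10.0))
```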

5. Dynamic Programmability

This concept extends to all sensors, not just radars. When we drive, our gaze wanders from the distant horizon to what's immediately in front, behind, left or right, as the driving demands. We squint our eyes, tilt and turn our heads and change our focus to get the best view of the most important things. So why should radars (and other sensors) always maintain a constant field of view and focus? The ability to program sensors dynamically would be a huge boost to the perception system.
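In code, such reconfiguration might look like mode switching driven by the driving context. The mode names, field names and thresholds below are hypothetical, purely to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RadarMode:
    """Hypothetical runtime-configurable radar parameters."""
    max_range_m: float
    field_of_view_deg: float

HIGHWAY = RadarMode(max_range_m=250.0, field_of_view_deg=20.0)   # long, narrow
PARKING = RadarMode(max_range_m=30.0, field_of_view_deg=140.0)   # short, wide

def pick_mode(speed_mps):
    """Like a driver shifting gaze: reconfigure the sensor for the situation."""
    return HIGHWAY if speed_mps > 20.0 else PARKING
```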

6. ML, DNNs

You guessed it. Traditional radars were never built to feed their outputs into a machine learning algorithm or a DNN. What kind of radar data stream is best suited to be processed by a DNN? Are there untapped opportunities to improve perception metrics, reduce noise and increase accuracy? The folks at Waymo surely know a thing or two about this.
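One natural candidate for such a data stream is the range-Doppler map itself, which is image-like and therefore a plausible input for convolutional networks. As a sketch of the building block a DNN would apply many times, here is a single 2D convolution over a stand-in map (random weights and data, purely illustrative):

```python
import numpy as np

def conv2d_valid(rd_map, kernel):
    """Naive 'valid' 2D convolution: one layer of what a CNN stacks many of."""
    kh, kw = kernel.shape
    h, w = rd_map.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(rd_map[i:i + kh, j:j + kw] * kernel)
    return out

rd_map = np.random.rand(16, 16)   # stand-in for a real range-Doppler map
features = conv2d_valid(rd_map, np.random.rand(3, 3))
print(features.shape)             # (14, 14)
```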

So far, we have only talked about perception. There are other use cases where radars prove effective as well.

Mapping & Localization

Radars are also being used to create maps, and not just of things on the road. For instance, WaveSense is a startup using radar to map the environment below the road surface. Fused with GPS coordinates, such a map can be used to accurately estimate the vehicle's position.

Automotive supplier Bosch is adding a 'radar' layer to traditional maps to create distinct road signatures. By comparing the current road signature to one previously captured in the map, vehicles can localize accurately. Such maps will be updated by vehicles in production.
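The core of signature matching is correlation: slide the currently observed signature along the mapped one and pick the offset where they agree best. A toy sketch (the 1D signatures and dot-product scoring are illustrative simplifications of what a production system would do):

```python
import numpy as np

def best_offset(map_signature, current_signature):
    """Offset into the mapped signature that best matches the current one."""
    n = len(current_signature)
    scores = [
        float(np.dot(map_signature[i:i + n], current_signature))
        for i in range(len(map_signature) - n + 1)
    ]
    return int(np.argmax(scores))

mapped  = np.array([0.0, 0.1, 0.9, 0.2, 0.8, 0.1, 0.0])  # from a prior drive
current = np.array([0.9, 0.2, 0.8])          # matches the map at index 2
print(best_offset(mapped, current))          # 2
```

The winning offset, combined with where that map segment lies in the world, pins down the vehicle's position along the road.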

In cabin sensing

Radars can also be used to sense the occupants inside the vehicle, opening the floodgates to a number of safety and comfort features.

In Summary

Everyone agrees that cameras are a must for self-driving vehicles. It's time to say the same, openly, about radars, whatever your opinion of lidars.

Do you share a different opinion? Please share your comments and thoughts in the responses below. Happy Self-Driving!
