Aeva IPO Ushers in the Era of 4D LiDAR — Will 5, 6, or 7D LiDAR Be Far Behind?

Peter Stern
Voyant Photonics
Mar 23, 2021

Two more LiDAR companies went public the other week through SPAC mergers. Ouster has been producing and selling LiDAR systems for years, while Aeva has yet to sell anything or even release a public spec sheet describing its product.

In its first few days of trading, Aeva has been outperforming Ouster by an impressive margin.

Ouster produces time-of-flight (TOF) systems, which measure distance by timing pulses, while Aeva’s devices use frequency-modulated continuous-wave (FMCW) techniques. There is nothing new about FMCW as a concept or technique; it was used very early in the history of radar. Getting FMCW to work in LiDAR poses some challenges. Getting FMCW to work in small boxes on the roof of a car is outright difficult.
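To make the contrast concrete, the TOF principle fits in a few lines; this is a toy sketch in Python, not anyone’s production code:

```python
# Minimal sketch of the time-of-flight principle: range is the
# round-trip time of a pulse, times the speed of light, halved.
C = 3.0e8  # speed of light, m/s

def tof_range(round_trip_seconds):
    """Distance to a target from a pulse's round-trip time."""
    return C * round_trip_seconds / 2.0  # meters

print(tof_range(1.0e-6))  # a 1 microsecond round trip -> 150.0 m
```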

FMCW LiDAR, if you can get it to work, offers advantages over TOF LiDAR. FMCW can detect obstacles at longer ranges with less peak optical power, which means better range performance while remaining eye-safe. More relevant to a key value proposition in Aeva’s investor pitch, FMCW lets you measure Doppler velocity at every point, instantaneously.
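Here is a minimal sketch of how a triangular-chirp FMCW system recovers both range and radial velocity from the up-chirp and down-chirp beat frequencies. The wavelength, bandwidth, and chirp duration below are illustrative assumptions, not Aeva’s (unpublished) specs:

```python
# Minimal sketch of triangular-chirp FMCW range/velocity recovery.
# All parameters are illustrative assumptions, not Aeva's specs.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # a typical telecom-band LiDAR wavelength, m
B = 1.0e9             # chirp bandwidth, Hz (assumed)
T = 10e-6             # chirp duration, s (assumed)

def range_and_velocity(f_beat_up, f_beat_down):
    """Recover range and radial (Doppler) velocity from the beat
    frequencies measured on the up- and down-chirp halves."""
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-only part
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler part
    rng = C * T * f_range / (2.0 * B)            # meters
    vel = WAVELENGTH * f_doppler / 2.0           # m/s, + = approaching
    return rng, vel

# A target at 150 m closing at 20 m/s yields these beat frequencies:
f_r = 2 * 150.0 * B / (C * T)  # ~100 MHz from range alone
f_d = 2 * 20.0 / WAVELENGTH    # ~25.8 MHz Doppler shift
print(range_and_velocity(f_r - f_d, f_r + f_d))  # -> (150.0, 20.0)
```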

In addition to creating a range image, where each pixel in a scene is color-coded to show distance, FMCW can create a velocity image, where every pixel is color-coded to show velocity. Doppler velocity measurement adds an extra “D” to the x, y, z of 3D position, yielding the moniker “4D LiDAR.”

Opponents claim 4D LiDAR using FMCW is neither new nor advantageous. 3D TOF LiDAR data can be compared over time to provide not just Doppler velocity, which is velocity along the sensor’s line of sight, but velocity in every direction.
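A toy version of that frame-differencing approach shows what it involves; the naive nearest-neighbor matching here stands in for the ICP or scene-flow methods a real pipeline would use:

```python
import numpy as np

def velocity_from_frames(points_t0, points_t1, dt):
    """Crude scene-flow estimate from two TOF point clouds: match
    each point in frame t1 to its nearest neighbor in frame t0 and
    difference the positions. This illustrates the extra processing
    step (and its error sources) that FMCW avoids."""
    # naive O(N*M) nearest-neighbor matching
    diffs = points_t1[:, None, :] - points_t0[None, :, :]
    nearest = (diffs ** 2).sum(axis=2).argmin(axis=1)
    return (points_t1 - points_t0[nearest]) / dt  # m/s per point

# Two noisy frames of a point cluster moving +1 m/s in x, 0.1 s apart:
rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))
noise = rng.normal(scale=0.02, size=(50, 3))
v = velocity_from_frames(cloud, cloud + [0.1, 0, 0] + noise, 0.1)
print(v.mean(axis=0))  # roughly [1, 0, 0], with matching/noise error
```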

Aeva’s rebuttal is simple: FMCW provides Doppler velocity directly, instantaneously, with no additional processing required, no delay, and no compounded errors due to sensor motion and other sources. While Aeva claims to be the first to put FMCW into a product available for ADAS applications, I don’t know that Aeva has ever claimed FMCW LiDAR is new.

If the relative performance of Aeva’s and Ouster’s IPOs, or the number of equity-research calls I have been getting, are any indication, 4D FMCW LiDAR is already favored as a technology for the future over the time-of-flight LiDAR in production today.

Why should we stop at 4D LiDAR?

Ours.tech and Blackmore, two FMCW companies that have been acquired by Aurora, have promoted 5D LiDAR. The extra D is calibrated reflectance. You can think of reflectance as “brightness.” Most LiDAR systems of any type can provide this data. Reflectance might indicate the type of surface in a scene. Imagine a “material surface image,” where every pixel is color-coded to show asphalt, rubber, chrome, auto paint, cotton clothing on a person, or cement. Knowing the surface at a point in a scene, instantaneously, without any additional data processing, would go a long way in helping an ADAS perception stack identify objects faster from far fewer 3D pixels.
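As a sketch of what a material surface image might look like in code, here is a per-pixel band lookup on calibrated reflectance. The bands are invented for illustration, and as I explain below, real materials do not separate this cleanly:

```python
import numpy as np

# Hypothetical calibrated-reflectance bands for a few materials;
# the values are made up for illustration, not measured data.
MATERIAL_BANDS = [
    (0.00, 0.05, "asphalt"),
    (0.05, 0.15, "rubber"),
    (0.15, 0.35, "cotton clothing"),
    (0.35, 0.60, "cement"),
    (0.60, 1.00, "auto paint / chrome"),
]

def material_image(reflectance):
    """Map a per-pixel calibrated-reflectance image to material
    labels with a simple band lookup."""
    labels = np.full(reflectance.shape, "unknown", dtype=object)
    for lo, hi, name in MATERIAL_BANDS:
        labels[(reflectance >= lo) & (reflectance < hi)] = name
    return labels

frame = np.array([[0.02, 0.4], [0.2, 0.7]])
print(material_image(frame))
```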

Time-of-flight LiDAR systems can typically provide more points per second than an FMCW-based 4D or 5D LiDAR system. But when you learn more about a scene from fewer points, points per second becomes less critical. Companies in the FMCW camp also have tricks and tradeoffs to get more points per second in small systems at very little additional cost.

Unfortunately, reflectance alone is not a reliable indicator of material or surface type. Object orientation and surface variation can produce big differences in measured reflectance for essentially the same material. A mud-splattered car parked at an angle and a clean car perpendicular to your sensor may not be recognizable as the same object in a reflectance image.
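A simple Lambertian model, itself an idealization, is enough to show how large the orientation effect can be:

```python
import math

def measured_reflectance(albedo, incidence_deg):
    """Lambertian model: the return scales with cos(theta), so the
    same material at a steep angle looks much darker."""
    return albedo * math.cos(math.radians(incidence_deg))

# The same car paint (albedo 0.5), head-on vs. parked at 70 degrees:
print(measured_reflectance(0.5, 0))   # 0.50
print(measured_reflectance(0.5, 70))  # ~0.17, a different band entirely
```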

What if there is more information in reflected laser light useful in scene analysis, already available in an FMCW LiDAR system? What if we add a couple more ‘D’s?

Voyant’s next LiDAR chip does just that. We are working on a 7D LiDAR, where the other two D’s measure additional qualities of reflected laser light. In effect, we can generate two additional color-coded images of a scene, beyond x, y, z position, Doppler velocity, and calibrated reflectance. Comparing these two additional images will provide a much better indication of surface material than can be determined from reflectance alone.

Scene analysis and integrated perception stacks that use 4D data sets are immature, because 4D LiDAR systems are quite hard to come by. You can’t buy an Aeva or Aurora system yet, as far as I know, and other FMCW LiDAR systems are large and expensive.

Has anyone even thought about 7D LiDAR for machine perception yet? Or scene analysis algorithms that use relatively sparse position data coupled with accurate, instantaneous velocity and material measurements?

While Voyant’s devices provide a 7D data set, we don’t expect that to be directly useful for most applications. Instead, our fielded systems will collapse the last few ‘D’s back into a synthesized fifth ‘D’, a high-quality surface material indicator, perhaps coupled with a confidence score.
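Since I haven’t said what the two extra channels measure, treat the following as a purely hypothetical sketch of that collapsing step: two per-pixel channels, chan_a and chan_b, reduced to a material label plus a confidence score using an invented reference table:

```python
import numpy as np

# chan_a and chan_b stand in for two unspecified per-pixel qualities
# of the return; the reference ratios are invented for illustration.
MATERIALS = {"asphalt": 0.3, "denim": 0.8, "auto paint": 1.5}

def synthesized_material(chan_a, chan_b):
    """Collapse two extra per-pixel channels into a material label
    plus a confidence score, per pixel."""
    ratio = chan_a / np.maximum(chan_b, 1e-9)
    names = list(MATERIALS)
    refs = np.array([MATERIALS[n] for n in names])
    dist = np.abs(ratio[..., None] - refs)        # distance to each ref
    labels = np.take(names, dist.argmin(axis=-1))
    confidence = 1.0 / (1.0 + dist.min(axis=-1))  # closer -> higher
    return labels, confidence

a = np.array([[0.31, 0.9], [1.4, 0.2]])
b = np.ones((2, 2))
print(synthesized_material(a, b))
```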

Perception stacks will soon use extra ‘D’s to push past the limits of existing LiDAR algorithms. Ouster published some great posts, which I can’t find right now, on how many points on a target you need to recognize various objects using LiDAR data. Of course, Ouster’s analysis highlighted one of the strengths of their time-of-flight systems: points per second.

Back in the ’90s, we developed LiDAR algorithms that could accurately detect telephone lines a few meters above trees at distances of over a kilometer from only a few noisy pixels by exploiting the fact that there are very few horizontal lines in a natural scene.
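In that spirit, here is a crude score for whether a handful of noisy 3D points form a horizontal line; a nod to the idea, not the actual ’90s algorithm:

```python
import numpy as np

def horizontal_line_score(points, z_tol=0.2):
    """Score whether a few 3D points lie on a horizontal line:
    nearly constant height, and strongly collinear in x-y."""
    z_flat = points[:, 2].std() < z_tol  # near-constant height
    xy = points[:, :2] - points[:, :2].mean(axis=0)
    # ratio of principal to residual variance measures collinearity
    s = np.linalg.svd(xy, compute_uv=False)
    collinear = s[0] / max(s[1], 1e-9) > 10.0
    return bool(z_flat and collinear)

# Five noisy returns along a level wire, 8 m up:
wire = np.c_[np.linspace(0, 30, 5), np.linspace(0, 2, 5),
             8.0 + np.random.default_rng(1).normal(0, 0.05, 5)]
print(horizontal_line_score(wire))  # True
```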

Adding velocity and material, my hunch is that accurate object detection and scene analysis can be performed with far fewer points, and much faster, than with 3D data alone. Do you really need dozens of points to outline a person? Perhaps you can identify a person with one or two pixels if you can also measure that those pixels are moving at 2.5 mph and are wearing denim.
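Taken to an extreme, a classifier along those lines could be little more than a rule, with thresholds invented purely for illustration:

```python
def looks_like_pedestrian(speed_mph, material, n_points):
    """Toy rule: even one or two points can flag a pedestrian if
    they move at walking speed and return a clothing signature."""
    walking = 0.5 <= speed_mph <= 5.0
    clothing = material in {"denim", "cotton clothing"}
    return n_points >= 1 and walking and clothing

print(looks_like_pedestrian(2.5, "denim", 2))     # True
print(looks_like_pedestrian(2.5, "asphalt", 40))  # False
```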

At Voyant we are working hard to bring these sensors to market soon. We are hiring for a range of roles. Beyond our current job postings, if your work involves machine perception or real-time scene analysis with LiDAR, specifically using data beyond x, y, z position, and you want to learn more about our 5D or 7D LiDAR solutions, please get in touch!
