LiDAR: The Eyes of an Autonomous Vehicle

Vedaant Varshney
Published in The Startup · Sep 12, 2019

Imagine a future where you can open an app on your phone and request a car to come to your exact location. Now imagine that when it pulls up, it has no driver. This future might not be as far off as you think, and a large part of that is because of something called LiDAR.

To gain an understanding of LiDAR, think of a bouncy-ball, and now picture throwing it at a wall with your eyes closed. Based on the direction you threw it and when the ball inevitably smacked you back in the face, you can roughly tell where on the wall the bouncy-ball hit. This is kind of how LiDAR works.

LiDAR stands for Light Detection and Ranging, and the technology maps its surroundings as a 3D image.

The image is created by measuring the time it takes for a laser pulse (your throw) to bounce off an object (the wall) and return to a receiver (your face), then converting that time of flight into a distance.
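In code, the core ranging calculation is simple: distance is the speed of light times the round-trip time, halved because the pulse travels out and back. This is a minimal sketch of the idea, not tied to any particular sensor:

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_pulse(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, given a pulse's round-trip time."""
    # The pulse travels to the object and back, so halve the total path.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~667 nanoseconds hit something roughly 100 m away.
print(distance_from_pulse(667e-9))
```

Note how short these intervals are: a pulse reflected off an object 100 meters away returns in under a microsecond, which is why LiDAR receivers need extremely precise timing hardware.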

Figure 1: Example of how LiDAR works and the distance calculation required
Figure 2: 6 Degrees of Freedom

Now imagine this process, but with the lasers being pulsed 1,000,000 times a second, and the device able to rotate and direct beams along all six degrees of freedom! (Roll, Pitch, Yaw, Up/Down, Forward/Backward, Left/Right)

What does a LiDAR map look like?

LiDAR systems map out their surroundings, creating a point in a 3D world wherever a laser beam hits. Keeping with the bouncy-ball analogy, imagine throwing millions of bouncy balls around the room and building a mental image of where all the walls are based on when and where they hit you. As the system continues to scan its surroundings, something called a point cloud is created (seen in Figure 3).
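Each raw LiDAR return is essentially a range plus the direction the beam was fired in; turning that into a point in the cloud is a spherical-to-Cartesian conversion. A minimal sketch of that step (the sample measurements are made-up values, and real sensors use their own packet formats):

```python
import math

def to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert one range/angle measurement into an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# Each laser return becomes one point; millions of these form the cloud.
returns = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 45.0, 30.0)]
cloud = [to_cartesian(*m) for m in returns]
```

A return at 10 m range, 0° azimuth, 0° elevation becomes the point (10, 0, 0) straight ahead of the sensor; swinging the azimuth to 90° places the same range off to the side.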

Figure 3: Point Cloud map created by LiDAR technology

Well now that we’ve established that LiDAR is a great way to virtually map out our surroundings, what exactly can we use this awesome technology for?

Today, LiDAR is most commonly associated with autonomous vehicles (AVs), but it is also prominent in more scientific applications such as detecting biomass changes in forests and mapping seafloor elevations. In fact, LiDAR is also used in police speed guns, something that we hopefully won't need once autonomous vehicles become widespread!

For our purposes, let’s focus on the use of LiDAR with AVs.

When it comes to a vehicle sensing its surroundings, a computer can't reliably drive using a typical camera alone; it needs precise, real-time visualizations of the surrounding area. LiDAR provides depth that a camera can't detect, offers a full 360° horizontal view, and remains accurate at distances of up to 200 meters.

Figure 4: Rotating LiDAR sensor on a quadcopter

Even though LiDAR is great, the accuracy of an AV’s visualization of the world can be improved if LiDAR data is ‘fused’ with traditional camera information. Sensor fusion allows for a clearer image that can be sent through deep learning algorithms to identify various objects, such as other cars, people, cones, or trees.
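One small step in such a fusion pipeline is projecting a LiDAR point, expressed in the camera's coordinate frame, onto the image so it can be matched with camera pixels. A minimal sketch using a pinhole camera model; the intrinsic values (fx, fy, cx, cy) are made-up placeholders, not parameters of any real sensor:

```python
# Project a 3D point (camera frame, z pointing forward) onto the image plane
# with a pinhole camera model: u = fx * x / z + cx, v = fy * y / z + cy.

def project_to_image(point, fx=700.0, fy=700.0, cx=640.0, cy=360.0):
    """Return pixel coordinates (u, v) for a 3D point, or None if behind the camera."""
    x, y, z = point
    if z <= 0:
        return None  # point is behind the camera, so it isn't visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A point 10 m ahead and 1 m to the right lands right of image center.
print(project_to_image((1.0, 0.0, 10.0)))  # (710.0, 360.0)
```

In a real system this projection uses calibrated intrinsics and a LiDAR-to-camera extrinsic transform; once each point has a pixel location, its depth can be attached to whatever the camera-based deep learning model detects there.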

The Future of LiDAR Technology

As it stands, LiDAR will continue to be a major part of AVs' sensing systems in the coming years; but as self-driving cars improve, LiDAR tech must improve as well.

Well, that is the accepted view for most companies and experts in the field, except for Elon Musk.

Elon believes that autonomous vehicles should focus on passive optical image recognition (POIR), i.e. using only a camera-based system. While not entirely unreasonable, a POIR system could be more demanding on hardware and far less precise than LiDAR due to its lack of depth perception.

Figure 5: This car was trained solely with a reinforcement learning algorithm and a camera as its only sensor. The video shows only a basic implementation of POIR, but the system has potential for growth in the future.

How could LiDAR look in the future?

Right now, the most challenging aspect of LiDAR is its cost, with Velodyne's systems priced at $75,000 for a standard model and $8,000 for a 'budget' model. As LiDAR is the most expensive component of an autonomous vehicle, reductions in its price will significantly increase the rate at which self-driving cars can be turned into viable consumer products.

The most significant developments regarding LiDAR involve Solid State implementations, which have fewer moving parts. By converting the current LiDAR design into one with no spinning sensors, the future could hold far more compact and affordable light detection products.

Figure 6: Once sufficiently developed, LiDAR systems could be small enough to slot into the grille of an autonomous vehicle.

Engineers are addressing this by creating Solid State LiDAR devices, but they are still divided on which approach to take. There are three main options:

  • Flash LiDAR, which uses a single light source to cover the entire field of view that a mechanical LiDAR would, and has a sensor that can map out a point cloud almost instantly.
  • Phased LiDAR, which employs phased arrays (a technology borrowed from radar) to point the light in the required directions instead of rotating an entire mechanism, using thousands of individual lasers.
  • Micro-mirror LiDAR, which, though not completely Solid State, focuses a single beam that is redirected by a small rotating mirror.
Figure 7: Micro-mirror, or microelectromechanical mirror, LiDAR systems essentially spin a mirror to redirect the LiDAR beams, rather than using a mechanism that rotates the transmitter.

However, each of these Solid State solutions comes with its own set of quirks.

Flash LiDAR needs to find the right balance for the flash's strength, as it must meet both performance and safety standards: too intense a beam may cause eye damage. Additionally, the receivers currently used by mechanical LiDAR are not suited to the frequency of light used in Flash systems, and compatible receivers can cost more than a full mechanical LiDAR system.

Figure 8: Diagram representing a Flash LiDAR pulse, as one singular beam of light.

Phased LiDAR seems like a more reasonable option at the moment, as the system has been developed to be far smaller and cheaper than mechanical LiDAR; Quanergy's unit is currently priced at $800. Phased arrays are not without their problems, though, as aligning the chip stack the LiDAR is composed of requires extreme precision. While solutions are being worked on, the method is far from perfect.

Figure 9: How phased array systems work in LiDAR, to point the rays in a certain direction.

Micro-mirror LiDAR, more formally known as microelectromechanical system (MEMS) LiDAR, can bring extreme reductions in form factor for light-detecting sensors. Since only the mirror needs to move and be adjusted, there is essentially just one moving part. LiDAR systems require intensive calibration and need to stay precise, but MEMS solutions are more fragile than their counterparts, which can force a re-calibration after an autonomous vehicle drives quickly over a bumpy road.

What we learned through all this is that no LiDAR system is perfect at the moment, and that we’re still a long way from creating the ideal light detection device.

It might take three or four years, but Solid State LiDAR is quickly advancing, and should soon become more accessible and more precise than mechanical equivalents.

When that day comes, we’ll be another step closer to realizing the vision of an autonomous-driven future; but for all we know, we could be inching ever closer to the robot revolution…

Please feel free to contact me through email for any inquiries or corrections in the article. Feedback is always appreciated as well!

E-mail: vedaant.varshney@gmail.com
