LIDAR vs. Camera — Which Is The Best for Self-Driving Cars?

Vincent T.
Published in 0xMachina
6 min read · Sep 15, 2020


The ongoing debate among self-driving car industry experts is whether LiDAR (Light Detection and Ranging) or cameras are better suited for SAE Level 4 and Level 5 driving. The question is whether to use LiDAR alongside camera systems or to rely on camera systems alone. LiDAR’s proponents include Waymo, Cruise, Uber and Velodyne; Tesla has been the least supportive of LiDAR, favoring camera systems instead. Which is the best solution? It comes down to which is better at detecting and recognizing objects, since a self-driving car needs a way to identify what it sees on the road. Is one better than the other?

A self-driving car using Cruise Automation LiDAR (Source: Wikipedia)

Tesla seems to be outnumbered, with more companies and developers in support of LiDAR. Elon Musk has explained his reasoning for not including LiDAR with his Tesla car models. You won’t find LiDAR on the Model 3 or Model S, but you will see it on Waymo’s robo-taxis. So far, neither technology has been universally accepted as the solution for self-driving cars since there are no fully autonomous vehicles on the road that have achieved Level 4 or Level 5 status (as of this writing in September 2020).

A Tesla Model S uses Autopilot self-driving feature with camera systems (no LiDAR).

LiDAR

The use of LiDAR is not exclusive to self-driving cars. It has a variety of applications, including meteorology, seismology, geology and atmospheric physics, among others. LiDAR uses pulses of light to detect objects, much like how radar works using radio waves. These pulses can determine the distance and range of an object, providing much-needed data to self-driving cars. For example, to avoid a collision, the LiDAR can detect the distance to an object so the car can apply the brakes and slow down. LiDAR is a proven technology for measuring distance, and it is for this reason that engineers have used it in different applications, including self-driving cars.
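The distance measurement behind LiDAR is a simple time-of-flight calculation: a pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and sample timing are illustrative):

```python
# Time-of-flight ranging: a LiDAR pulse travels to the object and back,
# so the one-way distance is (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the one-way distance in metres for a round-trip pulse time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds came from an object ~30 m away.
print(round(distance_from_time_of_flight(200e-9), 1))  # prints 30.0
```

Because light travels so fast, the timing electronics must resolve nanoseconds, which is part of why LiDAR hardware is demanding to build.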

LiDAR can help self-driving cars create a visual map based on the readings it receives from the light pulses. The LiDAR system sends thousands of pulses every second, and on-board software turns the returns into a 3D map that gives the car information about its surroundings. This provides a 360-degree view that helps the car drive in any type of condition. LiDAR is used in coordination with cameras in self-driving cars, so it is not a standalone solution in itself.
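Each pulse return can be converted into a 3D point from the measured range and the beam’s direction, and accumulating thousands of these points per second yields the map. A simplified sketch of that conversion (the angle conventions are an assumption; real sensors vary):

```python
import math

def polar_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range + beam angles) to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Sweeping the beam around the car builds up a point cloud of the scene.
point_cloud = [polar_to_cartesian(10.0, az, 0.0) for az in range(0, 360, 90)]
```

The “3D map” the article describes is, in essence, a large cloud of such points that the software then segments into ground, obstacles and other road users.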

LiDAR can create a visual map of its surroundings (Source: Automotive World)

Cameras

If a car is to drive like a human, then visual recognition of objects is the way to go. That is the main argument for using camera systems. Cameras provide images which AI-powered software can analyze with a high level of accuracy. The cameras on Tesla models are used by its Autopilot self-driving feature to provide a 360-degree view of the car’s surroundings. It is all visual and does not rely on ranging and detection like LiDAR.

Instead of light pulses, cameras feed the visual data captured by the lens optics to on-board software for further analysis. With the developments in neural networks and computer vision algorithms, objects can be identified to provide the car with information while it is driving. This helps the car avoid collisions, slow down when there is traffic, safely make lane changes and even read text from road or highway signs using OCR (Optical Character Recognition). So far Tesla has shown that self-driving cars can perform without LiDAR by using cameras.
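Tesla has not published its pipeline, but a typical camera-based detector outputs bounding boxes with confidence scores, and overlapping duplicate detections of the same object are commonly pruned with non-maximum suppression. A simplified sketch of that step (box format and thresholds are assumptions, not Tesla’s implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep the highest-confidence box among heavily overlapping detections.

    detections: list of (box, score) pairs, box = (x1, y1, x2, y2).
    """
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept
```

Steps like this run continuously on every frame, which is why camera-only approaches lean so heavily on the on-board compute the article mentions.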

Tesla Vision system using cameras with Autopilot (Source: Tesla)

The PROS

Elon Musk hails cameras as the most reliable type of vision system. Cameras have the advantage in visual recognition, working with AI to identify objects on the road. They can also read text from road signs, which is important in situations where the self-driving car must be aware of detours and road work ahead. The cameras on Tesla’s cars combine optics with computer vision to provide computational imaging that continuously analyzes what the cameras see.

LiDAR systems plot out points on a virtual map in real time, using light pulses. Autonomous vehicles, or self-driving cars, can use this data to safely navigate and avoid hitting objects. Being able to detect objects and determine their distance is a strong point for LiDAR. LiDAR systems with a high level of accuracy and reliability can improve safety, which is one of the main talking points about self-driving cars.

The CONS

LiDAR has been hailed for being able to see objects even in hazardous weather conditions, but it is not always reliable. LiDAR is affected by wavelength stability and detector sensitivity. The laser’s wavelength can be affected by variations in temperature, while a poor SNR (Signal-to-Noise Ratio) affects the sensors in the LiDAR detector. LiDAR is also more expensive and requires more space to implement on cars, so it tends to make self-driving cars look bulkier. Another issue with LiDAR is visual recognition, something that cameras are much better at; LiDAR requires much more data processing in software to create images and identify objects.

Cameras, while more reliable as a vision system, don’t have the range-detecting capability of LiDAR. While cameras are good at imaging, they probably won’t be sufficient as a standalone system. This is why Tesla also uses other sensors, including radar, to detect range and distance. Critics say that cameras still cannot see well enough to avoid danger, especially in bad weather. They need to be able to see accurately in any type of condition, like a human driver.

Safety is going to be a primary concern if self-driving cars are ever to hit the road legally. Fatalities have already occurred with self-driving cars using both systems. Tesla drivers using Autopilot have been involved in accidents, including fatalities on US highways. Some of these were due to distracted driving, since these cars are not fully self-driving and still require the driver’s attention. Uber made the news in 2018 when one of its self-driving cars hit a pedestrian, who died at the scene. Because these cars are not yet fully self-driving, there could also be neglect on the driver’s part. These are the pitfalls in the development of self-driving cars.

CONCLUSION

If safety is our primary concern, then sensor fusion incorporating the best elements of LiDAR and camera systems will be necessary. A combination of LiDAR and other sensors, including cameras, can offer a lot in terms of public safety. LiDAR may become unnecessary if the vision system (e.g. software, sensors) becomes accurate and reliable enough for public safety. After all, one of the main reasons for self-driving cars is to minimize accidents caused by human error.

The common factor between LiDAR and cameras in self-driving cars is the software. Both systems use AI techniques like machine learning and neural networks to analyze data. As the algorithms get better, the results should also lead to more accuracy in identifying objects and allow self-driving cars to make better decisions. That could spell the difference between an accident and safe driving.

This is not a simple problem with a simple answer. Machines are not wired to think the same way as humans when it comes to critical decisions. Improving that requires more data and more training of the software. The current infrastructure will probably need to be modified as well, to accommodate self-driving cars (e.g. V2X). Until self-driving cars have shown consistent data pointing to the use of one technology over the other, the debate remains open.

