Engineer Explains: Lidar

This is how robots see.

[1]

Imagine standing in a dark room where the only way you can sense the environment around you is by reaching out to objects with a stick. First, you reach straight in front of you, and the stick goes 12 feet before hitting a solid object. Then you extend the stick to your right, and it stops after 8 feet. Next you try to your left, and you get 12 feet. Behind you, the stick goes 18 feet. Now, even though you can’t see anything, and you haven’t moved, you have some information about the room.

If you repeated this hundreds or thousands of times in different directions (and had a really good memory), you would be able to produce a rough representation of the room based on how far away objects are from you.

[1] 2D Scan of the Walls of a Room
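To make the analogy concrete, here is a minimal sketch (in Python) of how range-and-bearing measurements become 2D points. The four samples are the ones from the dark-room example above:

```python
import math

# Each "stick" measurement from the analogy: a bearing (radians, measured
# counterclockwise from straight ahead) and a distance in feet.
measurements = [
    (math.radians(0),   12.0),  # straight ahead: 12 feet
    (math.radians(-90),  8.0),  # to the right: 8 feet
    (math.radians(90),  12.0),  # to the left: 12 feet
    (math.radians(180), 18.0),  # behind: 18 feet
]

def polar_to_cartesian(angle, distance):
    """Convert one range measurement into an (x, y) point, with the
    observer at the origin and x pointing straight ahead."""
    return (distance * math.cos(angle), distance * math.sin(angle))

points = [polar_to_cartesian(a, d) for a, d in measurements]
print(points)  # four points sketching the outline of the room
```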

If you angled the stick above and below horizontal, you would even be able to “see” objects around you, like chairs and doors, based on their outlines. From this information, you could produce something called a “point cloud”: a set of points in a 3D coordinate system. With enough points, you could make a really detailed point cloud of the room, like this:

[2] 3D Scan of a Wall and Drinking Fountain
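Extending the sketch to 3D takes just one more angle. A minimal version, assuming a simple spherical-coordinate convention (x forward, z up):

```python
import math

def spherical_to_point(azimuth, elevation, r):
    """Convert an (azimuth, elevation, range) measurement into an (x, y, z)
    point. Azimuth sweeps around the horizon; elevation tilts the "stick"
    above (+) or below (-) horizontal."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)

# A point cloud is just many of these points collected together. The
# constant 10 m range here is a stand-in for real measured distances.
cloud = [spherical_to_point(math.radians(az), math.radians(el), 10.0)
         for az in range(0, 360, 5)   # sweep around the horizon
         for el in (-10, 0, 10)]      # a few tilt angles
```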

Lidar (a portmanteau of “light” and “radar,” which also stands for Light Detection and Ranging) is a sensor designed to build these point clouds quickly. By using light to measure distance, Lidar is able to sample points extremely fast, up to 1.5 million data points per second. This sampling rate has enabled the technology to be deployed in applications such as autonomous vehicles.

How it works:

Lidar measures the time of flight of a pulse of light to determine the distance between the sensor and an object. Imagine starting a stopwatch when the pulse of light is emitted, and then stopping the timer when the pulse returns (after reflecting off the first object it encounters). Because the measured time covers the trip out and back, the distance is half the product of the speed of light and the time of flight. Light travels at 300 million meters per second (186,000 miles per second), so very high precision timing equipment is needed to turn these tiny intervals into distance data.

[3] Lasers as a Fancy Measuring Stick
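As a back-of-the-envelope sketch, the distance calculation is one line of arithmetic; note the division by two, since the measured time covers the round trip:

```python
SPEED_OF_LIGHT = 3.0e8  # meters per second (approximately)

def distance_from_time_of_flight(round_trip_seconds):
    """The pulse travels out to the object and back, so the one-way
    distance is half of (speed of light * elapsed time)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An object only 15 meters away returns the pulse in about 100 nanoseconds,
# which is why such high-precision timing hardware is needed.
print(distance_from_time_of_flight(100e-9))  # -> 15.0 (meters)
```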

To produce complete point clouds, the sensor must be able to sample the entire environment very quickly. One way that Lidar does this is by using a very high sampling rate on the individual emitters/receivers. Each one emits tens or hundreds of thousands of laser pulses every second, so within one second, as many as 100,000 pulses complete a round trip from the emitter, out to the object being measured, and back to the receiver mounted near the emitter. Large systems have as many as 64 of these emitter/receiver pairs, or “channels”. Multiple channels enable the system to generate more than a million data points per second.
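The throughput figure falls out of simple multiplication. A sketch, with a pulse rate assumed from the “tens of thousands” range quoted above:

```python
pulses_per_channel_per_second = 20_000  # assumed; "tens of thousands" per the text
channels = 64                           # emitter/receiver pairs on a large unit

points_per_second = pulses_per_channel_per_second * channels
print(f"{points_per_second:,}")  # 1,280,000 -> "more than a million points per second"
```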

However, 64 stationary channels aren’t enough to map an entire environment; they would just give very clear resolution in a few focused areas. The precision required in the optics makes each channel expensive, so pushing the channel count above 64 drives cost up quickly. Instead, many Lidar systems use rotating assemblies or rotating mirrors so the channels can sweep 360 degrees around the environment. A common strategy is to angle each emitter/receiver pair above or below horizontal, blanketing more of the environment in the lasers’ field of view. The Velodyne 64-channel Lidar system, for example, has a 26.8° vertical field of view (the rotation gives it a 360° horizontal field of view). From 50 meters away, this Lidar could see the top of an object that is 12 meters tall.

[4] Velodyne HDL-64E Lidar System
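The 12-meter figure follows from basic trigonometry if you assume the vertical field of view is centered on horizontal (an assumption to keep the arithmetic simple; real units often aim most channels downward):

```python
import math

vertical_fov_deg = 26.8  # HDL-64E vertical field of view, per the text above
distance_m = 50.0

# Assume half the field of view is above horizontal.
half_fov = math.radians(vertical_fov_deg / 2)
visible_height_m = distance_m * math.tan(half_fov)
print(round(visible_height_m, 1))  # ~11.9 m, matching the ~12 m figure above
```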

Below, you can see distinct bands of points in the point cloud, each corresponding to one channel of the Lidar unit, and the data fidelity dropping off with distance. Resolution is higher for closer objects: because the angle between emitters is fixed (for example, 2 degrees), the spacing between the bands grows as the distance from the sensor increases.

[5] Point Cloud Generated by a Spinning, Multi-Channel Lidar System
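The widening bands are the same trigonometry at work. Using the example 2-degree gap between channels, the spacing grows linearly with range:

```python
import math

channel_separation_deg = 2.0  # example gap between adjacent channels

def band_spacing_m(distance_m):
    """Approximate vertical gap between adjacent channel bands on a
    surface at the given distance."""
    return distance_m * math.tan(math.radians(channel_separation_deg))

for d in (5, 20, 50):
    print(f"{d} m -> {band_spacing_m(d):.2f} m between bands")
# 5 m -> 0.17 m, 20 m -> 0.70 m, 50 m -> 1.75 m: resolution drops with range
```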

Applications of Lidar Systems:

The point cloud can be used to reproduce 3D models of landscapes or environments. A few applications include:

· geological mapping/imaging to monitor erosion or other changes

· monitoring growth of plants and trees

· doing surveying work for construction projects

· making accurate volumetric estimates of landfills

Probably the most common application, and one that you may have seen, is a Lidar system integrated into an autonomous vehicle, such as this episode of Top Gear in which a truck uses a Lidar system to navigate off-road autonomously.

[6] Autonomous Truck on Top Gear

Below, you can see the point cloud of the landscape, plus additional features: green and red boxes that distinguish objects that can be driven over, like plants, from objects that shouldn’t be driven over, like rocks, trees, and cars. Separate software elements take in the raw point cloud and categorize the obstacles.

[6] Point Cloud and Obstacle Boxes from the Top Gear Segment
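As a toy illustration of that categorization step (not the truck’s actual software), one simple heuristic is to flag any point cluster that rises too far above the ground:

```python
def classify_obstacle(cluster_points, max_drivable_height_m=0.3):
    """Toy stand-in for the categorization software: if everything in a
    cluster sits below some height above ground, call it drivable (grass,
    small plants); otherwise flag it as something to steer around.
    The 0.3 m threshold is made up for illustration."""
    top = max(z for x, y, z in cluster_points)
    return "drivable" if top <= max_drivable_height_m else "avoid"

print(classify_obstacle([(4.0, 1.0, 0.20), (4.1, 1.1, 0.25)]))  # -> drivable
print(classify_obstacle([(6.0, -2.0, 1.4), (6.1, -2.0, 1.5)]))  # -> avoid
```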

Lidar systems have found their way into humanoid robots as well — as can be seen in this video from Boston Dynamics:

[7] Boston Dynamics Atlas

In the rest of the video (link below), the robot uses other sensors as well, such as the optical cameras that read the QR-like codes, in addition to the Lidar system in its head.

Another example of a Lidar application is a sensor mounted on a drone with its spin axis horizontal, so that the scan plane sweeps the ground below and produces a contour map. Point cloud data from the Lidar is combined with position and orientation data from the drone itself to produce these contours.

[8] Phoenix Aerial Systems Drone Mapping the Ground
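A hedged sketch of that combination step: each sensor-frame point is rotated by the drone’s heading and shifted by its position. A real pipeline would apply the full roll/pitch/yaw orientation with synchronized GPS/IMU timestamps; this version uses yaw only to keep the idea visible:

```python
import math

def sensor_point_to_world(drone_xyz, drone_yaw_rad, point_xyz):
    """Transform a sensor-frame point into world coordinates using the
    drone's position and heading (yaw-only rotation; production systems
    use the full 3D orientation)."""
    px, py, pz = point_xyz
    c, s = math.cos(drone_yaw_rad), math.sin(drone_yaw_rad)
    return (drone_xyz[0] + c * px - s * py,
            drone_xyz[1] + s * px + c * py,
            drone_xyz[2] + pz)

# A return 3 m below the drone becomes a ground elevation in the world frame.
print(sensor_point_to_world((100.0, 200.0, 50.0), math.radians(90), (0.0, 0.0, -3.0)))
# -> (100.0, 200.0, 47.0)
```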

The Challenges:

Since Lidar is based on measuring the time it takes for a laser pulse to return to the sensor, highly reflective surfaces pose issues. Most materials are rough at the microscopic level and scatter light in all directions; a small portion of this scattered light makes its way back to the sensor, and that is sufficient to generate the distance data. If a surface is very reflective, however, the light bounces off specularly (mirror-like), usually away from the sensor, and the point cloud is incomplete for that area.

Atmospheric conditions can also affect Lidar readings. Heavy fog and rain are documented to pose issues for Lidar systems by scattering or otherwise attenuating the emitted laser pulses. Higher power lasers can help alleviate these issues, but they are poor solutions for smaller, mobile, or otherwise power-sensitive applications.

Another challenge for Lidar systems is the relatively slow refresh rate of spinning units. The refresh rate is limited by how fast the complicated optics can rotate; roughly 10 Hz (10 revolutions per second) is the fastest these systems spin, so that caps the refresh rate of the data stream. A car moving at 60 miles per hour travels 8.8 feet in the 1/10th of a second the sensor takes to rotate, so the sensor is essentially blind to changes that happen over those 8.8 feet. Perhaps more importantly, the range of Lidar (in perfect conditions) is 100–120 meters (less than 400 feet), which equates to less than 4.5 seconds of travel time for a car moving at 60 mph.
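The numbers in that paragraph come straight from unit conversion, as in this sketch:

```python
MPH_TO_FT_PER_S = 5280 / 3600  # feet per second per mile-per-hour
METERS_TO_FEET = 3.28084

speed_fps = 60 * MPH_TO_FT_PER_S       # 88 ft/s at 60 mph
blind_distance_ft = speed_fps / 10     # distance covered per 10 Hz revolution
print(round(blind_distance_ft, 1))     # -> 8.8 feet

best_case_range_ft = 120 * METERS_TO_FEET        # ~394 ft
print(round(best_case_range_ft / speed_fps, 1))  # -> ~4.5 seconds of warning
```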

Perhaps the largest challenge for Lidar to overcome is the high cost of the device. Although cost has been dropping dramatically since the introduction of the technology, it remains a significant barrier to adoption. For the mainstream automotive industry, a $20,000 sensor is not going to be accepted by the market. Elon Musk has said: “I just don’t think it makes sense in a car context. I think it’s unnecessary.”

Finally, although we consider Lidar a computer vision component, the point cloud representations are based purely on geometry. The human eye, in contrast, uses other physical properties like color and texture in addition to shape. A Lidar system today can’t tell the difference between a paper bag and a rock, a distinction that should factor into how a vehicle interprets and tries to avoid the obstacle.

The Opportunities:

There are still many opportunities for Lidar within the intelligent machine ecosystem. Compared with 2D images, a point cloud is much easier for a computer to use to build 3D representations of the physical environment. While 2D images are the most easily digestible data for human brains, point clouds are among the easiest for computer brains to interpret.
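To see why, consider a question an autonomous system asks constantly: “how far is the nearest thing in front of me?” With a point cloud, that is a direct geometric query, as the sketch below shows; with a 2D image, depth would have to be inferred first. The 15-degree cone width here is an arbitrary choice for illustration:

```python
import math

def nearest_obstacle_ahead(cloud, half_angle_deg=15.0):
    """Return the horizontal distance to the closest point inside a
    forward-facing cone, or None if the cone is empty."""
    half = math.radians(half_angle_deg)
    ahead = [math.hypot(x, y) for x, y, z in cloud
             if x > 0 and abs(math.atan2(y, x)) <= half]
    return min(ahead, default=None)

print(nearest_obstacle_ahead([(10.0, 1.0, 0.0), (3.0, 0.2, 0.0), (-5.0, 0.0, 0.0)]))
# -> ~3.01 (nearest of the two points in the cone; the point behind is ignored)
```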

Scanse (www.scanse.io) has released a $250 2D Lidar scanner called “Sweep” which can be used outdoors and is designed for mobile, low-power applications. At nearly a quarter of the cost of competitors, this will enable fundamentally new applications for the sensors (a phenomenon we have seen with many other types of sensors as well). The 2D Lidar can also be attached to a second rotary element to generate complete 3D point clouds of environments.

[9] 3D Environment Produced with Scanse Sweep

The Scanse Sweep is available for pre-sale until April 11th.

Other companies are pursuing different strategies for lowering system cost, such as Quanergy’s solid state Lidar. The principle is the same as explained above; however, instead of using spinning optics to sweep many beams, Quanergy uses something called “phased array optics” to steer the direction of the laser pulses. As a result, the system can release one laser pulse in one direction, and the next pulse (a microsecond later) can be aimed somewhere else in the field of view. This allows real-time focusing on areas where something seems to be moving, analogous to how a human driver would focus attention on an obstacle about to enter the roadway. The Quanergy system is designed to do this without any mechanical motion, allowing it to sample around a million data points per second, on par with 64-channel spinning Lidar counterparts but at a fraction of the cost. An added benefit is that these sensors are more easily integrated into other components of the automobile, like mirrors and bumpers.

[10] Prototype Quanergy Lidar System
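A toy sketch of that scheduling idea (not Quanergy’s actual firmware, whose details aren’t in the source): because each pulse can be aimed independently, the scanner can spend a larger share of its pulse budget on a region where motion was detected. The field of view, region of interest, and 50/50 split are all made up for illustration:

```python
import random

# Directions a hypothetical solid-state scanner can aim (azimuth, elevation).
field_of_view = [(az, el) for az in range(-60, 61, 2) for el in range(-10, 11, 2)]

# Suppose motion was detected near the right edge of the road.
region_of_interest = [(az, el) for az, el in field_of_view if 20 <= az <= 40]

def next_pulse_direction():
    """Spend half the pulse budget revisiting the region of interest and
    the other half sweeping the whole scene."""
    pool = region_of_interest if random.random() < 0.5 else field_of_view
    return random.choice(pool)

schedule = [next_pulse_direction() for _ in range(1000)]
```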

On the other end of the scale, larger and higher power systems are being developed that can image the ground from an aircraft flying at 30,000 feet, with resolution good enough to pick out vehicles on the ground. While these systems will be lower in demand and higher in cost, developments on this front will continue to lower the cost of the sensor technology as a whole.

Conclusions:

Lidar is only one of many sensors used to give computers data about the physical environment, but the data it produces is some of the easiest for a computer to interpret. And the sensors are getting cheaper, too: according to Velodyne director of sales and marketing Wolfgang Juchmann, the cost of Lidar has decreased tenfold in the past 7 years [11]. We are continually seeing new areas of potential application thanks to these price reductions.

In future articles, we will discuss some of the other advances in intelligent machine technologies that are driving this new industrial revolution.

References:

1 — https://www.kickstarter.com/projects/scanse/sweep-scanning-lidar

2 — http://www-video.eecs.berkeley.edu/research/indoor/

3 — http://www.rocksense.ca/Research/LiDARTechnology.html

4 — http://velodynelidar.com/hdl-64e.html

5 — http://pointclouds.org/documentation/tutorials/hdl_grabber.php

6 — https://www.youtube.com/watch?v=1pl_Pont_Zk

7 — https://www.youtube.com/watch?v=rVlhMGQgDkY

8 — https://www.youtube.com/watch?v=BhHro_rcgHo

9 — https://www.kickstarter.com/projects/scanse/sweep-scanning-lidar

10 — http://www.quanergy.com/

11 — http://articles.sae.org/13899/