High-Fidelity Sensor Calibration for Autonomous Vehicles

Woven Planet Level 5 · Aug 14, 2019 · 7 min read

By Ashesh Jain and Lei Zhang, Engineering Managers; and Li Jiang, Software Engineer

Sensor calibration is one of the less talked about foundational building blocks of an autonomous vehicle (AV). In simple terms, sensor calibration tells the AV, to a high degree of accuracy, how its sensors are positioned on the vehicle. This allows the AV to understand the world and its position in the world from multiple sensors, such as LiDARs, radars, cameras, and inertial measurement units (IMUs), by bringing their readings into a common coordinate frame. Accurate calibration information is critical to mapping, localization, perception, and control. It serves as the requisite pre-processing step before the sensor-fusion deep learning algorithms are engaged, and it enables machine learning models to understand how a region of the world looks from the perspective of different sensors.

Calibration as a pre-processing step before AI algorithms. Accurate sensor calibration can dramatically simplify downstream machine learning tasks.

In this post, we will share our perspective on the importance of sensor calibration for AVs and give an inside look at what the calibration process looks like and the tradeoffs involved.

A means to an end

The typical AV sensor suite consists of cameras, LiDARs, radars, and IMUs. The goal of calibration is to find the transformations between these sensors and the vehicle. These transformations need to be estimated to a high degree of accuracy, within a few millimeters and milliradians. In addition, calibration also recovers the intrinsics of the sensors, e.g. the lens distortion of the cameras [1] or the bias in the accelerometer and gyroscope of the IMU. This information is an essential input to the perception, mapping, localization, and control modules on the AV.
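As a concrete illustration of what an extrinsic calibration encodes, the sketch below applies a rigid-body transform (a rotation plus a translation) to map points from one sensor frame into another. The numbers are made-up placeholders, not values from our vehicles.

```python
import numpy as np

# Illustrative extrinsics: the pose of a camera expressed in the top-LiDAR frame.
# The rotation and translation below are made-up placeholders, not real values.
R_lidar_from_cam = np.array([
    [0.0,  0.0, 1.0],
    [-1.0, 0.0, 0.0],
    [0.0, -1.0, 0.0],
])                                              # camera axes expressed in the LiDAR frame
t_lidar_from_cam = np.array([1.2, 0.0, -0.3])   # meters

def camera_to_lidar(points_cam: np.ndarray) -> np.ndarray:
    """Map Nx3 points from the camera frame into the LiDAR frame."""
    return points_cam @ R_lidar_from_cam.T + t_lidar_from_cam

# A point 10 m in front of the camera, re-expressed in the LiDAR coordinate frame.
print(camera_to_lidar(np.array([[0.0, 0.0, 10.0]])))   # -> [[11.2  0.  -0.3]]
```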

Sensor calibration is a central piece in AVs. Low-quality calibration can lead to the proverbial “garbage-(data)-in & garbage-(results)-out”

Perception

AV perception systems identify agents on the road, such as cars, pedestrians, and cyclists. They combine data from multiple sensors to obtain the high-precision output needed to support the AV. For example, the perception system will combine the detection of a pedestrian from a camera with the detection of the same agent from LiDAR. Taking this one step further, for an agent at a distance of 100 meters from the AV, a calibration accuracy of ~0.2 degrees in rotation is needed to reliably fuse measurements from multiple sensors. This is why calibration is critical to accurate perception and AV function.
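To see where a number like ~0.2 degrees comes from: a small rotational error sweeps out an increasing lateral offset with range. The quick back-of-the-envelope check below uses the 100-meter figure from above and the 1.8-degree error shown in the figure that follows.

```python
import math

# Lateral offset at a given range caused by a rotational calibration error.
def lateral_error_m(range_m: float, error_deg: float) -> float:
    return range_m * math.tan(math.radians(error_deg))

print(lateral_error_m(100.0, 0.2))  # ~0.35 m: about the width of a pedestrian
print(lateral_error_m(100.0, 1.8))  # ~3.1 m: roughly a full lane width
```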

Projection of point cloud on the camera image, with points color-coded by their distance. (Left) Good calibration: the point cloud aligns well with the edges in the image. (Right) Bad calibration: Same image but with a calibration error of 1.8 degrees. The edges in the image do not align well with the edges in the LiDAR point cloud.
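For readers who want to reproduce this kind of overlay, the minimal sketch below projects LiDAR points into an image with a simple pinhole model. The intrinsics and extrinsics are illustrative placeholders, and lens distortion (which the intrinsic calibration [1] also models) is ignored.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point, in pixels).
K = np.array([
    [1000.0,    0.0, 960.0],
    [   0.0, 1000.0, 600.0],
    [   0.0,    0.0,   1.0],
])
# Illustrative LiDAR-to-camera extrinsics (placeholder values).
R_cam_from_lidar = np.array([
    [0.0, -1.0,  0.0],
    [0.0,  0.0, -1.0],
    [1.0,  0.0,  0.0],
])
t_cam_from_lidar = np.array([0.0, -0.3, -1.2])

def project_lidar_points(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points into pixel coordinates (lens distortion ignored)."""
    points_cam = points_lidar @ R_cam_from_lidar.T + t_cam_from_lidar
    points_cam = points_cam[points_cam[:, 2] > 0.1]   # keep points in front of the camera
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]                   # normalize by depth

# A LiDAR return 20 m ahead, 1 m to the left, 0.5 m up (x forward, y left, z up).
print(project_lidar_points(np.array([[20.0, 1.0, 0.5]])))
```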

Mapping

Level 4 autonomous driving requires High Definition (HD) maps, which are typically built by a fusion of LiDAR, camera, and IMU data. These HD maps contain precise location information of the world semantics, such as lane boundaries, traffic lights, stop signs, etc. Building an HD map usually involves inferring semantics from camera images, and correlating them against LiDAR data in order to accurately determine their position in the world. Therefore, accurate sensor calibration is a prerequisite to high-fidelity mapping. For example, when an AV navigates through a traffic intersection, it uses the calibration information to match traffic lights against the high-fidelity map so it can decide whether to yield to a certain traffic light or not. If the calibration were to be off by just a few degrees, the AV could potentially confuse the red light in its lane with the green light in the next lane.
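To put a rough number on that failure mode: adjacent traffic lights are typically about a lane width apart, so their angular separation shrinks with range, and at typical intersection approach distances a calibration error of a few degrees is enough to project the map position of one light onto its neighbor. The figures below are illustrative.

```python
import math

def angular_separation_deg(lateral_offset_m: float, range_m: float) -> float:
    """Angle between two objects separated laterally by lateral_offset_m at range_m."""
    return math.degrees(math.atan2(lateral_offset_m, range_m))

# Two traffic lights roughly one lane width (~3.5 m) apart, viewed from different ranges.
for range_m in (20.0, 40.0, 60.0):
    print(range_m, round(angular_separation_deg(3.5, range_m), 1))
# -> 9.9, 5.0, and 3.3 degrees: at longer ranges a calibration error of a few degrees
#    can shift the projected map position of one light onto its neighbor.
```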

Localization

The goal of localization is to accurately and precisely estimate the position of the vehicle on the HD map in real time. A high-quality localization system usually combines data from the IMU, LiDAR(s), wheel odometry, and cameras for this purpose. In a nutshell, the localization module correlates the motion of the AV across different sensors in order to obtain a reliable estimate of the AV’s actual position on the map. Here, localization accuracy is highly dependent on the accuracy of the calibration between the IMU and the LiDAR, i.e. the rotation and translation components from IMU to LiDAR. Inaccuracies in this calibration could manifest as the AV not knowing precisely where it is on the road. For example, the AV could estimate its position to be in the rightmost lane while its true position is the middle lane. Not knowing its position on the road accurately could lead it to follow incorrect rules of the road.
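To make that dependence concrete, the toy example below (all values made up) chains the vehicle pose with the IMU-to-LiDAR extrinsics and perturbs the extrinsic yaw by half a degree; a landmark observed 60 meters ahead then lands roughly half a meter away from where it should on the map, which is exactly the kind of bias that pulls the pose estimate toward the wrong lane.

```python
import numpy as np

def yaw_transform(yaw_rad: float, t) -> np.ndarray:
    """4x4 homogeneous transform: rotation about z by yaw_rad, then translation t."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

T_map_from_imu = yaw_transform(0.0, [100.0, 50.0, 0.0])                  # vehicle pose on the map
T_imu_from_lidar = yaw_transform(0.0, [1.0, 0.0, 1.8])                   # true extrinsics
T_imu_from_lidar_bad = yaw_transform(np.radians(0.5), [1.0, 0.0, 1.8])   # 0.5-degree yaw error

landmark_lidar = np.array([60.0, 0.0, 0.0, 1.0])   # landmark observed 60 m ahead of the LiDAR
good = T_map_from_imu @ T_imu_from_lidar @ landmark_lidar
bad = T_map_from_imu @ T_imu_from_lidar_bad @ landmark_lidar
print(np.linalg.norm(good - bad))   # ~0.52 m apparent shift of the landmark on the map
```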

Level 5’s philosophy on calibration

One choice we had to make in the early stages of system development was whether to focus on factory calibration or online calibration. Factory calibration leverages a well-structured environment with many markers that can be easily detected by the sensors, which can lead to highly accurate and reliable calibration. Online calibration, on the other hand, is more scalable, but it may not be as reliable when the vehicle is in the field. While we believe both are important, at the beginning of our program we concentrated first on creating a well-understood, reliable factory calibration. This serves as invaluable ground truth for developing online calibration algorithms.

Factory calibration: a vehicle is parked on a car-turner surrounded by calibration targets.

Maintaining the calibration of an AV fleet required us to build a significant amount of tooling. And as our fleet size grew, we needed the ability to interpret calibration results quickly. We did this in two ways:

  • First, we divided the calibration of the entire AV into independent sensor-pair calibrations, and then combined the pairwise solutions into the final calibration for the whole vehicle (a schematic of this staged approach is sketched after the figure below). This approach allows us to put sanity checks and debug information at the output of each stage of the calibration, as necessary.
  • Second, we provided tools to easily pull the debug information from any stage of the calibration process to root cause any issues seen in the factory results.
Visualization tool used to check the accuracy of LiDAR segmentation for each target plane.
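As a rough sketch of how such staging and checking can be organized (this is schematic, not our production pipeline; the names and thresholds are invented), each sensor-pair stage returns its solution together with an error metric and debug artifacts, and the runner stops as soon as a sanity check fails:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StageResult:
    name: str                 # e.g. "front_camera<->top_lidar"
    extrinsics: object        # the estimated transform for the sensor pair
    error_metric: float       # residual used by the sanity check
    debug_artifacts: dict     # segmented targets, residual plots, etc.

def run_pairwise_calibration(stages: list[tuple[str, Callable[[], StageResult], float]]):
    """Run each sensor-pair stage, apply its sanity check, and collect debug output."""
    results = []
    for name, calibrate_pair, max_error in stages:
        result = calibrate_pair()
        if result.error_metric > max_error:
            raise RuntimeError(
                f"{name}: error {result.error_metric:.4f} exceeds {max_error}; "
                f"see debug artifacts {list(result.debug_artifacts)}"
            )
        results.append(result)
    return results
```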

At Level 5, we need to identify the root cause of issues quickly. For example, if an AV drifts to one side of the lane, we need to know whether this is due to an error in localization, mapping, or calibration. It is therefore extremely useful to be able to instantly spot calibration inaccuracies. We achieve this in three ways:

  • First, we use a validation process for every factory calibration to ensure that each calibration file has an acceptable calibration error (a minimal version of such a check is sketched below).
  • Second, we monitor dashboards to track the calibration and validation metrics of every vehicle.
  • Third, we utilize online calibration diagnostics to continuously monitor calibration quality of vehicles in the field.
Our calibration system consists of factory calibration and validation, online calibration, and cloud infrastructure.
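A minimal version of the validation gate from the first bullet might look like the sketch below. The metric names and thresholds are invented for illustration, and in practice the resulting record would feed the fleet dashboards mentioned above.

```python
import json

# Hypothetical acceptance thresholds for validation metrics (names are illustrative).
THRESHOLDS = {"camera_reprojection_px": 1.0, "lidar_plane_rms_m": 0.01}

def validate_calibration(metrics: dict, vehicle_id: str) -> dict:
    """Compare validation metrics against thresholds and produce a dashboard record."""
    failures = {k: v for k, v in metrics.items() if v > THRESHOLDS.get(k, float("inf"))}
    record = {"vehicle": vehicle_id, "metrics": metrics,
              "passed": not failures, "failures": failures}
    print(json.dumps(record))   # in practice this record would be shipped to the cloud backend
    return record

validate_calibration({"camera_reprojection_px": 0.6, "lidar_plane_rms_m": 0.004}, "av-042")
```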

Today

Our AVs use multiple cameras, LiDARs, radars, and an IMU. As we mentioned earlier, we prefer breaking the whole-vehicle calibration problem down into paired sensor calibration problems. So, we first calibrate each individual sensor’s intrinsic parameters, then independently calibrate each sensor’s extrinsics relative to the top LiDAR. This allows us to identify and isolate any calibration error, and to debug calibration quality independently of the number of sensors deployed.
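One convenient property of calibrating every sensor against the top LiDAR is that the transform between any other pair falls out by composition. A minimal sketch with made-up values:

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Illustrative pairwise results, each expressed relative to the top LiDAR.
T_toplidar_from_cam = make_transform(np.eye(3), np.array([1.0, 0.2, -0.4]))
T_toplidar_from_imu = make_transform(np.eye(3), np.array([0.5, 0.0, -1.5]))

# Any other pair follows by composition, e.g. the camera pose in the IMU frame.
T_imu_from_cam = np.linalg.inv(T_toplidar_from_imu) @ T_toplidar_from_cam
print(T_imu_from_cam[:3, 3])   # relative translation, here [0.5, 0.2, 1.1]
```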

Sensors used on the current Level 5 vehicle, including LiDARs, cameras, radars, and IMU.

After several iterations, we are happy to have finally developed a factory calibration process that leads to repeatable, accurate, and scalable calibration. During this factory calibration, an operator drives our AV onto a car-turner. The car-turner is surrounded by calibration targets arranged so that they cover the cameras’ and LiDARs’ entire fields of view. The car then rotates on the turner while an onboard software system scans the targets and calibrates all the sensors. This high-quality, fully automatic process enables our fleet operators to conduct the calibration independently and repeatably.
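Many of these targets are planar, and a basic building block of target-based calibration is fitting a plane to the LiDAR returns segmented on each target; the residual of that fit is also a natural sanity-check metric, as in the segmentation visualization shown earlier. Here is a minimal least-squares sketch on synthetic data:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit to Nx3 LiDAR points on a target.

    Returns the unit normal, a point on the plane, and the RMS out-of-plane residual.
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    return normal, centroid, rms

# Synthetic target: a planar patch with a little sensor noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(200, 2))
pts = np.column_stack([xy, 0.002 * rng.standard_normal(200)])
normal, centroid, rms = fit_plane(pts)
print(normal, rms)   # normal ~ [0, 0, ±1], rms ~ 0.002 m
```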

Automated calibration on a car turner

However, there are additional complications. As our AVs operate in the field, calibration accuracy can drift over time because of factors such as temperature and vibration. Our next step is to develop online, self-correcting calibration algorithms, but this is particularly challenging because of the lack of ground truth. At the moment, we solve this by using the highly accurate factory calibration as ‘ground truth’. In addition, we have instrumented our onboard software stack with a system that can automatically detect when calibration is no longer pristine and estimate the drift. Using this process, any errors in calibration are identified, and if the errors are not acceptable, the vehicle is returned to base for factory calibration.
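A minimal sketch of the kind of drift check this implies is below: compare the online extrinsics estimate for a sensor pair against the factory values and flag the vehicle when the difference exceeds a threshold. The thresholds are illustrative, chosen to echo the accuracy numbers quoted earlier, not our actual limits.

```python
import numpy as np

def rotation_angle_deg(R_a: np.ndarray, R_b: np.ndarray) -> float:
    """Angle of the relative rotation between two 3x3 rotation matrices, in degrees."""
    R_rel = R_a.T @ R_b
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

def calibration_drifted(R_factory, t_factory, R_online, t_online,
                        max_rot_deg=0.2, max_trans_m=0.005) -> bool:
    """Flag the vehicle for re-calibration if rotation or translation drift is too large."""
    rot_drift = rotation_angle_deg(R_factory, R_online)
    trans_drift = float(np.linalg.norm(np.asarray(t_online) - np.asarray(t_factory)))
    return rot_drift > max_rot_deg or trans_drift > max_trans_m   # thresholds are illustrative
```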

Future

As we scale our fleet and expand our operational design domain (ODD), we see several challenges ahead: we need to be able to calibrate vehicles with higher reliability and at faster rates, while maintaining calibration across a wider variety of operating environments. These challenges present great opportunities to explore new system designs and try cutting-edge techniques in computer vision, deep learning, and robotics. And at Level 5, we have a lot of ideas on how to do this.

If this post caught your interest, we’re hiring! Check out our open roles here if you’re interested in joining the team to take on a part of the self-driving challenge, and be sure to follow our blog for more technical content.

References:

[1] Z. Zhang, “A flexible new technique for camera calibration,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, Nov. 2000
