Offline Camera Calibration in ROS 2

Tamas Foldi
HCLTech-Starschema Blog
Nov 3, 2023 · 4 min read

In applications like self-driving cars, robots and augmented reality systems, where precise and reliable data is needed for positioning and overall perception, calibration is crucial. “Garbage data in, garbage data out,” as people say in today’s data world — and you can easily imagine the potential consequences of “garbage out” in the case of an autonomous vehicle or drone.

The good old checkerboard pattern

Preparing a computer vision pipeline almost invariably starts with camera calibration. When we calibrate a camera, we collect information about its intrinsic properties and how it sees the world: its field of view, focal length, optical center and, crucially, its lens distortion. For example, we cannot make good use of an image produced by a fisheye-like lens without knowing exactly how it distorts the image.

So, let me show you how I went about calibrating a camera in ROS 2 and how you can do the same.

Online Calibration

First, we have to install the ROS 2 package for the calibration:

# on linux with apt
sudo apt install ros-<ros2-distro>-camera-calibration

# on mac with robostack
mamba install ros-<ros2-distro>-camera-calibration

Then, we have to ensure we have a camera driver compatible with our camera. I tend to rely on the following packages, depending on the platform:

  • usb_cam on Linux using V4L2. It does not work on Mac or Windows.
  • opencv_cam on Mac (or Windows) using OpenCV to capture images from a camera or files.
  • video_source on Nvidia Jetson devices (supports both cameras and flat files).

We also need a pattern to use during the calibration. You can generate one easily here: https://calib.io/pages/camera-calibration-pattern-generator. To fit your pattern on an A4/US letter page, I suggest an 8x10 board with 15mm squares. Print it on thick paper to ensure it doesn’t bend during the calibration process.

An 8x10 checkerboard with 15mm squares is a good option.

To start the calibration, run the following command in one console:

ros2 run camera_calibration cameracalibrator --size 7x9 --square 0.015 \
--ros-args -r image:=/image_raw

The size option here denotes interior corners (e.g. a standard chessboard is 7x7), so for an 8x10 checkerboard, we go with 7x9.

In another console, start your camera capture:

# in case of opencv_cam driver
ros2 run opencv_cam opencv_cam_main

You should now see the calibration window and can begin the calibration process: move the pattern to all corners of the camera view and tilt it in every direction. When enough information has been gathered, press the calibrate button.

Try to remember what you do here when switching to offline mode.

The camera calibration data now appears in the console. You can copy it into a file with an .ini extension, or press the save button in the app to write a tarball containing the calibration in both .ini and .yaml formats.
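For reference, the saved .ini file (the "ost" format understood by ROS's camera_info parsers) looks roughly like the sketch below; the numbers are made-up placeholders, and your own output will contain the values printed to your console:

```ini
# oST version 5.0 parameters

[image]

width
640

height
480

[narrow_stereo]

camera matrix
420.0 0.0 320.0
0.0 420.0 240.0
0.0 0.0 1.0

distortion
-0.25 0.08 0.0 0.0 0.0

rectification
1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0

projection
420.0 0.0 320.0 0.0
0.0 420.0 240.0 0.0
0.0 0.0 1.0 0.0
```

The camera matrix holds the focal lengths and optical center, while the distortion row holds the lens distortion coefficients used to undistort the image later.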

An example of calibration output for my MacBook camera. Save the contents from the [image] section to the end into an .ini file.

Offline Calibration

The process will be the same as above, but instead of using your live camera image, ingest video files into the /image_raw topic. Most camera drivers can do this, but I prefer opencv_cam due to its Mac support.

First, record a video of you dancing in front of your camera:

Calibrating my car’s cameras is always fun.

Then, load the video file using opencv_cam:

ros2 run opencv_cam opencv_cam_main --ros-args -p file:=true \
-p filename:=/../TeslaCam/RecentClips/2023-11-01_15-07-10-front.mp4

Viewing the Results

You can also use the same opencv_cam command to replay your recordings, but this time you should also specify the camera_info_path parameter to feed data to the /camera_info topic. This topic carries sensor_msgs/CameraInfo messages, which tell downstream systems how to interpret the images decoded from the video frames.
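Assuming the calibration was saved as camera.ini (both file names below are illustrative placeholders, not files from this article), the replay command from earlier can be extended like this so that opencv_cam publishes the matching CameraInfo alongside the frames:

```shell
# replay the clip and publish the calibration on /camera_info
# camera.ini and calibration_video.mp4 are placeholder paths
ros2 run opencv_cam opencv_cam_main --ros-args -p file:=true \
  -p filename:=calibration_video.mp4 \
  -p camera_info_path:=camera.ini
```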

The easiest way to visualize the difference between the calibrated and non-calibrated images is to open two image windows in Foxglove, one with a calibration topic and one without it. The image on the left is the raw image, while the right is the one we can use for further processing.
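To actually produce the undistorted stream for the second Foxglove panel, you can run a rectification node from the image_pipeline stack. This is a sketch that assumes your distro ships image_proc's rectify_node executable (component and executable names can vary between ROS 2 distros):

```shell
# subscribes to the image and its CameraInfo, publishes a
# rectified (undistorted) image on /image_rect
ros2 run image_proc rectify_node --ros-args \
  -r image:=/image_raw \
  -r camera_info:=/camera_info
```

You can then point one Foxglove image panel at /image_raw and the other at /image_rect to compare them side by side.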

You can easily compare calibrated images and video streams side by side before and after calibration.

Summary

The calibration process I described here is a necessary step in a computer vision pipeline, especially if you will use the images for localization or mapping. ROS 2 supports this process out of the box with a calibrator, camera info messages, services and visualization tools.
