The practical guide to 3D hand-eye calibration

Øyvind Borgan
Zivid
4 min read · Mar 2, 2020

This article is a shortened version. Read the full version here:

https://info.zivid.com/blog/the-practical-guide-to-3d-hand-eye-calibration-with-zivid-one

In this article, I provide practical tips on using our Python sample code for 3D hand-eye calibration to help you achieve a 3x improvement in translation error and a 5x improvement in rotation error.

The resources I've used are listed at the end of the full article.

In Zivid software version 1.6, we included a 3D hand-eye calibration API for our Zivid One+ 3D cameras. To further simplify the use of the hand-eye calibration, we provide a practical Python sample on GitHub.

Note: The example code uses Universal Robots’ (UR) UR5e, but you can apply the concepts to collaborative robots from ABB, Yaskawa, and others.

Why are we using UR5e?

Universal Robots is one of the leading collaborative robotics companies in the world. It was one of the first companies to develop and mass-produce commercial collaborative robots. Since 2005, UR has sold more than 42,000 robots and holds a 50% share of the collaborative robot market.

The UR5e is a lightweight collaborative robot that is part of the new robot series from Universal Robots.

What is hand-eye calibration?

You may wonder: what is hand-eye calibration, anyway? You can read more about the concepts of 3D hand-eye calibration here, but in short, hand-eye calibration is the process of finding the transformation that relates the camera's coordinate system to the robot's.

For example, you may want to integrate a Zivid One+ 3D camera with a robot arm to provide a complete bin-picking or pick-and-place solution.

There are two types of hand-eye calibration methods:

  • Eye-to-hand is used when the camera is mounted in a stationary position next to the robot.
  • Eye-in-hand is used when the camera is mounted on the robot arm itself.

Both calibration methods are supported by our API and are available in the sample code.
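The practical difference between the two shows up when you use the result. Here is a minimal NumPy sketch, with made-up matrix values and frame names (the `transform` helper and all transforms are illustrative, not Zivid API output):

```python
import numpy as np

def transform(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]

# A point detected by the camera, expressed in the camera frame (meters).
p_camera = np.array([0.1, 0.0, 0.5])

# Eye-to-hand: calibration yields the camera pose in the robot base frame,
# so one fixed transform maps camera points into the base frame.
T_base_camera = np.eye(4)
T_base_camera[:3, 3] = [1.0, 0.2, 0.8]   # illustrative translation only
p_base = transform(T_base_camera, p_camera)

# Eye-in-hand: calibration yields the camera pose in the end-effector frame,
# so the current robot pose (read from the controller) is also needed.
T_flange_camera = np.eye(4)
T_flange_camera[:3, 3] = [0.0, 0.05, 0.1]
T_base_flange = np.eye(4)                # would come from the robot controller
T_base_flange[:3, 3] = [0.4, 0.0, 0.6]
p_base_eih = transform(T_base_flange @ T_flange_camera, p_camera)
```

Note that in the eye-in-hand case the camera-to-flange transform is fixed by calibration, but the base-frame position of a detected point changes with every robot pose.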

Hand-eye calibration process

Getting the hand-eye calibration right is critical for the system to perform correctly, so it is important to do it properly.

The calibration steps include:

  1. Move the robot to a unique pose
  2. Register the end-effector pose
  3. Image the calibration object (obtain its pose)
  4. Repeat steps 1–3 multiple times, e.g. 10–20
  5. Compute hand-eye transformation
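The steps above can be sketched as a simple capture loop. Everything here is a hypothetical stand-in: `robot_move_to`, `robot_get_pose`, and `camera_capture` are stubs for your actual robot and camera drivers, not real API calls:

```python
import numpy as np

def robot_move_to(target):
    """Stub: command the robot to the given pose, then let it settle."""
    pass

def robot_get_pose(i):
    """Stub: read the end-effector pose from the controller as a 4x4 matrix."""
    T = np.eye(4)
    T[:3, 3] = [0.1 * i, 0.0, 0.5]
    return T

def camera_capture(i):
    """Stub: capture a point cloud of the checkerboard."""
    return {"frame": i}

poses, captures = [], []
for i in range(10):                     # step 4: repeat, e.g. 10-20 times
    robot_move_to(i)                    # step 1: move to a unique pose
    poses.append(robot_get_pose(i))     # step 2: register the end-effector pose
    captures.append(camera_capture(i))  # step 3: image the calibration object

# step 5: the (pose, capture) pairs are what the hand-eye API consumes
```

The key invariant is that each capture is paired with the robot pose recorded at the same instant; mixing up that pairing is a common source of bad calibrations.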

With a set of point clouds and corresponding robot poses, our hand-eye calibration API computes the hand-eye transform. The result is a rigid transformation (a rotation and a translation) represented as a homogeneous 4x4 matrix.
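As a sketch with made-up rotation and translation values, such a matrix looks like this in NumPy:

```python
import numpy as np

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis
t = np.array([0.5, -0.1, 0.75])                        # translation in meters

T = np.eye(4)
T[:3, :3] = R      # upper-left 3x3: rotation
T[:3, 3] = t       # upper-right 3x1: translation; bottom row stays [0 0 0 1]

# A rigid transform preserves distances: R is orthonormal with det(R) = 1.
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```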

We also provide you with residuals/error in rotation and translation for every pose of your dataset. This way, you can easily verify that the calibration fits your requirements.
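One common way to express such residuals (a sketch, not necessarily the API's exact formula) is the angle of the relative rotation and the distance between the translations:

```python
import numpy as np

def pose_residual(T_est, T_ref):
    """Rotation error (degrees) and translation error (meters) between poses."""
    R_err = T_est[:3, :3].T @ T_ref[:3, :3]
    # Angle of the relative rotation, clipped for numerical safety.
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    trans = np.linalg.norm(T_est[:3, 3] - T_ref[:3, 3])
    return np.degrees(angle), trans

T_ref = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [0.001, 0.0, 0.0]     # a 1 mm translation offset
rot_deg, trans_m = pose_residual(T_est, T_ref)
```

Checking these per-pose numbers against your application's accuracy budget tells you whether to accept the calibration or collect more poses.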

Tips and recommendations

While developing the hand-eye API, we did a lot of testing. Below you’ll find some tips on capturing accurate 3D data.

The calibration object

There are several different calibration objects used in hand-eye calibration, ranging from spheres to ArUco markers and checkerboards. The exact location of the calibration object is not important, but it must remain static across all the robot poses.

Whether you are performing an eye-to-hand or eye-in-hand calibration, make sure that calibration objects stay in the exact same position relative to the robot base or robot tool.

We recommend using our 9x6 size checkerboards. They are available in the downloads section.

Image quality

In contrast to OpenCV's 2D image-based calibration, we use 3D data for the hand-eye calibration. The calibration object must be well exposed and in focus.

Capture Assistant in Zivid Studio is a tool that helps you capture correctly exposed, high-quality 3D point clouds of the checkerboards without manual adjustments.

Make sure that the robot is not moving while capturing images.

Tip: wait 2 seconds after the robot has moved to a new pose before capturing your next image.

Robot poses

Robot positioning accuracy is not consistent across the robot's workspace. Positions far from the robot base are, in general, less accurate than those close to it. Therefore, the hand-eye result depends on which robot poses you choose. To get as accurate a hand-eye calibration as possible, your robot poses should span the entire working area of your application. Poses should also be sufficiently distinct, exercising all the robot joints.

Tip: make sure that the calibration object is in the field of view of the camera for all the robot poses.
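A small helper like the following can sanity-check that candidate poses are sufficiently distinct in orientation before you run a calibration. The 10-degree threshold and the helper names are illustrative choices of mine, not Zivid recommendations:

```python
import numpy as np

def rotation_angle_deg(R_a, R_b):
    """Angle (degrees) of the relative rotation between two rotation matrices."""
    R_rel = R_a.T @ R_b
    return np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))

def poses_are_distinct(rotations, min_deg=10.0):
    """True if every pair of orientations differs by at least min_deg degrees."""
    return all(rotation_angle_deg(a, b) >= min_deg
               for i, a in enumerate(rotations)
               for b in rotations[i + 1:])

def rot_z(deg):
    """Rotation about the z-axis by the given angle in degrees."""
    c, s = np.cos(np.deg2rad(deg)), np.sin(np.deg2rad(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

print(poses_are_distinct([rot_z(0), rot_z(20), rot_z(45)]))   # True
print(poses_are_distinct([rot_z(0), rot_z(5)]))               # False
```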

Environmental conditions

Temperature, joint friction, payload, and manufacturing tolerances all affect a robot's accuracy. Accordingly, during hand-eye calibration, these factors should be similar to the working conditions of your application.
