Gentle Introduction to Global Point Cloud Registration using Open3D

Amnah Ebrahim
7 min read · Oct 22, 2023

--

This tutorial is a continuation of the following articles:

In this article we will be:

  • Looking at what Global Registration is
  • Working with Global Registration methods

Recap of Our Last Article

In our previous article we touched on the main differences between global and local registration. During registration between point clouds, common algorithms such as Iterative Closest Point (ICP) do not converge to the global error minimum and instead converge to a local minimum. Therefore, local registration methods rely on an approximate initial alignment, which a global registration method can provide.

Global registration algorithms do not depend on an initial alignment. They typically produce less precise alignment outcomes, but they help in converging toward the global minimum and hence are employed as the initial step for local methods.

In this article, we will also be using this helper function that can be found in the Open3D documentation to visualize the transformed source point cloud together with the target point clouds:

import copy
import open3d as o3d

def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4559,
                                      front=[0.6452, -0.3036, -0.7011],
                                      lookat=[1.9892, 2.0208, 1.8945],
                                      up=[-0.2779, -0.9482, 0.1556])

Global Registration

Now let’s try to understand how global registration “registers”, or stitches together, two point clouds at different “poses” so that alignment reaches the correct global minimum.

In the context of point cloud data (PCD), a “pose” typically refers to the position and orientation of an object in 3D space. This is often represented by a combination of translation (3D coordinates) and rotation (3 angles), forming a 6 Degree of Freedom (6-DoF) pose.
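
To make the 6-DoF idea concrete, here is a small NumPy sketch (the angle and translation values are arbitrary, chosen only for illustration) that packs a rotation and a translation into the kind of 4×4 homogeneous matrix that Open3D’s `transform()` expects:

```python
import numpy as np

def pose_to_matrix(yaw, translation):
    """Build a 4x4 homogeneous transform from a rotation about z and a 3D translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.identity(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])  # rotation part (here: yaw only)
    T[:3, 3] = translation                   # translation part
    return T

# Rotate 90 degrees about z, then shift by (1, 2, 0.5):
T = pose_to_matrix(np.pi / 2, [1.0, 2.0, 0.5])
point = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
print(T @ point)
```

A full 6-DoF pose would use three rotation angles instead of just yaw, but the 4×4 matrix layout stays the same.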

To do so, we must find “pose-invariant features”: features that describe a point’s local geometry independently of the object’s pose/orientation. They help us match points that share the same geometry but sit at different poses, guiding registration toward the global minimum of the error space.

In this tutorial, we will be looking at Fast Point Feature Histograms (FPFH).

Fast Point Feature Histograms (FPFH)

FPFH provides a way to describe the local geometry of a point in a 3D space by looking at its neighboring points. It looks at each point and its nearby neighbors, then calculates a histogram (a kind of chart that shows the frequency of different values) based on the angles between these points. This histogram serves as a “fingerprint” that captures the local geometry around each point that will be used when trying to find matches of the same point in the target point cloud.

def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)

    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))

    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh

The Open3D documentation provides a preprocessing function called “preprocess_point_cloud”. It does the following:

  • Downsamples the Point Cloud using Voxel Downsampling:

It first downsamples the input point cloud (pcd) using a specified voxel size (voxel_size) to reduce the number of points, which speeds up subsequent steps, especially computing the FPFH feature of each point.

If you’d like to learn more about voxel downsampling read this.

  • Estimate Normals of Each Point:

The main reason we estimate normals is that the FPFH algorithm requires, for every point p in the point cloud, the surface normal n at that point, together with relevant information from p’s nearest neighbors.

The normals are estimated using a KDTree search with a search radius of 2 * voxel_size and maximum nearest neighbors set to 30.

Source: https://ciis.lcsr.jhu.edu/lib/exe/fetch.php?media=courses:446:2016:446-2016-06:dadler_fpfh_presentation_final.pdf
  • Applying FPFH

The FPFH features are computed using a KDTree search with a search radius of 5 * voxel_size and maximum nearest neighbors set to 100.

Now… Import the Dataset and Apply an Initial Transformation

To visualise the effect of Global Registration on two point clouds, let’s import the data we used in our previous tutorial, deliberately misalign the source with a known initial transformation, and visualise both point clouds.

import numpy as np
import open3d as o3d

# Read source and target PCDs from Open3D's demo dataset
demo_pcds = o3d.data.DemoICPPointClouds()
source = o3d.io.read_point_cloud(demo_pcds.paths[0])
target = o3d.io.read_point_cloud(demo_pcds.paths[1])

# Misalign the source with a known initial transformation
trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
source.transform(trans_init)
draw_registration_result(source, target, np.identity(4))

Two Point Clouds Visualised Before Global Registration

We clearly see a mismatch between the point clouds, especially around the chair area. So let’s apply the FPFH algorithm to both the source & target point clouds.

voxel_size = 0.05  # 5 cm for this dataset, as in the Open3D docs
source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
Output of the FPFH function

Estimating the Transformation that Gets Us Closer to the Global Minimum:

In this tutorial, we will be using RANSAC (Random Sample Consensus) to reach the global minimum space.

But first.. what is RANSAC?

RANSAC is an effective algorithm for dealing with outliers in data. It works by randomly selecting a subset of data points, fitting a model to these points, and then scoring the model based on how many other points fit it. This process is repeated multiple times, and the model with the highest score is selected. The selected model’s points are considered inliers, while the rest are outliers.
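
As a toy illustration of this loop (not part of the Open3D pipeline — just a NumPy sketch using 2D line fitting, with synthetic data made up for the example), RANSAC repeatedly fits a line to two random points and keeps the model with the most inliers:

```python
import numpy as np

rng = np.random.default_rng(0)

# 80 points near the line y = 2x + 1, plus 20 gross outliers.
x = rng.uniform(0, 10, 80)
inlier_pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.05, 80)])
outlier_pts = rng.uniform(0, 20, (20, 2))
points = np.vstack([inlier_pts, outlier_pts])

best_model, best_score = None, -1
for _ in range(100):
    # 1. Randomly sample the minimal set (2 points define a line).
    p1, p2 = points[rng.choice(len(points), 2, replace=False)]
    if np.isclose(p2[0], p1[0]):
        continue  # vertical line, skip this sample
    slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
    intercept = p1[1] - slope * p1[0]
    # 2. Score the model by counting points within a distance threshold.
    residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
    score = np.sum(residuals < 0.5)
    # 3. Keep the best-scoring model so far.
    if score > best_score:
        best_model, best_score = (slope, intercept), score

print(best_model, best_score)
```

The recovered slope and intercept land close to (2, 1) because the outliers, however extreme, can never outvote the inliers.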

This method is often used in tasks like estimating camera motion in visual odometry, where corresponding points in pairs of consecutive images are used to compute the camera’s relative orientation. The number of trials needed to succeed with a certain probability can be calculated using a specific formula.
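
For reference, that formula is commonly written as N = log(1 − p) / log(1 − wⁿ), where p is the desired success probability, w the inlier ratio, and n the number of points sampled per trial. A quick sketch (the example values are illustrative, not taken from the article's dataset):

```python
import math

def ransac_trials(p_success, inlier_ratio, sample_size):
    """Number of RANSAC iterations needed so that, with probability
    p_success, at least one sample contains only inliers."""
    return math.ceil(math.log(1 - p_success) /
                     math.log(1 - inlier_ratio ** sample_size))

# e.g. 99% success probability, half the correspondences are inliers,
# 3 points drawn per trial:
print(ransac_trials(0.99, 0.5, 3))  # -> 35
```

This is why the iteration budget in RANSACConvergenceCriteria must grow quickly as the inlier ratio drops.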

Here, RANSAC is used for global registration. In each RANSAC iteration, a number of random points (defined by ransac_n) are picked from the source point cloud. Their corresponding points in the target point cloud are detected by searching for the nearest neighbor in the FPFH feature space that we calculated above.

Pruning

Since FPFH matching happens in a 33-dimensional feature space, some of the nearest-neighbor matches will inevitably be wrong, so we use pruning to reject and eliminate incorrect correspondences. Pruning here means applying fast, cheap checks to each candidate set of correspondences and discarding those that fail, before the relatively expensive steps of estimating and validating a transformation. This keeps RANSAC fast without sacrificing accuracy.

In this tutorial we use:

  1. CorrespondenceCheckerBasedOnDistance: This algorithm checks if the aligned point clouds are close, i.e., the distance between them is less than a specified threshold.
  2. CorrespondenceCheckerBasedOnEdgeLength: This algorithm checks if the lengths of any two arbitrary edges (lines formed by two vertices) individually drawn from source and target correspondences are similar. In other words, it checks that:

||edge_source|| > 0.9 · ||edge_target||  and  ||edge_target|| > 0.9 · ||edge_source|| are both true.

Only matches that pass the pruning step are used to compute a transformation, which is validated on the entire point cloud.
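
The edge-length check can be sketched in plain NumPy. This is an illustration of the idea, not Open3D’s internal code; the point arguments are hypothetical corresponding points from the source and target clouds:

```python
import numpy as np

def edge_length_check(src_a, src_b, tgt_a, tgt_b, similarity=0.9):
    """Return True if the edge between two source correspondences and
    the edge between their target counterparts have similar lengths."""
    edge_source = np.linalg.norm(src_a - src_b)
    edge_target = np.linalg.norm(tgt_a - tgt_b)
    return (edge_source > similarity * edge_target and
            edge_target > similarity * edge_source)

# Edges of equal length pass; a badly stretched match is rejected.
ok = edge_length_check(np.zeros(3), np.ones(3), np.zeros(3), np.ones(3))
bad = edge_length_check(np.zeros(3), np.ones(3), np.zeros(3), 3 * np.ones(3))
print(ok, bad)
```

Because a rigid transformation preserves distances, any correspondence pair that fails this symmetric test cannot both be correct.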

RANSAC

After pruning, the correspondences that survive are passed to registration_ransac_based_on_feature_matching. The most important hyperparameter of this function is RANSACConvergenceCriteria, in which we define the maximum number of RANSAC iterations and the confidence probability. The larger these two numbers are, the more accurate the result, but also the more time the algorithm takes.

def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result

result_ransac = execute_global_registration(source_down, target_down,
                                            source_fpfh, target_fpfh,
                                            voxel_size)
print(result_ransac)
draw_registration_result(source_down, target_down, result_ransac.transformation)
Result of RANSAC with Pruning & FPFH

Accessing the Global Registration Transform for Local Registration

To access the transformation from global registration, you can read it directly from the result object (result_ransac.transformation), or modify the execute_global_registration function to print it:

def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    print(result.transformation)
    return result

We get this transformation, which we can use as the initial transformation for local registration!

Thank You!

So that concludes our Medium article. This tutorial provided a concise overview of global point cloud registration, starting with the intuition behind using such methods, then applying what we’ve learned so far through code. If you found this article or the whole series useful, don’t forget to clap and follow!


Amnah Ebrahim

Electronics engineer passionate about electronics, machine learning, autonomous robotics, and natural language processing!