Gentle Introduction to Preprocessing Point Clouds-pt. 1

Amnah Ebrahim
5 min read · Mar 24, 2023


This tutorial is a continuation of the following articles:

In this article we will be looking at different preprocessing techniques such as:

  • Downsampling through Voxel Downsampling
  • Vertex Normal Estimation
  • Object Removal through Cropping and Calculating distances in Point Clouds

Preprocessing Point Clouds using Open3D:

Point cloud data (PCD) is a set of points in a 3D coordinate system that describes the surfaces of the real world around us. However, this spatial data is often unstructured and carries no semantic information. It is therefore generally very useful to structure point clouds into higher-level representations through preprocessing techniques.

As we’ve mentioned in our previous article, Getting Started with Lidar, preprocessing can entail:

  • Denoising
  • Downsampling
  • Ground/Object Removal
  • Segmentation

In this tutorial, we will be looking at some of the preprocessing techniques available in Open3D.

Preprocessing with Downsampling

Voxel Downsampling

One of the higher-level representations mentioned above is the voxel.

“Voxels, similar to pixels in an image, are abstracted 3D units with pre-defined volumes, positions, and attributes, which can be used to structurally represent discrete points in a topologically explicit and information-rich manner.”

Downsampling 3D point clouds into voxels reduces the amount of data while still preserving the overall structure of the point clouds. A voxel refers to a small cubic volume in 3D space that groups together nearby points in the point cloud. The size of the voxels used for downsampling can be adjusted based on the desired level of detail and the specific requirements of the application.

Let’s take a look at this point cloud and downsample it.

import open3d as o3d

# Load a point cloud to work on (here we assume Open3D's bundled
# demo PLY file; any .pcd/.ply file of your own works too)
pcd = o3d.io.read_point_cloud(o3d.data.PLYPointCloud().path)

print("Downsample the point cloud with a voxel of 0.02")
downpcd = pcd.voxel_down_sample(voxel_size=0.02)
o3d.visualization.draw_plotly([downpcd],
                              zoom=0.3412,
                              front=[0.4257, -0.2125, -0.8795],
                              lookat=[2.6172, 2.0475, 1.532],
                              up=[-0.0694, -0.9768, 0.2024])

Downsampling with a voxel size of 0.02 gives the result below:

Point cloud downsampled into voxel size of 0.02

As seen in the figure, far fewer points are displayed, as nearby points are grouped together into voxels of size 0.02. The larger the voxel size, the fewer points are displayed, since more points are “bucketed” into each voxel.
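Conceptually, the “bucketing” can be sketched in plain NumPy: map every point to the integer index of the voxel that contains it, then replace each voxel’s points with their centroid. The voxel_downsample function below is a hypothetical illustration, not an Open3D API; Open3D’s voxel_down_sample does the real work efficiently in C++.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling inside the same voxel with their centroid."""
    # Map each point to the integer index of the voxel that contains it.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel: `inverse` maps each point to its voxel's group id.
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # sum the points in each voxel
    np.add.at(counts, inverse, 1)      # count the points in each voxel
    return sums / counts[:, None]      # centroid per voxel

# Four points: three inside the same 0.5-unit voxel, one far away.
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],
                [0.3, 0.1, 0.2],
                [5.0, 5.0, 5.0]])
down = voxel_downsample(pts, voxel_size=0.5)
print(len(down))  # 2 — four points collapse into two voxels
```

The first three points share the voxel at index (0, 0, 0), so they collapse into a single centroid, which is exactly why larger voxels produce fewer displayed points.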

Vertex normal estimation

Vertex normal estimation is a technique used to process PCD, often after voxel downsampling. It computes a surface normal for each point (or voxel) in the PCD, which provides useful information about the orientation and curvature of the surface at that point.

This can be useful for many applications, such as visualisation, surface analysis for curvature and orientation, and registration. Surface normals estimated this way can help align or register multiple PCDs by matching the orientation of their surfaces at corresponding points.

All point cloud sensors, including lidar, introduce some noise during the measurement process, so preprocessing PCD is important and critical. An important step usually taken before estimating vertex normals is therefore to preprocess the PCD with denoising techniques or with downsampling such as voxel downsampling.
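In Open3D this is done with pcd.estimate_normals(). The underlying idea is a PCA of each point’s local neighbourhood: the normal is the eigenvector of the neighbourhood’s covariance matrix with the smallest eigenvalue (the direction of least variance). A minimal sketch of that idea follows; the estimate_normal helper is a hypothetical illustration, not an Open3D function.

```python
import numpy as np

def estimate_normal(neighbors):
    """Normal of a local patch = eigenvector of the covariance
    matrix with the smallest eigenvalue."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance

# Points sampled on the z = 0 plane: the estimated normal should be ±z.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 50),
                         rng.uniform(-1, 1, 50),
                         np.zeros(50)])
n = estimate_normal(patch)
print(np.abs(n))  # approximately [0, 0, 1]
```

Note that the eigenvector only fixes the normal up to sign; in practice, libraries orient normals consistently afterwards (e.g. towards the sensor viewpoint), which is why the sketch checks the absolute value.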

Preprocessing with Object Removal

Cropping Point Clouds:

Let’s take a look at one important example from the Open3D documentation. First, let’s download the PCD we looked at during the last tutorial:

And let’s use the demo polygon volume available in Open3D to crop the chair from the scene. First, using DemoCropPointCloud(), we download the file paths for the demo point cloud and for the polygon volume used for cropping.

print("Load a polygon volume and use it to crop the original point cloud")
demo_crop_data = o3d.data.DemoCropPointCloud()
pcd = o3d.io.read_point_cloud(demo_crop_data.point_cloud_path)
vol = o3d.visualization.read_selection_polygon_volume(
    demo_crop_data.cropped_json_path)
chair = vol.crop_point_cloud(pcd)
o3d.visualization.draw_plotly([chair],
                              zoom=0.3412,
                              front=[0.4257, -0.2125, -0.8795],
                              lookat=[2.6172, 2.0475, 1.532],
                              up=[-0.0694, -0.9768, 0.2024])

Next, the read_point_cloud() function reads in the point cloud data from the file path provided by demo_crop_data.point_cloud_path. The read_selection_polygon_volume() function from the open3d.visualization module is called to read in the polygon volume data from the file path provided by demo_crop_data.cropped_json_path. The crop_point_cloud() function of the polygon volume object is called to crop the original point cloud based on the volume defined by the polygon. The resulting cropped point cloud is assigned to the variable chair. Finally, the draw_plotly() function displays the cropped point cloud.

Cropped Chair

Now what if we want to do the reverse? Rather than cropping the chair out of the scene, what if we want to remove the chair from it?

To do so, we use the compute_point_cloud_distance() method, which computes the distance from each point in a source point cloud to the closest point in a target point cloud. To remove the chair, we treat the original pcd as the source and the extracted chair as the target.

First, we calculate the distance between each point in pcd and the closest point in the chair point cloud, then convert the result into a NumPy array so we can apply NumPy operations to it.

Then, np.where(dists > 0.01)[0] discards the indices of points lying 0.01 units or closer to the chair, keeping only the indices of points further than 0.01 units away. The select_by_index() function of the pcd object is then used to select only the points in pcd whose indices are in ind, efficiently removing the points that are too close to the chair and saving the result in a new point cloud object, pcd_without_chair.

import numpy as np

# Distance from each point in pcd to its nearest point in chair
distance = pcd.compute_point_cloud_distance(chair)
dists = np.asarray(distance)
# Keep only the points further than 0.01 units from the chair
ind = np.where(dists > 0.01)[0]
pcd_without_chair = pcd.select_by_index(ind)
o3d.visualization.draw_plotly([pcd_without_chair],
                              zoom=0.3412,
                              front=[0.4257, -0.2125, -0.8795],
                              lookat=[2.6172, 2.0475, 1.532],
                              up=[-0.0694, -0.9768, 0.2024])

PCD without the chair

In the next article..

We will be looking at the different ways we can use open3D to segment point clouds using Convex Hulls and DBSCAN in Gentle Introduction to Preprocessing Point Clouds-pt. 2.


Amnah Ebrahim

Electronics engineer passionate about electronics, machine learning, autonomous robotics, and natural language processing!