Point Clouds and Mergers

Ujjwal Saxena
5 min read · Apr 4, 2018


A point cloud is a set of data points in space. The sensors that generate point clouds measure a large number of points on the external surfaces of objects around them. Point clouds are used for many purposes, like creating 3D CAD models, for metrology and quality inspection, and for a multitude of visualization, animation, rendering, and mass-customization applications.

“The sensors that generate point clouds” — what is this supposed to mean? Are there multiple sensors that can generate a point cloud?

Yes! LIDARs are not the only ones. IR sensors, RADARs, SONARs, and RGB-D sensors, along with many others, generate point clouds.

But how do you visualize a point cloud?

This is easy to picture. A point in the xyz coordinate system is represented by three values and is the easiest entity to work with. You don’t have to worry about its shape, rotation, length, etc. Only position and color matter for computation. Now imagine a set of such xyz values, bunched together, one for each point.
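Concretely, a point cloud is usually stored as an N×3 array of xyz values, optionally paired with an N×3 array of colors. Here is a minimal sketch in Python with NumPy; the arrays and values are made up for illustration:

```python
import numpy as np

# A point cloud is just an (N, 3) array: one row of [x, y, z] per point.
points = np.array([
    [0.0, 0.0, 0.0],
    [1.2, 0.5, 0.3],
    [0.9, 1.1, 0.2],
])

# Optional per-point color, one [r, g, b] row per point, values in [0, 1].
colors = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# Because points are independent, whole-cloud operations are simple
# vectorized array math, e.g. shifting every point by one meter along x:
points_shifted = points + np.array([1.0, 0.0, 0.0])

print(points_shifted)
```

This independence of points is exactly why huge clouds stay manageable: every operation is just array math over rows.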

image source: geninfo solutions

Using individual, unrelated points is key to a point cloud’s usefulness, because points are the easiest objects to handle in large quantities. For complete newbies, I’ll also mention that each time an EM wave (be it from a RADAR or a LIDAR) strikes an obstacle and reflects back, it generates a single point in space, positioned relative to the sensor. However, it’s worth mentioning that the point cloud shown above may not be the result of a single scan. This is because EM waves travel in straight lines, so anything lying in the shadow of an obstacle goes undetected.

image source: Larry Lisky’s WordPress blog

So do I mean that a car with a LIDAR mounted on it, in the middle of a really dense forest, on a sharp blind turn, cannot detect a truck coming from the other side? Well, you’re indeed a clever reader to ask me that. I never believed forests to be safe anyway. Just the same as hills.

Is there a way to perceive an object from all sides? For a LIDAR mounted on a car, there isn’t. Otherwise, there is: simply generate point clouds from various perspectives. If I scan a building from the front and from the side, I now have two point clouds. Merging them, however, is a little tricky and an area of research too. These techniques are known as registration techniques for aligning 3D point clouds.

There are various ways to perform this alignment; I’ll try to share the idea behind some of them.

  1. In the first technique we treat the laser source as a point and log its location along with its direction. We generate a point cloud, then move to another location, again treat the laser source as a point, and log the new location and direction. This time we choose the directions such that they intersect; they might not intersect along the z axis, but they at least do so in the x-y plane. We then calculate the angle the two directions make with each other and the offset between the closest points along the two laser lines: this gives the translation. Once we know this, we know how much to rotate the second point cloud and how much to translate it along the z axis to merge it with the first one (see the first sketch after this list).
  2. Two scans are performed from two sides of an object with some overlapping portion. A human selects points on the two clouds that are common and represent the same point on the actual object. Once these common points are identified, merging them is an easy calculation (sketched after this list, together with ICP). However, this traditional approach becomes difficult when many point clouds need to be merged, as in a video.
  3. The Iterative Closest Point algorithm (or, in some sources, the Iterative Corresponding Point) is another popular way to merge point clouds. One point cloud, the reference or target, is kept fixed, while the other one, the source, is transformed to best match the reference. The algorithm iteratively revises the transformation (a combination of translation and rotation) needed to minimize an error metric, usually a distance from the source to the reference point cloud, such as the sum of squared differences between the coordinates of the matched pairs. An initial guess of the rigid-body transformation is required. Points that actually overlap in both clouds should contribute zero to the error, since their distance from each other should be zero (see the combined sketch after this list).
  4. In yet another approach we identify the surface normal vector for each point and then collect the vectors at a common origin. This makes a sphere of vectors, and clusters on this sphere represent shapes in the actual world. We find the rotational difference between the two clouds’ clusters and combine them; thus we know how much a point cloud needs to be rotated for the merge. This technique was developed by researchers in the Chronoptics group at the University of Waikato. (A sketch of the normal-estimation step follows the list.)
image source: YouTube

5. There are also various tools that allow the alignment of point clouds, like CloudCompare, pointclouds.org (the Point Cloud Library), etc.
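Here is a minimal sketch of the idea behind the first technique, assuming we logged each sensor pose as a position plus a heading (yaw) angle. Knowing the relative rotation and translation between the two poses is enough to map the second cloud into the first cloud’s frame. All names and numbers below are hypothetical, for illustration only:

```python
import numpy as np

def pose_to_transform(x, y, z, yaw):
    """Rotation about the z axis plus a translation, as a 4x4 matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = [x, y, z]
    return T

# Logged sensor poses for the two scans (hypothetical values).
T1 = pose_to_transform(0.0, 0.0, 0.0, 0.0)        # first scan
T2 = pose_to_transform(5.0, 2.0, 0.1, np.pi / 6)  # second scan

# Transform that maps the second scan's frame into the first scan's frame.
T_2_to_1 = np.linalg.inv(T1) @ T2

cloud2 = np.random.rand(100, 3)                    # stand-in point cloud
cloud2_h = np.hstack([cloud2, np.ones((100, 1))])  # homogeneous coordinates
cloud2_in_frame1 = (T_2_to_1 @ cloud2_h.T).T[:, :3]

# Merging is then just concatenating the rows of the two aligned clouds.
merged = np.vstack([np.random.rand(100, 3), cloud2_in_frame1])
```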
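The next sketch covers the second and third techniques together, since ICP simply repeats the correspondence calculation automatically. `best_fit_transform` computes the least-squares rotation and translation for matched point pairs using the SVD (Kabsch) method, which is one standard way to do the “easy calculation” of technique 2 (the article doesn’t name a specific method, so this choice is my assumption). `icp` then iterates it, guessing correspondences by nearest neighbor. A minimal, unoptimized sketch in NumPy/SciPy:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto
    matched points B (rows correspond), via the SVD/Kabsch method."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)   # centroids
    H = (A - ca).T @ (B - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iterations=50, tol=1e-6):
    """Iteratively align `source` to `target` using nearest-neighbor
    correspondences. Assumes the clouds already roughly overlap."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)           # closest target point per source point
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                   # apply the refined transform
        err = np.mean(dist ** 2)              # mean of squared distances
        if abs(prev_err - err) < tol:         # error stopped improving
            break
        prev_err = err
    return src

# Technique 2: hand-picked correspondences, rows of A and B match.
A = np.array([[0.0, 0, 0], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
theta = np.pi / 8
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
B = A @ Rz.T + np.array([0.5, -0.2, 0.1])
R, t = best_fit_transform(A, B)               # recovers Rz and the translation
```

Note how points that truly overlap contribute zero to the error once alignment is found, exactly as described in technique 3.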
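For the fourth technique, the building block is estimating a surface normal at each point, typically via PCA over the point’s neighborhood: the eigenvector with the smallest eigenvalue of the local covariance approximates the normal. Collecting these unit vectors at a common origin gives the sphere of vectors described above (this construction is often called an extended Gaussian image). This is a generic sketch of the normal-estimation step only, not the Waikato group’s actual implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """Estimate a unit surface normal per point via PCA over its
    k nearest neighbors: the direction of least local variance."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
        normals[i] = eigvecs[:, 0]                # smallest-eigenvalue direction
    return normals

# Collected at a common origin, these unit vectors lie on a sphere;
# clusters of normals correspond to large surfaces in the scene, and the
# rotation aligning the clusters of two scans gives the rotational part
# of the merge.
cloud = np.random.rand(500, 3)
normals = estimate_normals(cloud)
```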

This can be really helpful for designing real-world simulators, but as I said earlier, it is in no way directly helpful for a car with a mounted LIDAR or RADAR, which will still be taking that blind forest turn with the same faith as before. At least for now.

Good reads:

  1. http://www.eejournal.ktu.lt/index.php/elt/article/viewFile/616/641
  2. http://www.3deling.com/whta-is-a-point-cloud/
  3. https://www.sciencedirect.com/science/article/pii/S1877705817330278

For all other articles by me, please visit: https://erujjwalsaxena.wordpress.com/


Ujjwal Saxena

A learner in AV development, DNNs, and computer perception. I worked at Infosys earlier and am now at Nvidia as a test developer verifying AV features.