Merging data from multiple LIDARs in ROS

Amrit Gupta
7 min read · May 3, 2020


LIDAR

LIDARs (Light Detection and Ranging sensors) are used in robotics, drone engineering, IoT, etc. for a wide range of applications. A typical LIDAR uses ultraviolet, visible, or near-infrared light to image objects. It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. LIDARs are used extensively in building autonomous systems for tasks such as navigation, detection and task actuation.

Introduction

Unfortunately, a LIDAR is an expensive piece of tech. For example, if we are designing a robot for autonomous traversal, we need a 360-degree view, which calls for a LIDAR of matching specification for easy integration with the underlying system. A 2D LIDAR with a 360-degree view costs around $100, while 3D LIDARs can go as high as $300–$400. A robot is usually fitted with either a single high-precision LIDAR or a series of LIDARs mounted around its body. In the second case, we can also use a narrower range for each individual sensor.

This article outlines a simple method to merge the scans from multiple constituent LIDARs and process them as a single source of data. This method could prove useful to:

  • Eliminate blind spots between LIDARs
  • Facilitate object detection and recognition
  • Reduce costs by utilising pre-existing LIDAR ranges

The software stack used here is C++. All operations, calls and services run in a Robot Operating System (ROS) environment. To follow the rest of this article, some prerequisite knowledge of ROS, publishers, subscribers, laser scans, point clouds, etc. is recommended. In any case, a brief description of each is given below, and the algorithm itself is explained step by step.

We will take up a model problem as a running example.

LIDAR view at a restricted angle

Problem

A robot is equipped with two LIDARs, one on either side of its body. Each LIDAR covers an angle of less than 360 degrees. This arrangement helps the robot detect objects and obstacles around itself and act accordingly. For our case, we define polygons of an abstract fixed shape, shown in the figure as “Objects”. A polygon is detected when a single LIDAR has the whole object in its range. This arrangement easily detects and processes Objects 1 and 2 in the figure above. But Object 3 sits in a blind spot: neither LIDAR can detect it, which may lead to a collision. A simple solution would be to introduce a third LIDAR at the front of the robot. That might suffice in this case, but as the system scales up we may accumulate more blind spots, and we want to minimize the number of LIDARs used.

Software Stack

Robot Operating System

Building and managing a robot can be a difficult task, as it demands the management of a large number of hardware and software components, services and processes. The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. To learn more about ROS, visit the official ROS page.

Publishers & Subscribers

Publishers and subscribers are the primary entities of the ROS environment. Generally, every sensor attached to the robot publishes data on a particular topic, where a topic is the name of a particular stream of data. Subscribers subscribe to these topics and use the incoming sensor data for further processing and actions. For tutorials on how to set up your own publisher/subscriber, visit here.
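As a minimal sketch (the topic names here are placeholders of my choosing, not from any particular setup), a node that subscribes to a laser scan topic and republishes it could look like this:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

ros::Publisher pub;

// Called every time a new message arrives on the subscribed topic.
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  ROS_INFO("Received %d range readings", (int)scan->ranges.size());
  pub.publish(*scan);  // republish the scan on our own topic
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_relay");
  ros::NodeHandle n;

  pub = n.advertise<sensor_msgs::LaserScan>("scan_out", 10);
  ros::Subscriber sub = n.subscribe("scan", 10, scanCallback);

  ros::spin();  // hand control to ROS; the callback fires as data arrives
  return 0;
}
```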

Laser Scan

A LIDAR generally outputs laser scan data, which is published on a ROS topic with the message type sensor_msgs/LaserScan. This data describes the ranges and intensities of the laser rays from a particular LIDAR sensor. The laser scan message has the following standard format:
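```
std_msgs/Header header      # timestamp and frame id of the scan
float32 angle_min           # start angle of the scan [rad]
float32 angle_max           # end angle of the scan [rad]
float32 angle_increment     # angular distance between measurements [rad]
float32 time_increment      # time between measurements [s]
float32 scan_time           # time between scans [s]
float32 range_min           # minimum range value [m]
float32 range_max           # maximum range value [m]
float32[] ranges            # range data [m]
float32[] intensities       # intensity data (device-specific units)
```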

For more information, visit here.

PointCloud

Visualization of a PointCloud

A point cloud is a set of data points in space. Point clouds are generally produced by 3D scanners, which measure many points on the external surfaces of objects around them. The Point Cloud Library (PCL) is a large-scale open project for point cloud processing. To know more about PCL, visit https://pointcloudlibrary.github.io/documentation. For our case, we restrict ourselves to using point clouds in the ROS environment to process the laser scan data. To know more about PCL in ROS, visit here.
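For instance, the laser_geometry package can project a sensor_msgs/LaserScan into a sensor_msgs/PointCloud2; the small wrapper function below is my own illustration of that call:

```cpp
#include <laser_geometry/laser_geometry.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/PointCloud2.h>

// Project a planar laser scan into a 3D point cloud (z = 0 for a 2D LIDAR).
sensor_msgs::PointCloud2 scanToCloud(const sensor_msgs::LaserScan& scan)
{
  laser_geometry::LaserProjection projector;
  sensor_msgs::PointCloud2 cloud;
  projector.projectLaser(scan, cloud);  // fills x, y, z and intensity fields
  return cloud;
}
```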

The concepts mentioned above are the main building blocks of any LIDAR-based project, especially when dealing with navigation.

Solution

In the following part of the article, we explore the process of merging the laser scan data from LIDARs 1 and 2 of our model robot. We derive merged scan data that combines the ranges and intensities of the constituent sensors. The merged laser scan can then be processed as if it were the data of a single LIDAR sensor, which in our case provides a 360-degree view.

Main

Let’s start off with our main function:
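A minimal sketch of what this might look like; the topic names, parameter names and defaults below are placeholders of my own, not necessarily the ones used in the original code:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/PointCloud2.h>
#include <cmath>
#include <string>

// Merged-scan parameters, imported from the parameter server so that no
// values are hard-coded.
double angle_min, angle_max, angle_increment;
int ranges_size;
std::string frame_id;

// Defined in the next section.
void concatenate_with_pointcloud(ros::NodeHandle& n,
                                 ros::Publisher& scan_pub,
                                 ros::Publisher& cloud_pub);

int main(int argc, char** argv)
{
  ros::init(argc, argv, "merger");
  ros::NodeHandle n;

  // Publishers for the merged laser scan and the merged point cloud.
  ros::Publisher scan_pub  = n.advertise<sensor_msgs::LaserScan>("merged_scan", 10);
  ros::Publisher cloud_pub = n.advertise<sensor_msgs::PointCloud2>("merged_cloud", 10);

  // ±π rad (i.e. -180 to 180 degrees) gives the full 360-degree view;
  // ranges_size is the sum of the rays of both LIDARs; frame_id is the
  // target frame for the merged data.
  ros::NodeHandle pn("~");
  pn.param("angle_min", angle_min, -M_PI);
  pn.param("angle_max", angle_max, M_PI);
  pn.param("ranges_size", ranges_size, 720);
  pn.param<std::string>("frame_id", frame_id, "base_link");
  angle_increment = (angle_max - angle_min) / ranges_size;

  concatenate_with_pointcloud(n, scan_pub, cloud_pub);
  return 0;
}
```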

So we follow the standard protocol for creating a ROS node. We call it “merger” and initialise the node handle n. Our resultant merged data is going to be published in both the laser scan and point cloud formats, so we set up the required publishers. We import the parameters for the merged laser scan to avoid any static values in our code. Since we want a 360-degree view, the minimum and maximum angles are set to -180 and 180 degrees, the number of ranges is set to the sum of the rays of both LIDARs, and frame_id is the frame into which we want to transform our laser scan/point cloud data. We then call the function that concatenates the laser scans via point clouds.

Concatenate with pointcloud
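Continuing the sketch from the previous section (the input topic names are again placeholders, and the intensity-field rename is shown in one possible direction):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/LaserScan.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/common/io.h>  // pcl::concatenatePointCloud
#include <string>

extern double angle_min, angle_max, angle_increment;  // imported in main
extern int ranges_size;
extern std::string frame_id;

sensor_msgs::LaserScan pointcloud_to_laserscan(const sensor_msgs::PointCloud2& cloud_msg);

void concatenate_with_pointcloud(ros::NodeHandle& n,
                                 ros::Publisher& scan_pub,
                                 ros::Publisher& cloud_pub)
{
  while (ros::ok())  // run as long as ROS reports no errors
  {
    // Block until one message arrives from each LIDAR; waitForMessage
    // returns boost shared pointers, keeping the two reads in sync.
    sensor_msgs::PointCloud2ConstPtr c1 =
        ros::topic::waitForMessage<sensor_msgs::PointCloud2>("/lidar1/cloud", n);
    sensor_msgs::PointCloud2ConstPtr c2 =
        ros::topic::waitForMessage<sensor_msgs::PointCloud2>("/lidar2/cloud", n);
    if (!c1 || !c2) continue;

    // Convert to PCL's cloud type, concatenate, and convert back.
    pcl::PCLPointCloud2 p1, p2, merged;
    pcl_conversions::toPCL(*c1, p1);
    pcl_conversions::toPCL(*c2, p2);
    pcl::concatenatePointCloud(p1, p2, merged);

    sensor_msgs::PointCloud2 merged_msg;
    pcl_conversions::fromPCL(merged, merged_msg);
    merged_msg.header.frame_id = frame_id;

    // Resolve the intensity field-name mismatch between the laser pipeline
    // and PCL (which direction you rename depends on your consumers).
    for (size_t i = 0; i < merged_msg.fields.size(); ++i)
      if (merged_msg.fields[i].name == "intensities")
        merged_msg.fields[i].name = "intensity";

    cloud_pub.publish(merged_msg);                          // merged point cloud
    scan_pub.publish(pointcloud_to_laserscan(merged_msg));  // merged laser scan
  }
}
```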

We run the ROS loop as long as there are no errors. We use boost pointers to receive the data from LIDARs 1 and 2. Subscribing to a topic is normally handled through a callback in ROS; in our case, however, we want to receive the data from both point clouds in a synchronized fashion for proper merging. After receiving the point cloud data from each LIDAR, we merge them using the PCL function concatenatePointCloud. Next, we resolve the name conflict of the intensity field, which arises because the two libraries we deal with, sensor_msgs and PCL, name the field differently. We then publish the merged point cloud using the cloud publisher and convert it into a laser scan to obtain the merged laser scan; the pointcloud_to_laserscan function is explained in the next section. Finally, we publish the merged laser scan using the scan publisher.

PointCloud to Laser scan
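A sketch of the conversion, reusing the parameters imported in main; the range limits and the keep-the-closest-hit policy are my assumptions:

```cpp
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/LaserScan.h>
#include <pcl_conversions/pcl_conversions.h>  // pcl::fromROSMsg
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cmath>
#include <string>

extern double angle_min, angle_max, angle_increment;  // imported in main
extern int ranges_size;
extern std::string frame_id;

// Project the merged 3D cloud onto the 2D plane and bin points by angle.
sensor_msgs::LaserScan pointcloud_to_laserscan(const sensor_msgs::PointCloud2& cloud_msg)
{
  sensor_msgs::LaserScan output;

  // Fill the header and scan geometry from the imported parameters.
  output.header = cloud_msg.header;
  output.header.frame_id = frame_id;
  output.angle_min = angle_min;
  output.angle_max = angle_max;
  output.angle_increment = angle_increment;
  output.range_min = 0.0;
  output.range_max = 30.0;  // illustrative sensor limit [m]

  // Allocate space for the merged range and intensity values.
  output.ranges.assign(ranges_size, output.range_max);
  output.intensities.assign(ranges_size, 0.0);

  pcl::PointCloud<pcl::PointXYZI> cloud;
  pcl::fromROSMsg(cloud_msg, cloud);

  for (const pcl::PointXYZI& p : cloud.points)
  {
    // Project the 3D point onto the 2D plane (drop z).
    double range = std::hypot(p.x, p.y);
    double angle = std::atan2(p.y, p.x);

    // Angle and range filters; an intensity threshold could be added here.
    if (angle < angle_min || angle > angle_max) continue;
    if (range < output.range_min || range > output.range_max) continue;

    // Keep the closest hit in each angular bin.
    int index = (int)((angle - angle_min) / angle_increment);
    if (index >= 0 && index < ranges_size && range < output.ranges[index])
    {
      output.ranges[index] = range;
      output.intensities[index] = p.intensity;
    }
  }
  return output;
}
```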

We initialise our output variable in the sensor_msgs::LaserScan format and fill the header with the parameters imported in the main function. The next step is to allocate space for the merged intensity and range values.

After allocation, we populate the range and intensity fields by iterating through the merged point cloud. Point cloud data is 3D, while a laser scan is a 2D array of ranges and intensities, so we project each point onto a 2D plane to convert it into the laser scan format. We also apply angle and intensity filters to keep only valid angles and intensities. Finally, we return the merged laser scan data in the desired frame.

Finally…

Visualization of the 3D point cloud data
The merged LIDAR's 360-degree view is depicted in green

The final result is a data stream on two topics carrying the merged laser scan and the merged point cloud respectively. The merged data has an angle range of 360 degrees, or less if so specified during the import of parameters. For the above problem, both the blind spot and the overlap between the LIDAR scans are resolved as we convert the 3D point cloud back into a laser scan.

You can find the source code here.

Merging LIDAR scans can be of great benefit to robotics engineers and students, as it reduces cost, encourages reuse of source code and scales well. As a robotics enthusiast, I could not find much material covering this method, and I hope this article will be useful to enthusiasts and professionals alike. I have drawn references from the paper published with the ira_laser_tools package by IRALab, available here.
