Detect Low Obstacles using Tilted 2D Lidar
In the robotics world, we normally use 2D or 3D lidar as the perception sensor for obstacle detection. Although camera-based deep learning models have developed very fast in recent years and can detect many kinds of objects, lidar is more convenient and "simpler" to integrate into a robot. There are a lot of mature lidar obstacle detection algorithms in ROS (Robot Operating System). Normally, when using lidar, you don't need to write any code; you just configure a YAML file and everything magically works.
A 3D lidar returns 3D data points and can generate a 3D map of the world, so robots can distinguish objects such as walls, desks, or cans. But the "shortcoming" of 3D lidar is that it's too expensive. For a personal project or at a startup, the budget may be limited and we cannot afford a 3D lidar, so most of us choose a 2D lidar instead. But a 2D lidar has limited ability to recognize different objects, especially when the object is small. A 2D lidar barely finds low obstacles, and some robots use low-mounted sonars for this kind of perception. Still, sonar has its limitations as well; for instance, it may be so sensitive that it reports an "object" that is just noise.
After doing research on "how to use 2D lidar to detect low obstacles", I found two papers on the topic:
- Curb-Intersection Feature Based Monte Carlo Localization on Urban Roads
- Detection and Classification of Obstacles Using a 2D LiDAR Sensor [PDF] Referenced in this article
In this article, I will quickly go through how I used a tilted 2D lidar on our robot.
Software: make sure ROS (ROS1) is installed on the PC that controls your robot.
Hardware:
- A 2D lidar.
- The lidar installed at a height of 0.5 m above the ground.
- The lidar tilted 10 degrees downwards.
1. Keep the useful lidar data.
Since we point the lidar down toward the ground, the useful data for us is the intersection between the lidar scan plane and the ground plane. We define a HORIZONTAL_ANGLE_RANGE value to indicate the useful data range.
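Since I cannot share the original code, here is a minimal sketch of this step in plain Python. The `angle_min`/`angle_increment` arguments mirror the fields of a ROS `sensor_msgs/LaserScan` message, and the value of `HORIZONTAL_ANGLE_RANGE` is a hypothetical placeholder you would tune for your own setup:

```python
import math

# Hypothetical value: keep only the front 90-degree window of the scan.
HORIZONTAL_ANGLE_RANGE = math.radians(90.0)

def keep_useful_ranges(ranges, angle_min, angle_increment):
    """Keep only the (angle, range) pairs inside the useful horizontal window."""
    kept = []
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        if abs(angle) <= HORIZONTAL_ANGLE_RANGE / 2.0:
            kept.append((angle, r))
    return kept
```

In a real node, `ranges`, `angle_min`, and `angle_increment` would come straight from the `LaserScan` callback.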
2. Convert lidar results into the movebase coordinate system
The lidar only returns range measurements and the angle increment between them. But when we try to recognize a lidar point as an obstacle, we use the "height" of the object to decide, and this "height" is relative to the movebase. So we need to transform the point data from the lidar coordinate system into the movebase coordinate system.
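For our setup (lidar 0.5 m above the ground, tilted 10 degrees down), the transform reduces to a pitch rotation plus a vertical offset. This sketch is my own simplified version: it assumes the lidar is only pitched (no roll or yaw) and that the movebase origin sits on the ground directly under the lidar; a real robot would use tf for this:

```python
import math

MOUNT_HEIGHT = 0.5               # lidar height above the ground (m)
TILT_DOWN = math.radians(10.0)   # downward tilt of the scan plane

def lidar_to_movebase(r, theta):
    """Convert one polar lidar measurement to (x, y, z) in the movebase frame."""
    # Point in the lidar frame; the scan plane is the lidar's x-y plane.
    x_l = r * math.cos(theta)
    y_l = r * math.sin(theta)
    z_l = 0.0
    # Pitch the lidar frame down around the y-axis, then raise it by the
    # mounting height, so z is the point's height above the ground.
    x = x_l * math.cos(TILT_DOWN) + z_l * math.sin(TILT_DOWN)
    y = y_l
    z = -x_l * math.sin(TILT_DOWN) + z_l * math.cos(TILT_DOWN) + MOUNT_HEIGHT
    return x, y, z
```

A quick sanity check: a beam pointing straight ahead hits the flat ground at range `MOUNT_HEIGHT / sin(TILT_DOWN)` ≈ 2.88 m, and the transform maps that measurement to z ≈ 0.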
3. Line detection
It's hard to tell from individual points whether they are obstacles, since the lidar occasionally produces noise. We will group the points into line segments and then use those segments later to determine which parts are obstacles.
Since in step 2 we converted the point data from the 2D lidar coordinate system into the 3D movebase coordinate system, each point has a "height" on the z-axis. A threshold Z_DELTA (δ) around the first point of a line decides membership: a point belongs to the line if its z-axis distance to the line's first point is smaller than Z_DELTA (δ). We do this because a point with a much larger height has a better chance of being an obstacle.
Besides, we define a maximum distance Y_MAX_DIST (d) between the first point and a candidate point. If the distance from the first point is larger than Y_MAX_DIST (d), the point starts a new line. Of course, it's possible that a small group of points forms a line that is too small to be an obstacle, or an obstacle of that size can be ignored. We define MIN_LINE_PTS_NUM to control the minimum number of points in a line.
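The segmentation step above can be sketched like this. All three constants are hypothetical values, and I interpret Y_MAX_DIST as the distance along the scan (y) direction in the movebase frame — adjust both to match your own robot:

```python
# Hypothetical thresholds; tune them for your lidar and robot.
Z_DELTA = 0.02          # max z distance (m) from a line's first point
Y_MAX_DIST = 0.10       # max y distance (m) from a line's first point
MIN_LINE_PTS_NUM = 4    # discard lines with fewer points than this

def segment_points(points):
    """Group ordered (x, y, z) points into line segments."""
    lines, current = [], []
    for p in points:
        if not current:
            current = [p]
            continue
        first = current[0]
        dz = abs(p[2] - first[2])
        dy = abs(p[1] - first[1])
        if dz < Z_DELTA and dy < Y_MAX_DIST:
            current.append(p)
        else:
            # Point is too far or too high/low: close the current line.
            if len(current) >= MIN_LINE_PTS_NUM:
                lines.append(current)
            current = [p]
    if len(current) >= MIN_LINE_PTS_NUM:
        lines.append(current)
    return lines
```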
4. Refine lines
Because of Y_MAX_DIST (d), all points are assigned to line segments. It's obvious that some of the lines can be merged into one long line, since most of them are the ground while the movebase is running. We calculate the slope of each line between its first point and its last point. If two adjacent lines are similar, i.e., the difference of their slopes is smaller than REFINE_DELTA, we merge them into one long line.
Now we have segmented our points into different “classes”.
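A sketch of the refinement step, continuing with (x, y, z) point lists as the line representation. REFINE_DELTA is hypothetical, and I take the slope as rise in z over run in y (my assumption for a scan that sweeps sideways):

```python
REFINE_DELTA = 0.1  # hypothetical slope-difference threshold

def line_slope(line):
    """Slope between a line's first and last point: rise in z over run in y."""
    (_, y0, z0), (_, y1, z1) = line[0], line[-1]
    run = y1 - y0
    return (z1 - z0) / run if abs(run) > 1e-9 else float("inf")

def refine_lines(lines):
    """Merge adjacent lines whose slopes differ by less than REFINE_DELTA."""
    if not lines:
        return []
    merged = [lines[0]]
    for line in lines[1:]:
        if abs(line_slope(merged[-1]) - line_slope(line)) < REFINE_DELTA:
            merged[-1] = merged[-1] + line  # extend the previous line
        else:
            merged.append(line)
    return merged
```

With this, consecutive near-flat ground segments collapse into one long line, while a steep segment on an obstacle stays separate.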
5. Detect obstacles
Based on the points segmented in the former four steps, we use the mean height of each group as the group height and compare it with the threshold Z_OBS_EPS (obstacle epsilon threshold height on the z-axis). If the group height is larger than Z_OBS_EPS, the group's points are an obstacle and we publish them as a PointCloud to the costmap.
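The final classification is a simple mean-height check. Z_OBS_EPS below is a hypothetical value; in a real node, the returned groups would be packed into a `sensor_msgs/PointCloud` and published for the costmap:

```python
Z_OBS_EPS = 0.03  # hypothetical obstacle height threshold (m)

def find_obstacles(lines):
    """Return the point groups whose mean z height exceeds Z_OBS_EPS."""
    obstacles = []
    for line in lines:
        mean_z = sum(p[2] for p in line) / len(line)
        if mean_z > Z_OBS_EPS:
            obstacles.append(line)
    return obstacles
```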
Sorry that I cannot show the original code here for legal reasons. Please implement it yourself; it's simple.
When I tested the code in Gazebo, it showed very positive results. I put four cans on the ground at different distances from the robot. The heights of the lidar points on the cans are around 3–8 cm. All four cans were successfully recognized as obstacles, which are shown in RViz.
Make Code Faster
If you have more points and want to speed up your code, one option is:
- Rewrite it in C++
Using a 2D lidar to detect low obstacles is not easy, since the lidar only gives us planar information, but it's possible to find them if we tilt the lidar. For more test results, please read the two papers I linked above.
The algorithm is not perfect: if there is a ramp or large noise from the lidar, it may generate false alarms. But if you don't have the budget for a 3D lidar or a depth camera, this is still a good way to avoid most obstacles on the ground, even when the "obstacle" is a pothole.
The tilted lidar won't receive reflected laser data if the ground is glazed and highly reflective. If this happens, the algorithm described here won't work. This is the most painful part of working with hardware. To solve this problem, we have to buy a better lidar, such as the Hokuyo UBG-04LX-F01, and use a driver like urg_node to reduce the noise in the data.