ROS Navigation: SLAM Basics

Mike
14 min read · Dec 8, 2022


Introduction

In prior work (https://bit.ly/3imFoDS) the focus was on the basics of ROS, including topics, services and actions. The final project involved getting the robot to move around a closed track given a number of specifications. The approach developed to accomplish that task was rudimentary. A subscriber to the laser range finder provided access to a set of distance values spanning an ~180° arc in front of the robot. These distance values were then filtered and used in decision logic to determine the next course of action. Once a decision had been made, a set of velocity commands was sent to a pre-defined publisher, which used the determined velocity values to drive the actuators responsible for translation and rotation of the robot.
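
As a rough illustration of that earlier style of solution, a minimal sketch might look like the following. This is not the course code; the node, topic and threshold names here are hypothetical, and a TurtleBot-style /scan and /cmd_vel interface is assumed:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

OBSTACLE_THRESHOLD = 0.5  # meters; illustrative value only

class NaiveAvoider:
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.scan_callback)

    def scan_callback(self, msg):
        # Filter the ~180 degree arc down to the readings directly ahead.
        mid = len(msg.ranges) // 2
        front = min(msg.ranges[mid - 10:mid + 10])
        cmd = Twist()
        if front < OBSTACLE_THRESHOLD:
            cmd.angular.z = 0.5   # turn away from the obstacle
        else:
            cmd.linear.x = 0.2    # otherwise keep driving forward
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('naive_avoider')
    NaiveAvoider()
    rospy.spin()
```

Every environmental contingency has to be handled by another branch in the callback, which is precisely what does not scale.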

This approach is viable, but it grows cumbersome because it relies heavily on decision logic to deal with environmental contingencies that prevent the robot from completing its goal. As the environment becomes more complex, coding for every contingency doesn't scale and can become intractable. One way to move past some of the issues encountered in this naive approach is to use the simultaneous localization and mapping (SLAM) methodology.

ROS encapsulates the SLAM approach within its navigation stack. The navigation stack is a set of nodes and algorithms that collectively enable the robot to move from position to position while handling environmental contingencies in a more scalable manner. Variants exist, but the standard stack consists of mapping, localization and movement nodes, along with sensor nodes and transforms. The basic setup can be visualized as follows:

Fig. 1 ROS Navigation Stack. Source: https://bit.ly/2WJB02t

There is much to be said about the functional blocks within this navigation stack. The main focus for this course, however, was on understanding how to configure the stack well enough to achieve the desired objectives, rather than delving into algorithmic details. It was still important to understand the basic functionality of each component, but once that functionality was understood, most of the effort went into shaping it toward the context we were interested in.

Of the functional blocks present, three (slam_gmapping, amcl, move_base) possess parameters to be configured, with each of these nodes exposing around 50 parameters at minimum. Tuning strategies have been put forward for dealing with such complexity (https://bit.ly/3Vi54QN). ROS provides a standard build that can simplify this process, and for a given issue that arises, not all parameters are heavily dependent on each other, making it possible in those cases to narrow in on the few that are task relevant.

The main objective for this project was to get the robot to move to a number of key locations on a closed track while avoiding obstacles, using the SLAM methodology. The workflow consisted of first building a map, then identifying the key points using the localization module, and finally tuning the path planning parameters to navigate effectively to the desired objectives. The end products of these modules were then integrated into a navigation module that let the user specify which behavior the robot should perform. Upon entering the behavior, the client would first query a server that retrieved the position and orientation coordinates of the desired goal. These coordinates were then sent to the move_base node, allowing the robot to begin planning a path to the goal. Note as well that an additional "lap" behavior was included to better compare the results of this approach with the one implemented in the ROS Basics course.

The main motivation for these projects remains exploratory. Getting a robot to navigate effectively in more practical environments is the goal, so it is necessary to keep developing an understanding of the essential elements that enable that goal, and of their various shortcomings. Developing a solution in the ROS Basics course offered a starting point for thinking about this problem. Moving on to one of the more commonly used methodologies in the field would, it was hypothesized, make plainer how to better accomplish that goal and highlight many of the key challenges that still remain.

Methods

The Construct (https://app.theconstructsim.com/#/) offers an online compute and code environment. This environment consists of web shells, an IDE, Jupyter notebooks, and Gazebo, a ROS-integrated simulation environment.

Fig. 2 Starting from left and moving rightward: the web shell, IDE environment, Jupyter notebook and Gazebo simulator sections of the development environment. The remaining sections were unused.

This project consisted of four major sections: map creation, robot localization, path planning development and navigation integration.

Section I: Create a map of the environment

Mapping required the slam_gmapping, teleoperation and map_server packages.

The slam_gmapping package was accessed via a launch file that consisted of a means to start the ROS native slam_gmapping node and a collection of tunable parameters.

Fig. 3 slam_gmapping launch file used throughout the project. Note that while the parameters used correspond directly, certain parameter values may vary depending on the build.
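
Since Fig. 3 is an image, here is a hedged sketch of what a launch file of this shape typically looks like. The parameter values below are illustrative defaults, not the tuned values used in the project:

```xml
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <!-- Frame names: adjust to match the robot's tf tree -->
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
    <param name="map_update_interval" value="5.0"/>
    <!-- Laser and particle filter settings (illustrative values) -->
    <param name="maxUrange" value="6.0"/>
    <param name="particles" value="30"/>
    <param name="linearUpdate" value="0.5"/>
    <param name="angularUpdate" value="0.5"/>
    <!-- Map resolution in meters per cell -->
    <param name="delta" value="0.05"/>
  </node>
</launch>
```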

Once the gmapping node had been started and suitable parameters chosen, we needed a means to locomote the robot in order to build the map. ROS has a native package, turtlebot_teleop, that allows for manual navigation.
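
For a stock TurtleBot build, the teleop node can typically be started with a one-line command such as the following; the exact package and launch file depend on the robot and distribution, and our modified version was started the same way:

```
roslaunch turtlebot_teleop keyboard_teleop.launch
```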

Fig. 4 Terminal output for the teleop program used throughout the project. Red boxes surround areas of interest. The top leftmost red box surrounds the keys used to move the robot (w/x forward and backward, a/d left and right, s to stop). The top rightmost box shows the keys for saving and deleting information, as further specified below the middle box. The middle box indicates the linear/angular velocity speed changes. The bottom box indicates the current linear and angular velocities.
Fig. 5 RVIZ simulation environment demonstrating robot mapping behavior.

After the desired terrain had been traversed, the last operation was to save the map. Map saving occurred via the ROS native map_server node. To save a map, this node provides a specific command that can be executed via the command line or from a launch file.

Fig. 6 Standard map server command for map saving.
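
For reference, the standard map_saver invocation looks like the following (the map name here is illustrative). It writes a .pgm image of the occupancy grid and a .yaml metadata file to the current directory:

```
rosrun map_server map_saver -f my_map
```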

Note that the standard means of interfacing with the map_saver node was modified slightly for convenience. To avoid clunky repetition, a map-saving key was added to the teleop program, enabling the end user to save the map without executing a new terminal command as in Fig. 6 or starting a new launch file. See Fig. 4 for more detail.

Section II: Localize the robot

Localization required the amcl, teleoperation and map_server packages. It was also necessary to develop a service server to save collected data points.

The amcl package was accessed via a launch file that consisted of a means to start the ROS native amcl node and a collection of tunable parameters.

Fig. 7 amcl launch file used throughout the project. Note that while the parameters used correspond directly, certain parameter values may vary depending on the build.

Note at the top of Fig. 7 how the map_server node is started first in order to load the map saved in Section I.
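
A hedged sketch of that structure is shown below; the map path and the parameter values are illustrative, not the project's tuned settings:

```xml
<launch>
  <!-- Load the map saved in Section I before starting amcl -->
  <arg name="map_file" default="$(find my_project)/maps/my_map.yaml"/>
  <node pkg="map_server" type="map_server" name="map_server" args="$(arg map_file)"/>

  <node pkg="amcl" type="amcl" name="amcl" output="screen">
    <!-- Frame names: adjust to the robot's tf tree -->
    <param name="odom_frame_id" value="odom"/>
    <param name="base_frame_id" value="base_link"/>
    <param name="global_frame_id" value="map"/>
    <!-- Particle filter size (illustrative values) -->
    <param name="min_particles" value="200"/>
    <param name="max_particles" value="3000"/>
  </node>
</launch>
```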

Once the amcl node had been started and suitable parameters chosen, we needed a means to locomote the robot in order to identify and store points of interest. As mentioned in Section I, ROS has a native package, turtlebot_teleop, that, after being modified to some degree, was used for this purpose.

Fig. 8 Robot positioned at the three locations of interest on the map. The top left location label was “corner1”. The top right location label was “corner2”. The bottom middle location label was “pedestrian”.

Upon reaching one of these three points, the robot needed a way to store it for later retrieval. This was done via a service server. This server used an amcl subscriber to extract the current pose and orientation and, once called via the teleop program, saved that information to a file for later use.

Fig. 9 Teleop program after pressing the “t” button to save a data point.
Fig. 10 Spot saver server after being called by the teleop program.
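
A minimal sketch of such a spot-saver server is given below. It assumes a std_srvs/Trigger-style interface, a label passed as a private parameter, and a hypothetical plain-text spots file; the course code used its own service definition and file format:

```python
#!/usr/bin/env python
import rospy
from std_srvs.srv import Trigger, TriggerResponse
from geometry_msgs.msg import PoseWithCovarianceStamped

class SpotSaver:
    def __init__(self):
        self.current_pose = None
        # amcl publishes its pose estimate on /amcl_pose
        rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped,
                         self.pose_callback)
        rospy.Service('/save_spot', Trigger, self.handle_save)

    def pose_callback(self, msg):
        self.current_pose = msg.pose.pose

    def handle_save(self, req):
        if self.current_pose is None:
            return TriggerResponse(success=False, message='No pose yet')
        p, q = self.current_pose.position, self.current_pose.orientation
        # Hypothetical file format: label x y z qx qy qz qw per line
        label = rospy.get_param('~label', 'spot')
        with open('spots.txt', 'a') as f:
            f.write('%s %f %f %f %f %f %f %f\n' %
                    (label, p.x, p.y, p.z, q.x, q.y, q.z, q.w))
        return TriggerResponse(success=True, message='Saved %s' % label)

if __name__ == '__main__':
    rospy.init_node('spot_saver')
    SpotSaver()
    rospy.spin()
```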

For the localization section it may also be important to set an initial position for the robot. This can be done via the RVIZ simulation environment (see Fig. 5). Toward the top there is a button, “2D Pose Estimate”. Clicking on the map after clicking that button facilitated the successful completion of the robot's localization process.

Section III: Create robot path planning system

Path planning required the move_base, amcl and map_server packages.

The move_base, amcl and map_server packages were accessed via a launch file that consisted of a means to start these ROS native nodes and a collection of tunable parameters specific to the move_base node.

Fig. 11 move_base launch file used throughout the project. Note that while the parameters used correspond directly, certain parameter values may vary depending on the build.

In this case many of the packages used previously for mapping and localization needed to be started alongside the main path planning node, move_base. Additionally, due to the number of parameters required by the move_base node, they were placed in separate configuration files to make the tuning process more manageable.

Fig. 12 Costmap parameter files. On the left is the common costmap; on the top right, the global costmap; and on the bottom left, the local costmap.
Fig. 13 Planner and move_base configuration files. On the top left is the dwa local planner configuration file; on the top right, the move_base configuration file; and on the bottom, the global planner configuration file.
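
Since those files are shown as images, here is a hedged sketch of what a common costmap file of this kind often contains. The names and values below are illustrative, not the project's tuned settings:

```yaml
# costmap_common_params.yaml (illustrative values)
obstacle_range: 2.5        # max range at which obstacles are marked (m)
raytrace_range: 3.0        # max range used to clear free space (m)
robot_radius: 0.2          # approximate robot footprint radius (m)
inflation_radius: 0.35     # how far cost is inflated around obstacles (m)
observation_sources: laser
laser: {sensor_frame: base_scan, data_type: LaserScan,
        topic: /scan, marking: true, clearing: true}
```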

Path planning was straightforward in that it only required the construction of a launch file. Testing the end product of this section, as well as of Sections I and II, had to wait for the code built in Section IV. However, due to the number of relevant parameters in this section, it was recommended that the user become familiar with an additional package called rqt_reconfigure. This package offers a GUI that lets the end user adjust the parameters in figures 12 and 13 without restarting the simulation.

Fig. 14 rqt_reconfigure GUI used to make the path planning configuration process more manageable.
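
The GUI in Fig. 14 can be started from a web shell with the standard command:

```
rosrun rqt_reconfigure rqt_reconfigure
```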

Section IV: Create program that interacts with and integrates various components of the navigation stack

This section consisted of the construction of a service and action server.

The service server was the means by which the recordings stored in the localization section were accessed. Accessing these recordings was a necessary preliminary step before the robot could be sent to the desired location. After the location label was typed in the command line, the server would look through a file and retrieve the associated position and orientation coordinates.
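
A minimal sketch of that lookup step, assuming the hypothetical spots.txt format from the Section II sketch:

```python
def get_spot(label, path='spots.txt'):
    """Return ((x, y, z), (qx, qy, qz, qw)) for a saved label."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == label:
                nums = [float(v) for v in fields[1:8]]
                return tuple(nums[0:3]), tuple(nums[3:7])
    raise KeyError('Unknown label: %s' % label)
```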

Fig. 15 Command line output for the navigation program that integrates sections I-III with a service and action server.

The action server was the means by which the move_base node was activated. Once the coordinates for the desired goal were retrieved, a move_base native goal object was declared. This goal object has built-in functionality for accepting the coordinate parameters and can be passed to a function that tells the move_base node to begin path planning and execution.

Fig. 16 Sample code demonstrating the construction of a goal object that stores positional and orientation information. This information is then sent to the move_base node via the send_goal function to begin the navigation process.
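
The code in Fig. 16 is an image; below is a hedged equivalent using the standard actionlib client for move_base. The goal coordinates here are illustrative; in the project they came from the spots retrieved by the service server:

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('navigation_client')

# move_base exposes a standard actionlib interface on the 'move_base' namespace
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
# Illustrative coordinates only
goal.target_pose.pose.position.x = 1.0
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)        # tells move_base to begin planning and execution
client.wait_for_result()
rospy.loginfo('Result state: %d', client.get_state())
```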

For the navigation section as well as the following section it may also be important to set an initial position for the robot. This can be done via the RVIZ simulation environment (see Fig. 5). Toward the top there is a button, “2D Pose Estimate”. Clicking on the map after clicking that button facilitated the successful completion of the robot's path planning process.

Section V: Testing and the Real Robot Lab

After the components from Sections I-IV were in place, the task was to use the web shell to run simulations and record results. One component involved launching Gazebo, which enabled us to visualize and interact with the robot in its environment. The remaining components varied: testing a single module, such as the mapping module, required its own set of launch files. Keeping track of what needed to be activated or adjusted was straightforward once the requirements for each section were understood and documented; see Sections I-IV for more detail.

Fig. 17 Command to start the Gazebo simulator. This had to be input to the web shell in order for it to be executed properly.
Fig. 18 Left part of the image shows the terminal after executing the command from Fig. 17. The right shows the Gazebo simulator after it has been initialized.

After testing the robot in the simulation environment, we then began testing it in the lab. To test the robot in the lab we needed to book a time. This was done via the website: once there, you navigate to a tab called “Real Robot Lab”. After clicking through a series of steps to select the robot, ROS distribution and time of interest, we were ready to begin testing in the lab environment.

Fig. 19 Web browser after navigating to the Real Robot Lab tab on The Construct website and selecting the “book now” button that appears.

When the specified time for testing came, we would load up the standard compute environment, only now that the booking had been set, two additional controls appeared: a radio button and a robot head.

Fig. 20 Two new options become available in the ROS development environment after booking a live session: the radio button and robot head button.

Hovering over the robot head for a while triggered a pop-up box that allowed us to connect to the lab via an on/off button; it also showed how much time remained in our booking.

Fig. 21 Box that appears after clicking or hovering over the robot head in the development environment. Toggling it on or off connects or disconnects the developer from the robot in the lab.

Once we connected to the robot in the lab, we then clicked on the radio button, and if all steps had been done correctly, a screen appeared that was split down the middle, with each side showing the robot in the lab environment from a different angle.

Fig. 22 Footage of the robot in the lab environment that the developer sees after connecting to the robot and pushing the radio button.

From here, we could begin testing with the robot much as we had in the Gazebo environment. All that was required was to issue a number of commands in the web shell that loaded the desired programs onto the robot and began executing them.

Results

Note for all lab videos: the lab camera did not have the best quality, and minimal editing was performed in order to demonstrate results. Raw footage in all cases can be provided upon request.

Robot test environment experiment. The robot navigates the track, eventually finding the first area of interest, “corner1”.
Robot lab environment experiment. The robot navigates the track, finding the first area of interest in a similar manner to the simulation.
Robot test environment experiment. The robot navigates the track, eventually finding the second area of interest, “pedestrian”.
Robot lab environment experiment. The robot navigates the track, finding the second area of interest in a similar manner to the simulation.
Robot test environment experiment. The robot navigates the track, eventually finding the third area of interest, “corner2”.
Robot lab environment experiment. The robot navigates the track, finding the third area of interest in a similar manner to the simulation.
Robot test environment experiment. The robot navigates the track, eventually performing a single lap.
Robot lab environment experiment. The robot navigates the track, performing a single lap in a similar manner to the simulation. In this case a number of artifacts appeared toward the third corner of the lap, which in the video causes the robot to appear to jump ahead noticeably.

Conclusions

In terms of coding I found this approach less demanding. In attempting to move the robot from the simulation environment to the lab using the prior approach, it seemed like anything that could go wrong did. If one of my conditionals had poor logic, this would very often prevent the robot from achieving the task. That poor logic could stem from a number of factors: using an “or” statement instead of an “and” statement, or using a threshold value that was sufficient in the simulation but needed a tighter or looser bound in the lab. I could also simply fail to account for a case in the lab because it did not occur in the simulation. Problems like these generally occurred far less with the SLAM approach.

There was still a great deal of obscurity, and a number of outstanding issues were noted with the SLAM approach. Through this course I did gain an appreciable amount of knowledge of the components in the navigation stack, which made it possible to adjust the parameters with a reasonable amount of time and effort when the robot did not successfully complete its task. It is fortunate how much the ROS standard build provides. I found it was often possible to isolate a few parameters that would greatly improve performance when an issue occurred. I am left wondering, however, how long this would have taken had there not been a baseline provided by the ROS standard build for the TurtleBot. This remains an outstanding issue that likely needs further investigation.

Even though I did manage to get the robot to achieve what I wanted, it was clear that a number of issues continued to occur which, given the nature of the approach, I was unable to understand well enough to fix. For example, in the lab environment videos for the “corner1” setting, the robot backs up slightly before it begins to move forward toward its goal. Another example, again in the lab environment, is the “lap” setting. Here the robot is meant to return to the “pedestrian” position before performing a single lap around the track; it substantially overshoots its mark before turning around and reaching its goal.

Many of the issues that did occur appeared to stem from parameters set too high: kinematic parameters such as the max acceleration, precision-related parameters such as the goal tolerance, or collision avoidance parameters such as the inflation radius. As mentioned, one of the benefits of this approach was a much better translation to the lab environment; in other words, many of the issues with these parameters were noted in both environments, though there were differences in certain cases. For example, the collision avoidance system in the simulation environment actually proved more troublesome in certain cases, because the basic build of the simulation environment had an obstacle in the path of the robot that was not present in the lab. This obstacle was placed close enough to the obstacles in the center that if the inflation radius was set too high, the robot would essentially stall out, the collision system preventing it from finding a valid trajectory.

Even though it was possible to correct for these errors, it was not possible to eliminate them to a desirable degree. Part of the reason, I think, is a lack of intuition regarding some of the underlying algorithms; intuition of this sort usually takes time to develop. Additionally, much of what is being done to navigate the robot is still fairly simplistic. I am not familiar with everything more advanced approaches employ, but, for example, I am aware that robot localization can be improved via sensor fusion, which is not implemented in the current build.

Taking these issues into consideration, I still claim that the robot met its specifications for this particular project. Note as well that, although the “lap” behavior was not specifically stated in the original problem statement, it was designed to offer a comparison to the approach employed in the ROS Basics course. The behavior is still successful, though I was unable to figure out how to keep the robot within a specifiable distance from the wall. To do this I would likely have to go into some of the algorithmic details and implement a penalty term, or something of that nature, that pushes the robot to stay closer to the wall, though maybe there are more creative solutions within the existing stack that I failed to see.

I thought this course was fairly straightforward, and I enjoyed getting to do a project focused on the SLAM methodology. I am keen to continue developing a solution to this problem and would like to see what occurs in more complex, real world environments. A potential next step is to implement sensor fusion, along with some learning algorithms. It is not clear whether these will enable the robot to operate in more practical environments; however, I think they are essential elements that have yet to be explored. If there are any questions or suggestions regarding this project, let me know in the comment section on Medium or via one of my alternative contacts listed below.
