Introduction
In prior work (https://bit.ly/3imFoDS) the focus was on the basics of ROS: topics, services, and actions. The final project involved getting the robot to move around a closed track given a number of specifications. The approach developed to accomplish that task was rudimentary. A subscriber to the laser range finder provided access to a set of distance values spanning an ~180° arc in front of the robot. These distance values were then filtered and fed into decision logic to determine the next course of action. Once a decision had been made, a set of velocity commands was sent to a pre-defined publisher, which used the chosen velocity values to drive the end-effectors responsible for translating and rotating the robot.
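A minimal sketch of that pipeline, with illustrative topic names and thresholds, might look as follows:

```python
#!/usr/bin/env python
# Sketch of the naive pipeline: read the laser scan, reduce it to a decision,
# publish a velocity command. Topic names and thresholds are illustrative.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

pub = None

def scan_callback(msg):
    # Filter the ~180 degree arc down to the closest reading straight ahead.
    mid = len(msg.ranges) // 2
    front = min(msg.ranges[mid - 20:mid + 20])
    cmd = Twist()
    if front > 0.5:           # illustrative clearance threshold (meters)
        cmd.linear.x = 0.2    # path clear: translate forward
    else:
        cmd.angular.z = 0.5   # obstacle ahead: rotate in place
    pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('naive_navigator')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()
```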
This approach is viable, but it can grow cumbersome, as it relies heavily on decision logic to deal with environmental contingencies that prevent the robot from completing its goal. As the environment becomes more complex, coding for every contingency scales poorly and can become intractable. One way to move past some of the issues encountered in this naive approach is to use the simultaneous localization and mapping (SLAM) methodology.
ROS encapsulates the SLAM approach within its navigation stack. The navigation stack is a set of nodes and algorithms that collectively enable the robot to move from position to position while dealing with environmental contingencies in a more scalable manner. There are likely variants, but the standard stack consists of the mapping, localization and movement nodes as well as sensor nodes and transforms. The basic setup can be visualized as follows:
[Figure: overview of the navigation stack, showing the mapping (slam_gmapping), localization (amcl), and movement (move_base) nodes together with the sensor sources and transforms.]
There is much to be said about the functional blocks within this navigation stack. The main focus of this course, however, was on understanding the stack well enough to configure it toward desired objectives, rather than delving into algorithmic details. It was still important to understand the basic functionality of each component, but once that functionality was understood, most of the effort shifted to shaping it to the context at hand.
Of the functional blocks present, three (slam_gmapping, amcl, move_base) expose parameters to be configured, and each of these nodes has roughly 50 parameters at minimum. Tuning strategies have been put forward for dealing with this complexity (https://bit.ly/3Vi54QN). ROS provides a standard build that eases the process, and for a given issue the relevant parameters are often not heavily interdependent, making it possible in those cases to narrow in on the few that are task relevant.
The main objective for this project was to get the robot to move to a number of key locations on a closed track while avoiding obstacles, using the SLAM methodology. The workflow consisted of first building a map, then identifying the key points using the localization module, and finally tuning the path planning parameters to navigate effectively to the desired objectives. The end products of these modules were then integrated into a navigation module designed to let the user specify which behavior the robot should perform. Upon entering the behavior, the client would first query a server whose role was to retrieve the position and orientation coordinates of the desired goal. These coordinates were then sent to the move_base node, allowing the robot to begin planning a path to the desired goal. Note as well that an additional "lap" behavior was included to better compare the results of this approach with that implemented in the ROS basics course.
The main motivation behind these projects remains exploratory. The goal is to get a robot to navigate effectively in more practical environments, which requires a continued understanding of the essential elements that enable that goal and of their various shortcomings. Developing a solution in the ROS basics course offered a starting point for thinking about this problem. It was hypothesized that moving on to one of the more commonly used methodologies in the field would make plainer how to better accomplish that goal, and would also highlight many of the key challenges that still remain.
Methods
The Construct (https://app.theconstructsim.com/#/) offers an online compute and code environment consisting of web shells, an IDE, Jupyter notebooks, and Gazebo, a ROS-specific simulation environment.
This project consisted of four major sections: map creation, robot localization, path planning development and navigation integration.
Section I: Create a map of the environment
Mapping required the slam_gmapping, teleoperation and map_server packages.
The slam_gmapping package was accessed via a launch file that consisted of a means to start the ROS native slam_gmapping node and a collection of tunable parameters.
Once the gmapping node had been started and suitable parameters chosen, we needed a means to locomote the robot in order to build the map. ROS has a native package, turtlebot_teleop, that allows for manual navigation.
After the desired terrain had been traversed, the last operation was to save the map. Map saving occurred via the ROS native map_server package, whose map_saver node can be executed from the command line or a launch file.
Note that the standard means of interfacing with map_saver was modified slightly for convenience. To avoid clunky repetition, a button for map saving was added to the teleop program, enabling the end user to save the map without executing a new terminal command (as in Fig. 6) or starting a new launch file. See Fig. 4 for more detail.
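A minimal sketch of that modification, assuming a Python teleop loop, might look like the following; the key choice and map path are illustrative assumptions:

```python
#!/usr/bin/env python
# Sketch of the teleop modification: one extra key binding that shells out to
# map_saver, so no second terminal or launch file is needed. The key choice
# and map path are illustrative assumptions.
import subprocess

def handle_key(key):
    if key == 'p':  # hypothetical "save map" key added to the teleop bindings
        # Equivalent to running: rosrun map_server map_saver -f <path>
        subprocess.call(['rosrun', 'map_server', 'map_saver',
                         '-f', '/home/user/catkin_ws/maps/track'])
```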
Section II: Localize the robot
Localization required the amcl, teleoperation and map_server packages. It was also necessary to develop a service server to save collected data points.
The amcl package was accessed via a launch file that consisted of a means to start the ROS native amcl node and a collection of tunable parameters.
In Fig. 7 at the top note how the map_server node is first called in order to load the saved map from Section I.
Once the amcl node had been started and suitable parameters chosen, we needed a means to locomote the robot in order to identify and store points of interest. As mentioned in Section I, ROS has a native package, turtlebot_teleop, that, after being modified to some degree, was used for this purpose.
Upon reaching one of the three points of interest, the robot needed a way to store it for later retrieval. This was done via a service server. The server used an amcl subscriber to extract the current position and orientation and, once called via the teleop program, saved that information to a file for later use.
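A minimal sketch of such a server, assuming the standard /amcl_pose topic, a std_srvs/Trigger service, and illustrative file and parameter names (the project's actual interface may differ):

```python
#!/usr/bin/env python
# Sketch of the pose-recording server: caches the most recent amcl pose and,
# when the service is called, appends it to a file under a label read from a
# parameter. Service, file path, and parameter names are assumptions.
import rospy
from std_srvs.srv import Trigger, TriggerResponse
from geometry_msgs.msg import PoseWithCovarianceStamped

last_pose = None

def amcl_callback(msg):
    global last_pose
    last_pose = msg.pose.pose  # position + orientation from amcl

def save_spot(req):
    if last_pose is None:
        return TriggerResponse(success=False, message='no amcl pose received yet')
    label = rospy.get_param('~label', 'spot')
    p, q = last_pose.position, last_pose.orientation
    with open('/home/user/catkin_ws/spots.txt', 'a') as f:
        f.write('%s %f %f %f %f %f %f %f\n' %
                (label, p.x, p.y, p.z, q.x, q.y, q.z, q.w))
    return TriggerResponse(success=True, message='saved %s' % label)

if __name__ == '__main__':
    rospy.init_node('spot_recorder')
    rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, amcl_callback)
    rospy.Service('/save_spot', Trigger, save_spot)
    rospy.spin()
```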
For the localization section it may also be important to set an initial position for the robot. This can be done via the RViz simulation environment; see Fig. 5. Toward the top there is a button, "2D Pose Estimate". Clicking on the map after clicking that button facilitated the successful completion of the robot's localization process.
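The RViz button itself simply publishes a geometry_msgs/PoseWithCovarianceStamped message on the /initialpose topic that amcl subscribes to, so the same effect can be scripted; the pose values below are illustrative:

```python
#!/usr/bin/env python
# Publish an initial pose estimate programmatically; the pose values below
# are illustrative.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('set_initial_pose')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                      queue_size=1, latch=True)
msg = PoseWithCovarianceStamped()
msg.header.frame_id = 'map'
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 0.0        # illustrative starting position
msg.pose.pose.orientation.w = 1.0     # facing along the map x-axis
pub.publish(msg)
rospy.sleep(1.0)  # give the latched message time to reach amcl
```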
Section III: Create robot path planning system
Path planning required the move_base, amcl and map_server packages.
The move_base, amcl and map_server packages were accessed via a launch file that consisted of a means to start these ROS native nodes and a collection of tunable parameters specific to the move_base node.
In this case many of the packages used previously in mapping and localization needed to be started alongside the main path planning node, move_base. Additionally, due to the number of parameters the move_base node requires, these were placed in separate configuration files to make the tuning process more manageable.
Path planning itself was straightforward in that it only required the construction of a launch file; further testing had to wait on code built in Section IV, which exercised the end products of this section as well as Sections I and II. However, given the number of relevant parameters for this section, it was recommended that the user familiarize themselves with an additional package called rqt_reconfigure. This package offers a GUI that lets the end user adjust the parameters in Figs. 12 and 13 without restarting the simulation.
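The parameters rqt_reconfigure exposes can also be adjusted from code through the dynamic_reconfigure client. The sketch below assumes a DWA local planner; the node name and values are illustrative:

```python
#!/usr/bin/env python
# Adjust move_base parameters at runtime, as rqt_reconfigure does through its
# GUI. Assumes a DWA local planner; node name and values are illustrative.
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('tuning_helper')
client = Client('/move_base/DWAPlannerROS', timeout=10)
client.update_configuration({'max_vel_x': 0.3, 'xy_goal_tolerance': 0.15})
```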
Section IV: Create program that interacts with and integrates various components of the navigation stack
This section consisted of the construction of a service and action server.
The service server was the means by which the recordings stored in the localization section were accessed, a preliminary step necessary to locomote the robot to the desired location. After the location label was typed in the command line, the server would look through a file and retrieve the associated position and orientation coordinates.
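The core lookup might be sketched as a plain function; in the project this logic lived inside the service callback, and the file path and format here are assumptions matching the hypothetical recorder sketched in Section II:

```python
# Core of the lookup as a plain function; in the project this logic lived in
# the service callback. The file path and format are assumptions, matching
# the hypothetical recorder sketched in Section II.
def get_spot(label, path='/home/user/catkin_ws/spots.txt'):
    """Return ((x, y, z), (qx, qy, qz, qw)) for the given label, or None."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields[0] == label:
                x, y, z, qx, qy, qz, qw = map(float, fields[1:8])
                return (x, y, z), (qx, qy, qz, qw)
    return None
```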
The action server was the means by which the move_base node was activated. Once the coordinates for the desired goal were retrieved, a move_base native goal object was declared. This goal object has built-in functionality to accept the coordinate parameters and to interact with a function that tells the move_base node to begin path planning and execution.
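A minimal sketch of that goal-sending step, using the standard actionlib client interface to move_base; the coordinates are illustrative stand-ins for the values retrieved by the service server:

```python
#!/usr/bin/env python
# Sketch of activating move_base through the standard actionlib interface.
# The goal coordinates here are illustrative; in the project they came from
# the retrieval service described above.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('goal_sender')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.2        # illustrative goal position
goal.target_pose.pose.position.y = -0.5
goal.target_pose.pose.orientation.z = 0.0     # illustrative goal heading
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)    # move_base begins planning and execution
client.wait_for_result()
```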
As in Section II, for the navigation section (as well as the following section) it may also be important to set an initial position for the robot via RViz's "2D Pose Estimate" button (see Fig. 5); doing so facilitated the successful completion of the robot's path planning process.
Section V: Testing and the Real Robot Lab
After the components for Sections I-IV were put in place, the task was to use the web shell to run simulations and record results. One component of this involved launching Gazebo, which let us visualize and interact with the robot in its environment. The remaining components varied: testing a single module, such as the mapping module, required its own set of launch files. Keeping track of what needed to be activated or adjusted was straightforward once the requirements for each section were understood and documented; see Sections I-IV for more detail.
After testing the robot in the simulation environment, we began testing it in the lab. To test the robot in the lab we needed to book a time. This was done via the website by navigating to the "Real Robot Lab" tab and clicking through a series of steps to select the robot, ROS distribution, and time of interest; once booked, we were ready to begin testing in the lab environment.
When the specified testing time came, we would load the standard compute environment, only now that the booking had been set, two additional elements appeared: a radio button and a robot head icon.
Hovering over the robot head for a while triggered a pop-up box that let us connect to the lab via an on/off button; it also showed how much time remained in our booking.
Once we connected to the robot in the lab, we then clicked on the radio button, and if all steps had been done correctly, a screen appeared that was split down the middle, with each side showing the robot in the lab environment from a different angle.
From here, we could begin testing with the robot much as we had in the Gazebo environment. All that was required was to issue a number of commands in the web shell that would compile the desired programs for the robot and begin executing them.
Results
Conclusions
In terms of coding, I did find this approach less demanding. In attempting to move the robot from the simulation environment to the lab using the prior approach, it seemed like anything that could go wrong did. In other words, if one of my conditionals had poor logic, this would very often prevent the robot from achieving the task. The poor logic could stem from a number of factors, such as using an "or" statement instead of an "and" statement, or using a threshold value that was sufficient in the simulation but needed a tighter or looser bound in the lab. I could also simply fail to account for a case in the lab because it did not occur in the simulation. Generally, problems like these did not occur as much with the SLAM approach.
There was still a great deal of obscurity, and a number of outstanding issues were noted with the SLAM approach. Using this approach in this course, I gained an appreciable amount of knowledge about the components of the navigation stack, which made it possible to adjust the parameters with a reasonable amount of time and effort when the robot did not complete its task. It is fortunate how much the ROS standard build provides: I found it possible to isolate a few parameters that would greatly improve performance when an issue occurred. I am left wondering, however, how long this would have taken without the baseline provided by the ROS standard build for the turtlebot. This remains an outstanding issue that likely needs further investigation.
Even though I did manage to get the robot to achieve what I wanted, it was clear that a number of issues continued to occur which, due to the nature of the approach being used, I was unable to understand well enough to fix. For example, in the lab environment videos, it is possible to observe in the "corner1" setting that the robot backs up slightly before it begins to move forward toward its goal. Another example, again in the lab environment, is the "lap" setting. Here the robot is meant to return to the "pedestrian" position before performing a single lap around the track; it substantially overshoots its mark, however, before turning around and reaching its goal.
Many of the issues that did occur appeared to stem from parameters being set too high: kinematic parameters such as the max acceleration, precision-related parameters such as the goal tolerance, or collision avoidance parameters such as the inflation radius. As mentioned, one of the benefits of this approach seemed to be a much better translation to the lab environment; in other words, many of the issues with these parameters were noted in both environments, though there were some differences in certain cases. For example, the collision avoidance system in the simulation environment actually proved more troublesome in certain cases, because the basic build of the simulation had an obstacle in the robot's path that was not present in the lab. This obstacle was placed close enough to the obstacles in the center that, if too high an inflation radius was used, the robot would essentially stall out, the collision system preventing it from finding a valid trajectory.
Even though it was possible to correct for these errors, it was not possible to eliminate them to a desirable degree. Part of the reason, I think, is a lack of intuition regarding some of the underlying algorithms, and intuition of this kind usually takes time to develop. Additionally, I think much of what is being done to navigate the robot is still fairly simplistic. I am not familiar with everything that more advanced approaches employ, but, for example, I am aware that robot localization can be improved via sensor fusion, which is not implemented in the current build.
Taking these issues into consideration, I still claim that the robot met its specifications for this particular project. Note as well that, although the "lap" behavior was not specifically stated in the original problem statement, it was designed to offer a comparison with the approach employed in the ROS basics course. Here the behavior is still successful, though I was unable to figure out how to keep the robot within a specified distance from the wall. To do this I would likely have to go into some of the algorithmic details and implement a penalty term, or something of this nature, that pushes the robot to stay closer to the wall, though maybe there are more creative solutions within the existing stack that I failed to see.
I thought this course was fairly straightforward, and I enjoyed getting to do a project focused on the SLAM methodology. I am keen to continue developing a solution to this problem, and I would like to see what occurs in more complex, real-world environments as well. A potential next step is to implement sensor fusion, along with some learning algorithms. It is not clear whether these will enable the robot to operate in more practical environments; however, I think they are essential elements that have yet to be explored. If there are any questions or suggestions regarding this project, let me know in the comment section on Medium or via one of my contacts listed below.
Contact
LinkedIn: https://linkedin.com/in/myqrizzo
Mastodon: https://mstdn.social/@ekim
Github: https://github.com/myqrizzo/