Implementing ORB-SLAM on Ubuntu 18.04 & ROS Melodic

Mohammad Hamdaan
Oct 11, 2020 · 6 min read


(Demo video: ignore the stuttered recording; my processor wasn't able to handle too many applications at once.)

In recent years, the field of robotics has undergone huge development in many areas. One domain that has revolutionized the way machines interact with unknown environments is SLAM. Simultaneous Localization And Mapping (SLAM) is a technique that allows an agent to reconstruct its environment (mapping) while keeping track of its own position within it (localization) as it moves around. Traditional SLAM algorithms often map the environment in two dimensions, but in a real-world setting a 2-D map is not sufficient for a robot with 6 degrees of freedom (6-DOF).

Visual-SLAM (V-SLAM) algorithms can reconstruct the surroundings in three dimensions. Many V-SLAM algorithms have been proposed, and one that works in real time without requiring heavy computational resources is the ORB-SLAM algorithm.

ORB-SLAM for a monocular camera was first proposed in: Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

ORB-SLAM is keyframe-based and works by detecting ORB features in each image frame, which it uses to build a sparse reconstruction of the surroundings. In this tutorial, you will learn how to implement this amazing algorithm for your own robotic applications.

The source code for the ROS package can be found on this GitHub page.

Camera calibration

The orb_slam_2_ros package supports Intel RealSense and Mynteye S cameras out of the box, and it handles monocular, stereo, and RGB-D configurations. The intrinsic and extrinsic parameters of those cameras are already hard-coded in their respective launch files. In my case, I used an ordinary monocular iBall webcam, so I had to calibrate the camera manually.

Step 1

Plug in your USB webcam, open a new terminal window, and enter the following:

$ sudo apt install ros-melodic-usb-cam
$ roslaunch usb_cam usb_cam-test.launch

If everything works fine, you should see a new window appear, displaying the visual feed from your camera. If you get an error message, try changing /dev/video0 to /dev/video1 (or another index) in the launch file.
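For reference, the camera device is selected by the video_device parameter inside usb_cam-test.launch. A minimal sketch of the relevant line (assuming the default usb_cam launch file layout) looks like this:

<param name="video_device" value="/dev/video1" />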

Step 2

The camera_calibration module will already be installed for Melodic. To make sure all the dependencies for the package have been installed, run:

$ rosdep install camera_calibration

Once all the dependencies have been installed, you can run the camera_calibration node by passing in the required parameters. To learn what each parameter means, see this page. (You'll also need a checkerboard to perform the calibration; --size is the number of interior corners of the board and --square is the side length of one square in metres.)

$ rosrun camera_calibration cameracalibrator.py --size 7x7 --square 0.045 image:=/camera/image_raw camera:=/camera

Once the node is up and running, position the camera in front of the board and start moving the board sideways, then vertically, then rotate it so that the X, Y & Skew bars are sufficiently filled. When they are, the calibrate button lights up (which means you have enough data to begin the calibration process) and you can click on it.

The calibration process takes some time, and then the calibration results are printed on the terminal, looking something like this:
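The exact numbers will be different for every camera; the block below is only an illustrative sketch (with made-up values) of the oST-format text that cameracalibrator.py prints:

# oST version 5.0 parameters

[image]

width
640

height
480

[narrow_stereo]

camera matrix
652.345678 0.000000 318.123456
0.000000 651.987654 242.654321
0.000000 0.000000 1.000000

distortion
0.123456 -0.254321 0.001234 0.000987 0.000000

rectification
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000

projection
660.123456 0.000000 317.000000 0.000000
0.000000 659.123456 243.000000 0.000000
0.000000 0.000000 1.000000 0.000000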

Copy the results, paste them into a text editor, and save the file with a .ost extension.

Step 3

Now, we need to convert the .ost file to a .yaml file. To do that, enter the following in the terminal:

$ rosrun camera_calibration_parsers convert <filename>.ost <filename>.yaml

Once that is done, a new .yaml file with the specified filename will be created.
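As a rough sketch (again with illustrative numbers, and a camera name that may differ on your system), the converted file follows the standard camera_info calibration layout:

image_width: 640
image_height: 480
camera_name: narrow_stereo
camera_matrix:
  rows: 3
  cols: 3
  data: [652.345678, 0.0, 318.123456, 0.0, 651.987654, 242.654321, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.123456, -0.254321, 0.001234, 0.000987, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [660.123456, 0.0, 317.0, 0.0, 0.0, 659.123456, 243.0, 0.0, 0.0, 0.0, 1.0, 0.0]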

Step 4

When you run the usb_cam node, it publishes two important topics that will be subscribed to by your orb_slam2_ros node: /camera/image_raw and /camera/camera_info. The latter is the topic that sends your camera parameters to the orb_slam2_ros node, so you need to make your usb_cam node publish the parameters from your .yaml file on that topic. To do so, enter the following commands in your terminal:

$ roscd usb_cam
$ cd launch
$ sudo nano usb_cam-test.launch

The nano text editor will open your launch file. Inside the usb_cam node tag, add the following parameter, with the path pointing to your .yaml file:

<param name="camera_info_url" value="file:///home/<your_username>/<your_file_address>/<your_filename>.yaml" />

PS: Also change the node name in the launch file to 'camera', so that the topics are published under /camera/….
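Put together, the camera node in the edited launch file might look roughly like this. This is only a sketch: keep whatever other usb_cam parameters your launch file already has, and note that the resolution, pixel format, and file path below are assumptions you should adapt to your setup.

<node name="camera" pkg="usb_cam" type="usb_cam_node" output="screen">
  <param name="video_device" value="/dev/video0" />
  <param name="image_width" value="640" />
  <param name="image_height" value="480" />
  <param name="pixel_format" value="yuyv" />
  <param name="camera_info_url" value="file:///home/<your_username>/<your_file_address>/<your_filename>.yaml" />
</node>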

Save the file and close it. Your camera and its parameters are now successfully set up.

Setting up your orb_slam2_ros node

The next steps are quite simple. cd into your catkin workspace and enter the following commands:

$ cd src
$ git clone https://github.com/appliedAI-Initiative/orb_slam_2_ros.git
$ cd ..
$ catkin_make
$ source devel/setup.bash

Now create a new .launch file in your orb_slam_2_ros/ros/launch directory, paste the following code into it, and save it.

<launch>
  <node name="orb_slam2_mono" pkg="orb_slam2_ros" type="orb_slam2_ros_mono" output="screen">
    <param name="publish_pointcloud" type="bool" value="true" />
    <param name="publish_pose" type="bool" value="true" />
    <param name="localize_only" type="bool" value="false" />
    <param name="reset_map" type="bool" value="true" />
    <!-- static parameters -->
    <param name="load_map" type="bool" value="false" />
    <param name="map_file" type="string" value="map.bin" />
    <param name="voc_file" type="string" value="$(find orb_slam2_ros)/orb_slam2/Vocabulary/ORBvoc.txt" />
    <param name="pointcloud_frame_id" type="string" value="map" />
    <param name="camera_frame_id" type="string" value="camera_link" />
    <param name="min_num_kf_in_map" type="int" value="5" />
    <!-- ORB parameters -->
    <param name="/ORBextractor/nFeatures" type="int" value="2000" />
    <param name="/ORBextractor/scaleFactor" type="double" value="1.2" />
    <param name="/ORBextractor/nLevels" type="int" value="8" />
    <param name="/ORBextractor/iniThFAST" type="int" value="20" />
    <param name="/ORBextractor/minThFAST" type="int" value="7" />
    <!-- Camera parameters -->
    <!-- Camera frames per second -->
    <param name="camera_fps" type="int" value="30" />
    <!-- Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale) -->
    <param name="camera_rgb_encoding" type="bool" value="true" />
    <!-- If the node should wait for a camera_info topic to take the camera calibration data -->
    <param name="load_calibration_from_cam" type="bool" value="true" />
  </node>
</launch>

Now run catkin_make again in your catkin workspace directory.

Creating a node to display images

This is an important step because, when your orb_slam2_ros node runs, it publishes the live camera image annotated with the currently found key points and a status text. This is published on its debug_image topic (here /orb_slam2_mono/debug_image). We cannot view those images directly, so we will create a ROS node that displays them for us.

Create a new Python file inside one of your packages and paste the following code into it. Make sure that the package has the cv_bridge dependency; if not, add it to your package (for example, a <depend>cv_bridge</depend> entry in its package.xml).

#!/usr/bin/env python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(data):
    # Convert the incoming sensor_msgs/Image to an OpenCV BGR image and display it
    frame = bridge.imgmsg_to_cv2(data, "bgr8")
    cv2.imshow('Video Feed', frame)
    cv2.waitKey(1)
    rospy.loginfo('Image feed received!')

def listener():
    rospy.init_node('vid_rec')
    # The first parameter is the topic you want to subscribe to for sensor_msgs/Image messages
    rospy.Subscriber('/orb_slam2_mono/debug_image', Image, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()

Save the file and close it. Don't forget to make your Python script executable by running the following command:

$ chmod +x <your_python_script_name>.py

Now run catkin_make in your catkin workspace directory and then source devel/setup.bash again. You are almost done here!

Running it all together

Make sure your camera is connected to your PC. Run each of the following commands in new terminals.

$ roscore
$ roslaunch usb_cam usb_cam-test.launch
$ roslaunch orb_slam2_ros <your_launch_filename_you_just_created>.launch
$ rosrun <your_package_name> <your_python_script_name>.py
$ rviz

All of these must run without any errors. If there are errors, run rqt_graph to check whether the nodes are communicating with each other. When the RViz window opens, add the tf and PointCloud2 displays, and specify the correct topic name (/map_points) for the PointCloud2 display to subscribe to.
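For example, you can bring up the node graph from another terminal with:

$ rosrun rqt_graph rqt_graph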

Once all of that has been set up, you can see the point clouds appearing and the TF frame of the camera link moving in your RViz window. Feel free to explore the GitHub page containing the source code to use all of the other functionality of this amazing package. Thanks to the authors of ORB-SLAM2 (Raul Mur-Artal, Juan D. Tardos, J. M. M. Montiel and Dorian Galvez-Lopez) and to the developers of the orb_slam2_ros package. You are now ready to use this package in your next ROS project.

If you have any questions, doubts, or problems, please feel free to write in the comment section or contact me personally. Also, if you find any difficulty in following this written tutorial, please do let me know. Thanks and bye! Until next time…
