Deep Reinforcement Learning in Mobile Robot Navigation Tutorial — Part 1: Installation

Reinis Cimurs
4 min read · Aug 30, 2022


Deep Reinforcement Learning (DRL) has long been speculated to be able to solve all sorts of tasks in various fields. It has a rather impressive CV, ranging from keeping pendulum swings upright to playing video games directly from image input. The rule of thumb is: if a problem has a state, an action, and a way to reward that state-action pair, we can learn it. Naturally, this includes robot motion — or, specifically for our case here, wheeled mobile ground robot motion (in simulation). I think we can all agree that, given sensor data, a robot should be able to learn how to determine its next move. The math is there (I will not go into details here, but good overviews of the DRL methods used are available on the OpenAI website here and here), the development tools are there, but what about the implementation? Well, that is exactly what we will look at here: the implementation of a Twin Delayed Deep Deterministic Policy Gradient (TD3) architecture for learning mobile robot motion with PyTorch and ROS Noetic in Python.
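The state-action-reward loop described above can be sketched in a few lines of Python. Everything below (the `DummyEnv` class and its methods) is a hypothetical stand-in to illustrate the interaction pattern — it is not part of the repository:

```python
import random

class DummyEnv:
    """Hypothetical stand-in for a robot simulation environment."""

    def reset(self):
        # Return an initial state (e.g., laser readings plus a goal position).
        self.steps = 0
        return [0.0, 0.0]

    def step(self, action):
        # Apply an action and return (next_state, reward, done).
        self.steps += 1
        next_state = [random.random(), random.random()]
        reward = 1.0 if action > 0 else -1.0  # reward the state-action pair
        done = self.steps >= 10               # episode ends after 10 steps
        return next_state, reward, done

env = DummyEnv()
state, done, total_reward = env.reset(), False, 0.0
while not done:
    action = 1  # a trained policy would choose this based on the state
    state, reward, done = env.step(action)
    total_reward += reward
print(total_reward)  # 10 steps, reward 1.0 each -> 10.0
```

A DRL algorithm such as TD3 replaces the fixed `action = 1` with a neural network that maps states to actions and is updated from the collected rewards.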

As the basis for this tutorial, I will use the GitHub repository https://github.com/reiniscimurs/DRL-robot-navigation.

PART 1: The Installation

To simplify the tutorial description, I will assume some familiarity with the software used in the development of this repository, such as:

  - ROS Noetic
  - PyTorch
  - Tensorboard

These are the main dependencies for the installation of this repository. Their installation should be fairly straightforward by following their official documentation.

Assuming that all of this is set up, we can clone and install our repository. First, open a terminal and navigate to the folder where you would like to clone the repository. (Here, and everywhere else in the code, replace the <PATH_TO_FOLDER> with the absolute path to your folder)

cd <PATH_TO_FOLDER>

Clone the repository:

git clone https://github.com/reiniscimurs/DRL-robot-navigation

The repository requires compilation. To do so, navigate to the following folder and compile:

cd <PATH_TO_FOLDER>/DRL-robot-navigation/catkin_ws
catkin_make_isolated

To run the neural network training in ROS, some variables first need to be exported and sourced. This can be done by executing the following lines in the terminal:

export ROS_HOSTNAME=localhost
export ROS_MASTER_URI=http://localhost:11311
export ROS_PORT_SIM=11311
export GAZEBO_RESOURCE_PATH=<PATH_TO_FOLDER>/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch
source ~/.bashrc
cd <PATH_TO_FOLDER>/DRL-robot-navigation/catkin_ws
source devel_isolated/setup.bash

(Note: These commands set up the environment in your terminal. Remember to run them every time you open a new terminal window.)
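If you would rather not retype the exports in every new terminal, one optional convenience is to append them to your `~/.bashrc` once, so every new shell picks them up automatically. This is my own suggestion rather than part of the repository, and `<PATH_TO_FOLDER>` below is still a placeholder you must replace with your absolute path before opening a new terminal:

```shell
# Append the environment variables to ~/.bashrc (run once).
# The quoted 'EOF' writes the lines literally, without expansion.
cat >> ~/.bashrc << 'EOF'
export ROS_HOSTNAME=localhost
export ROS_MASTER_URI=http://localhost:11311
export ROS_PORT_SIM=11311
export GAZEBO_RESOURCE_PATH=<PATH_TO_FOLDER>/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch
EOF
```

After editing the placeholder in `~/.bashrc`, you still need to source the workspace (`source devel_isolated/setup.bash`) in each terminal, or add that line to `~/.bashrc` as well.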

This should be sufficient to install and source the repository so that the neural network training can run. You can start the training by navigating to the respective folder and running the train_velodyne_td3.py file:

cd <PATH_TO_FOLDER>/DRL-robot-navigation/TD3
python3 train_velodyne_td3.py

While training is running, you can follow its progress in the Tensorboard visualization by opening a new terminal and executing:

cd <PATH_TO_FOLDER>/DRL-robot-navigation/TD3
tensorboard --logdir runs

The terminal will output something similar to the following image:

[Image: Terminal output after starting tensorboard]

Copy http://localhost:6006/ (or the equivalent address from your terminal output) and paste it into your browser of choice to see the Tensorboard output. Alternatively, you can hold CTRL and simply click on the link in the terminal, and the page will open automatically.

To stop the training before it has completed, simply press CTRL+C in the terminal where the training was started. However, this will not stop all the background processes. To stop the processes running in the background, open a new terminal and execute:

killall -9 rosout roslaunch rosmaster gzserver nodelet robot_state_publisher gzclient python python3
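To confirm that everything actually shut down, you can check whether any of the main simulation processes survived. This check is my own addition, not part of the repository; `pgrep -x` matches exact process names, so it will not accidentally match the check itself:

```shell
# Check whether any core simulation processes survived the killall.
leftover=""
for p in gzserver gzclient rosmaster; do
  if pgrep -x "$p" > /dev/null; then
    leftover="$leftover $p"
  fi
done
if [ -z "$leftover" ]; then
  echo "all simulation processes stopped"
else
  echo "still running:$leftover"
fi
```

If anything is still listed, rerun the killall command above before starting a new training session, as a leftover rosmaster or gzserver will conflict with the new one.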

Once training has completed, you can test the trained neural network by executing the test script:

cd <PATH_TO_FOLDER>/DRL-robot-navigation/TD3
python3 test_velodyne_td3.py

This should be enough to run and test the neural network training in ROS simulation. However, for clean installs, there might be some things to take note of:

1. Missing Packages

In this tutorial, I have only mentioned the main dependencies, but there are also additional packages that might need to be installed separately if they are not already present on your system (numpy, squaternion, etc.). If the training does not run, check the terminal log for missing packages. The error message will look similar to:

ModuleNotFoundError: No module named '<MISSING_PACKAGE>'

To fix this error, simply install the missing package with pip:

pip3 install <MISSING_PACKAGE>

(A quick note: the package name will not always be the same as the module name displayed in the log output. If pip is unable to find the package, check whether it goes by a different name on PyPI. Here, Google is your friend — you can usually find the right package name by searching for “ModuleNotFoundError: No module named ‘<MISSING_PACKAGE>’”.)
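A few well-known examples of this import-name/package-name mismatch can be collected in a small lookup table. The `install_hint` helper below is hypothetical and just for illustration, but the mappings themselves are real pip packages:

```python
# Import name on the left, pip package name on the right.
PIP_NAME = {
    "cv2": "opencv-python",
    "PIL": "Pillow",
    "yaml": "PyYAML",
    "skimage": "scikit-image",
}

def install_hint(module):
    """Return the pip command that provides a given import name."""
    return f"pip3 install {PIP_NAME.get(module, module)}"

print(install_hint("cv2"))          # pip3 install opencv-python
print(install_hint("squaternion"))  # same name on PyPI, returned unchanged
```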

2. The training starts, but nothing is happening.

The ROS simulation environment uses 3D models that do not come bundled with the ROS installation. This means they will be downloaded the first time they are needed — that is, the first time you run the training. In that case, you may simply need to wait a while until the simulator has finished downloading them in the background.
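Gazebo (classic) caches downloaded models under `~/.gazebo/models`, so one way to see whether the download is actually progressing is to watch that folder grow. The path is an assumption about a default Gazebo setup; adjust it if your installation stores models elsewhere:

```shell
# Count cached Gazebo models; rerun to see the number grow during download.
model_dir="$HOME/.gazebo/models"
if [ -d "$model_dir" ]; then
  count=$(ls -1 "$model_dir" | wc -l)
  echo "cached models: $count"
else
  count=0
  echo "no model cache yet at $model_dir"
fi
```

If the count keeps increasing between runs, the download is still in progress and the simulation should come to life once it finishes.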

For any other issues, please submit an issue in the repository.

In Part 2, we will look at the TD3 network code in detail.


Reinis Cimurs

Research scientist interested in machine learning, robotics, and autonomous driving