Train DeepRacer model locally with GPU support

Jonathan Tse
Jul 2, 2019 · 8 min read

DeepRacer is fun. At the same time, it can get really expensive. I ran fewer than a dozen training and simulation jobs to prepare for the race at the AWS Hong Kong Summit, and I got a bill like this.

I spent more than $200 in 14 days

When I was lining up for a chance to put my model on a real car on the track during the Hong Kong Summit, I talked to several people, and they had all spent more than I did. One guy had spent around $700 in just a few weeks. DeepRacer is a really nice platform for learning reinforcement learning, but I don't think it is even remotely reasonable for anyone to spend $100 just to see how it works.

Note that the cost of RoboMaker is especially ridiculous, given that it just runs a simulated environment for SageMaker to interact with.

Luckily, Chris Rhodes has shared a project on GitHub to train the DeepRacer model on your local PC.

In this article, I will show you how to train a model locally, upload it back to the AWS DeepRacer console, and run an evaluation using the uploaded model. I will also share my experience of using a GPU to train the model faster.

Training the model

Before we start, I recommend using Linux if possible. I first tried running this on a Mac, and there are some issues you need to fix. Someone also wrote another guide for running this on macOS, but I haven't tried it so I cannot confirm whether it works. In this article, I will be using Ubuntu 18.04 LTS.

These are the steps that you need to do in order to train the model locally:

  1. Set up the Python and project environment
  2. Install Docker and configure nvidia as the default runtime
  3. Rebuild the Docker images
  4. Install minio as a local S3 provider
  5. Start SageMaker
  6. Start RoboMaker

Set up the Python environment

Follow this article from DigitalOcean to get your conda environment up and running. The reason I like conda is that you don't have to remember where you stored your virtual environments: a simple conda env list shows every environment you have ever created.

So let’s create a virtual environment

conda create --name sagemaker python=3.6
conda activate sagemaker
conda install -c conda-forge awscli

Clone the project from Github

git clone --recurse-submodules https://github.com/crr0004/deepracer.git

Install the SageMaker dependencies (under the project root)

cd deepracer
pip install -U sagemaker-python-sdk/ pandas
pip install urllib3==1.24.3 #Fix some dependency issue
pip install PyYAML==3.13 #Fix some dependency issue
pip install ipython

Install Docker and configure nvidia as the default runtime

Install Docker by following the instructions on the Docker website.

Since we want to use the GPU when training the model, we need to configure Docker to use the NVIDIA runtime by default. Refer to this guide to finish the setup, or simply run the following commands:

# Update the default configuration and restart
pushd $(mktemp -d)
(sudo cat /etc/docker/daemon.json 2>/dev/null || echo '{}') | \
jq '. + {"default-runtime": "nvidia"}' | \
tee tmp.json
sudo mv tmp.json /etc/docker/daemon.json
popd
sudo systemctl restart docker

# No need for nvidia-docker or --engine=nvidia
docker run --rm -it nvidia/cuda nvidia-smi

If everything is set up correctly, nvidia-smi should print a table showing your GPU.


Rebuild the docker images with GPU support

As of now there are no pre-built official images that support GPU, so we are going to rebuild the Docker images with GPU support. You can skip this step if you aren't planning to use a GPU. If you just want to use my image, pull it:

docker pull sctse999/sagemaker-rl-tensorflow

Alternatively, you can build the image by following the steps below:

Build sagemaker-tensorflow-scriptmode:

cd sagemaker-tensorflow-container/docker/1.12.0
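The build command itself did not survive extraction. Based on the layout of the sagemaker-tensorflow-container repository, a GPU build likely looks something like the sketch below; the Dockerfile name and tag are assumptions, so check the project wiki for the exact invocation.

```shell
# Build the base TensorFlow image with GPU support. The Dockerfile name and
# the tag are assumptions; adjust them to match the repository you cloned.
docker build -t sagemaker-tensorflow-scriptmode:1.12.0-gpu-py3 \
    -f Dockerfile.gpu .
```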

You should see the new image listed when you run docker images.


Next, build sagemaker-containers:

cd sagemaker-containers

You may ask why we use the tag coach0.11-cpu-py3 rather than another name: it is the default image tag that the RLEstimator looks for. If you want to use your own image name, that's fine too, and I will show you how later.
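If you pulled my prebuilt image instead of building your own, one way to satisfy that default is to retag it; the retag step is my suggestion for wiring it up, not part of the original instructions.

```shell
# Retag the pulled image with the default name that RLEstimator looks for.
docker tag sctse999/sagemaker-rl-tensorflow \
    520713654638.dkr.ecr.us-east-1.amazonaws.com/sagemaker-rl-tensorflow:coach0.11-cpu-py3
```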

Install minio as a local S3 service

If you refer to the architecture of DeepRacer, S3 is used as a bridge for the communication between SageMaker and RoboMaker.


Therefore, we need to install minio, a local implementation of S3. While you can run minio as a Docker container, I recommend running it as a systemd service on Ubuntu; I had some bad experiences running it in a container.

Download the executable from the official site:

wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin

Put the config file in place

sudo vi /etc/default/minio

Paste the following into the file. Remember to replace <<YOUR_PROJECT_PATH>> with the correct path:

# Volume to be used for MinIO server.
MINIO_VOLUMES="<<YOUR_PROJECT_PATH>>/data"
# Access Key of the server.
MINIO_ACCESS_KEY=minio
# Secret key of the server.
MINIO_SECRET_KEY=miniokey

Set up minio as a service

curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service

Note: for Ubuntu 16.04 LTS, you need to change the user and group in the unit file to ubuntu.
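With the unit file downloaded, installing and enabling it follows the standard systemd routine; the destination path below is the usual one from the minio-service README.

```shell
# Install the unit file and start minio now and on every boot.
sudo mv minio.service /etc/systemd/system/minio.service
sudo systemctl daemon-reload
sudo systemctl enable minio
sudo systemctl start minio
```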

Go to http://localhost:9000 to check the installation. Log in using the credentials you set above (i.e. minio/miniokey) and you should see this.

Use the button on the bottom right to create a bucket

Create a bucket named bucket.
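If you prefer the command line, the same bucket can be created with the AWS CLI pointed at minio, using the credentials set in /etc/default/minio above:

```shell
# Create the bucket against the local minio endpoint instead of real S3.
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=miniokey
aws --endpoint-url http://localhost:9000 s3 mb s3://bucket
```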


Start SageMaker

Copy SageMaker config file

mkdir -p ~/.sagemaker && cp config.yaml ~/.sagemaker

Create a docker network named sagemaker-local

docker network create sagemaker-local

Check the subnet of the docker network

docker network inspect sagemaker-local
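The gateway IP of this network is the address containers use to reach minio on the host, and it is what S3_ENDPOINT_URL must point at in the next step. Assuming jq is installed, you can pull it straight out of the inspect output:

```shell
# Print the gateway IP of the sagemaker-local network (typically 172.18.0.1
# for the first user-defined bridge network).
docker network inspect sagemaker-local | jq -r '.[0].IPAM.Config[0].Gateway'
```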

Edit rl_coach/env.sh: comment out the existing S3_ENDPOINT_URL line and use the line below, where the IP is the gateway of the sagemaker-local network you inspected above.

# export S3_ENDPOINT_URL=http://$(hostname -i):9000
export S3_ENDPOINT_URL=http://172.18.0.1:9000

Source the env.sh

cd rl_coach
source env.sh

Before you start SageMaker, copy your local reward function and model_metadata.json to S3 (minio). If you want, you can edit the reward function before copying it to S3.

aws --endpoint-url $S3_ENDPOINT_URL s3 cp ../custom_files s3://bucket/custom_files  --recursive

Optionally, to use your own image created above and utilize the GPU, comment out the image_name line in rl_deepracer_coach_robomaker.py:

base_job_name=job_name_prefix,
# image_name="crr0004/sagemaker-rl-tensorflow:console",
train_max_run=job_duration_in_seconds, # Maximum runtime in second

The RLEstimator will look for the default image, which is named 520713654638.dkr.ecr.us-east-1.amazonaws.com/sagemaker-rl-tensorflow:coach0.11-cpu-py3

Run SageMaker

python rl_deepracer_coach_robomaker.py

If everything runs smoothly, you should see output like this:


At this point, SageMaker is waiting for RoboMaker to start the training.

Start RoboMaker

RoboMaker is the service that provides the simulation environment. Before we start it, we need to configure the track to be used: edit robomaker.env to choose the track you want. The default track is currently the Tokyo track; if you want to use the re:Invent track, set WORLD_NAME to reinvent_base:

# WORLD_NAME=Tokyo_Training_track
WORLD_NAME=reinvent_base

In addition, if your Docker network's gateway is different from 172.18.0.1, remember to change the S3_ENDPOINT_URL parameter as well.

S3_ENDPOINT_URL=http://172.18.0.1:9000

If everything is good, run this command in the project directory:

docker run --rm --name dr --env-file ./robomaker.env --network sagemaker-local -p 8080:5900 -it crr0004/deepracer_robomaker:console

If you see output like this, congratulations, you made it!


At the same time, you should see SageMaker outputting something like this:


You can also watch the simulation by connecting to RoboMaker with a VNC client (display 2180 corresponds to host port 5900 + 2180 = 8080, the port mapped above):

gvncviewer localhost:2180

Submit the locally trained model to DeepRacer Console

Following this tutorial, first create a 5-minute dummy training job; this creates the necessary folder structure in S3 for you. Then upload your locally trained files to that S3 bucket.
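The upload itself can be done with the AWS CLI against real S3 this time. The local path and the bucket/prefix below are placeholders; take the real ones from the dummy job in the console.

```shell
# Copy the locally trained model files into the folder structure that the
# dummy training job created. Replace the placeholders with your own values.
aws s3 cp ./data/bucket/current/model/ \
    s3://<your-deepracer-bucket>/<model-prefix>/model/ --recursive
```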


Then go back to the DeepRacer console and start the evaluation. It works.


After the evaluation finishes, you could also download the model and, supposedly, put it on a DeepRacer to run. However, because I don't have a DeepRacer vehicle yet (I ordered in November 2018 and AWS has NOT delivered it for over 7 months!), I cannot confirm whether it works. Let me know if you have done this and I will add it here so everyone knows.

Other Issues

I experienced OOM errors on both my local machine and an AWS g3s.xlarge instance. It looks like there isn't enough memory left when TensorFlow tries to allocate more. Chris said there is a memory leak.


I decided to try a p3.2xlarge, which has a Tesla V100 with 16GB of memory (I thought it was 32GB), hoping that it wouldn't hit OOM too soon and I could finish training the model before the OOM happened. Surprisingly, it looks like it wasn't a memory leak, because the V100 could run for a few hours without any issue.

So if you are experiencing OOM you have two choices:

  1. Train locally using CPU
  2. Train with a p3.2xlarge instance

Tip: you can save money by using spot instances. To create spot instances easily, use them together with an AMI, which makes spinning up a spot instance relatively painless.
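For reference, a spot request for the instance type above can be made directly with the AWS CLI; the AMI ID and key name are placeholders for your own values.

```shell
# Launch a p3.2xlarge as a spot instance. Replace the placeholders with your
# own AMI ID and key pair.
aws ec2 run-instances \
    --instance-type p3.2xlarge \
    --image-id <your-ami-id> \
    --key-name <your-key-pair> \
    --instance-market-options 'MarketType=spot'
```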

Someone on Stack Overflow said the OOM issue could be avoided by setting a smaller batch size. I tried, but it didn't work for me.

Way Forward

Fix the OOM issue

This is obviously the first priority. Let's see if the folks at AWS get this fixed soon.

Train based on an existing model

Someone has done this already. I haven't tried it, so I can't comment; hopefully I will add to this article when I do. You may find more information on the project's issues or wiki pages.

A local web console

There is still a lot of manual work involved in training a model and uploading it to the DeepRacer console. A local web console would make this much easier to use.

Credits

  1. Chris Rhodes for creating this project
  2. @legg-it, who showed me how to build the Docker images in this GitHub issue
