Unboxing DeepRacer: The latest self-driving toy car from Amazon
Today at AWS re:Invent 2018 in Las Vegas, Amazon released a self-driving toy car that lets developers try out reinforcement learning for autonomous vehicles.
As a participant in the DeepRacer workshop, I got to build my own model and try it on a DeepRacer car on the racing track at the MGM Speedway, the big arena where everyone is competing in the first DeepRacer League. I was also among the first group of people to take a DeepRacer home.
In this blog, I will discuss what DeepRacer is and why machine learning / IoT enthusiasts should be interested in it.
What is DeepRacer?
DeepRacer is the latest self-driving toy car, driven by reinforcement learning. Andy Jassy, the CEO of Amazon Web Services, announced DeepRacer in this morning's keynote session.
DeepRacer originated from the question: “Can we (AWS) help developers get rolling with reinforcement learning? (literally)”
Each DeepRacer car is equipped with an Intel Atom processor, a front-facing HD video camera, an accelerometer, a gyroscope, suspension, and wheels.
Inside the DeepRacer, a convolutional neural network extracts features from the camera image. Those features are then fed into the reinforcement learning model trained by the user, which infers the best course of action to take next.
What can we do with DeepRacer?
AWS built DeepRacer so that users can focus solely on training the model with reinforcement learning. The inference step described above, running camera images through the convolutional neural network, is handled automatically by the car itself.
To train the reinforcement learning model, AWS provides a tool in the AWS console where we write Python code for the reward function and specify training hyperparameters such as batch size, learning rate, and entropy.
The reward function is the most important part of building a great self-driving car. If you are interested in reinforcement learning, I suggest reading more about DeepRacer’s reward function in this guide from Amazon.
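To give a feel for what writing a reward function looks like, here is a minimal sketch of one that rewards the car for staying near the center line. The parameter names (`track_width`, `distance_from_center`) follow the examples in the AWS guide; the exact thresholds below are just illustrative choices, not anything prescribed by AWS.

```python
def reward_function(params):
    """Reward the car for staying close to the track's center line."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line at 10%, 25%, and 50% of track width.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0   # hugging the center line: full reward
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely off track: near-zero reward

    return float(reward)
```

The key idea is that the function maps the car's current state to a number every step, and the training process gradually learns actions that accumulate the most reward over a lap.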
We can watch a real-time simulation in the AWS console while the model trains. After training completes, we can evaluate the model by running it on a digital version of the race track of our choice.
Once we are satisfied with the model we built, we can download it onto a USB drive, plug the drive into the real DeepRacer car, and give it a go.
AWS DeepRacer League — the autonomous car competition
AWS announced a competition for people to build the best DeepRacer models. The DeepRacer League will be held many times throughout the year, and the top teams get invited to re:Invent to compete in the Championship Cup.
This year, AWS set up a race course in the big arena where anyone can give DeepRacer a try using their own trained model or a pre-built one. Each person gets four minutes to run the DeepRacer car around the track as fast as possible.
Wrapping up: My experience with DeepRacer
I found DeepRacer to be a very interesting way to learn how to build a self-driving car and to learn about reinforcement learning. Unlike the standard workflow of training a supervised learning model, reinforcement learning forced me to think differently about how to approach the problem.
The only issue I have right now is figuring out how to set up my own track at home or at the office. Hopefully Amazon will release a pre-built track that is easy to lay out on the floor.
You can find more information about building RL models for DeepRacer on the AWS GitHub. You can also watch this short preview video from AWS:
I am planning to get more hands-on with the DeepRacer after the conference. Feel free to share any tips or tricks you find for building a better reward function!
At Servian, we design, deliver and manage innovative data & analytics, digital, customer engagement and cloud solutions that help our customers sustain competitive advantage. If you need any help from us in these areas, feel free to ask!