M2M Day 201: Testing my self-driving car on roads it’s never seen before

Max Deutsch
2 min read · May 21, 2017


This post is part of Month to Master, a 12-month accelerated learning project. For May, my goal is to build the software part of a self-driving car.

Yesterday, I finished training my self-driving car model on the Udacity dataset. Today’s job was to try to visualize the result:

While I expected the results to be good, they exceeded my expectations: the predicted steering is quite natural and not too jittery. I'm very pleased with the outcome.

A few things to note:

  • Originally, I thought there were two Udacity datasets: a training dataset (which I used yesterday) and a testing dataset (which I accidentally used for training a few days ago). Today, I realized that the testing dataset is actually a subset of the training dataset, so I decided to use some of the Nvidia data for testing instead. The important thing here is that the model was trained on the Udacity dataset and tested on completely new terrain from the Nvidia dataset. In other words, the model works well on roads outside of its training set (which is essential if you want a universally functional self-driving car).
  • In order to properly simulate the output of the Udacity model, I needed to do two things: 1. Map the Udacity model's output into the range of values usable by the Nvidia simulator (the Nvidia model uses degrees as units, while the Udacity dataset ranges from -1 to 1), and 2. Perform some minor pre-processing on the Nvidia testing set (i.e. cropping) so the Udacity model could consume it. (A rough sketch of both steps appears after this list.)
  • During testing, the script rendered images of a steering wheel rotated by the predicted steering angle, and I then overlaid these renders on top of the original, uncropped Nvidia footage for a slightly wider view (also sketched below).
  • At around 40 seconds into the video, the car comes to a full stop and then makes a sharp right turn. The car seems to start turning before the visuals indicate that it's meant to go right (the car could have gone straight, after all), so I'm not really sure how this happens. The Udacity dataset doesn't have any knowledge of this particular turn. The only reasonable explanations are that the model recognized it was in a turn lane, or that the model is simply more predictive than a human. Either way, this was a bit surprising, but pretty cool to see.
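
Here's a minimal Python sketch of those two adaptation steps. The exact scale factor and crop region aren't given in the post, so the numbers below are illustrative assumptions (the 66×200 input size comes from Nvidia's end-to-end driving paper):

```python
import cv2

# Assumed conversion: Udacity labels span [-1, 1], while the Nvidia
# simulator works in degrees. The true scale factor isn't stated in
# the post; 25 degrees per unit is just a plausible placeholder.
UDACITY_TO_DEGREES = 25.0

def to_degrees(udacity_angle):
    """Map a Udacity-style steering value in [-1, 1] to degrees."""
    return udacity_angle * UDACITY_TO_DEGREES

def preprocess(frame):
    """Crop and resize an Nvidia test frame for the Udacity-trained model.

    The crop box is a guess (the post only says "cropping"); the 66x200
    target size matches the Nvidia model's expected input.
    """
    cropped = frame[-150:, :]  # keep the road, drop most of the sky
    return cv2.resize(cropped, (200, 66)) / 255.0
```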
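
And a similarly hedged sketch of the steering-wheel overlay: rotate a wheel image by the predicted angle with OpenCV and paste it onto a corner of the uncropped frame. The placement and sign convention here are my guesses, not the post's:

```python
import cv2

def overlay_wheel(frame, wheel, angle_degrees):
    """Rotate a steering-wheel image by the predicted angle and paste it
    into the bottom-left corner of the original (uncropped) frame.

    `wheel` is assumed to be a square grayscale image; negating the angle
    makes a positive (rightward) prediction turn the wheel clockwise.
    """
    h, w = wheel.shape[:2]
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), -angle_degrees, 1.0)
    rotated = cv2.warpAffine(wheel, rotation, (w, h))
    frame[-h:, :w] = cv2.cvtColor(rotated, cv2.COLOR_GRAY2BGR)
    return frame
```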

I’m nearly done with this month’s challenge: I just need to train the model on throttling and braking, which, I suspect, will be virtually identical to the way the model was trained on steering angle (after all, steering angle, throttling, and braking are all just represented by arbitrarily-defined numbers).
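
As a rough illustration of why I expect this to carry over, here's a sketch (my own, not the actual project code) of an Nvidia-style network whose head regresses three scalars (steering, throttle, brake) instead of one, trained under the same mean-squared-error loss:

```python
import tensorflow as tf

# Minimal Keras sketch: keep the Nvidia-style convolutional stack
# unchanged and simply widen the output layer from one regressed
# scalar (steering) to three (steering, throttle, brake).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(66, 200, 3)),  # Nvidia-style input frame
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3),  # [steering, throttle, brake]
])
model.compile(optimizer="adam", loss="mse")
```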

Read the next post. Read the previous post.

Max Deutsch is an obsessive learner, product builder, guinea pig for Month to Master, and founder at Openmind.

If you want to follow along with Max’s year-long accelerated learning project, make sure to follow this Medium account.
