You don’t need lots of data! (Udacity Behavioral Cloning)

An Nguyen
4 min read · Feb 1, 2017

--

This post aims to give a better understanding of balancing data when training the car model in the Udacity Behavioral Cloning project.

Intro: Drive the car around a track to collect data, then train a neural network on it so that the car can eventually predict the steering angle by itself.

Track 1

Challenge topic: Data!

I am certain I am not the only one who found data to be the real challenge of this project. There are many posts out there describing the whole training process, but I'll focus on the data topic alone here, since there seems to be a common misconception that big data is the only way to make this model work. If you read this, it may save you a 50-hour tuning trip.

For this project, assuming you already have a working neural network architecture (Nvidia, comma.ai, VGG16, or your own), you have nailed 20% of it. The other 80% is your data. Just like anything we do, the longer we practice, the better we get, because we feed hour after hour of data into our brain memory/muscle memory. It's the same for a neural net: the more variety of data you train your network on, the better the model is at the task.

So let's assume you would need to drive the same road 100 times beautifully to collect a massive number of samples to make your model work. Instead, you only drive 8–10 times with 3 cameras (center, left, right) and wonder, "Hmm, can I make this work?"

The answer is YES!

Data Augmentation

Thanks to computer vision and statistics, you can rotate, translate, flip, shift, or darken/brighten your data however you want with a few lines of cv2 code. Why, though? Because by doing this you can generate the many kinds of environments you think the car could end up in. For example:

  • I collect data on a bright, beautiful day; what if it's dark or raining when my autonomous car drives down a street? That wouldn't be good, because my car could get confused. So I should apply a range of random brightness to all of my training images by adjusting the V channel in HSV. Then there will be a mix of dark and bright images to train on.
  • Do I want to flip the image? Probably; it will look like the car is driving in the opposite direction with the opposite steering angle. But which images should I flip? How about flipping a coin and letting the coin decide? Then I will have a range of random data with opposite directions to train on.
  • Do I want to rotate the image? Good idea; it may transfer well to a hilly environment even though we collected data on a flat one.

Those are just a few examples of what you can do with data augmentation to deliberately generate as much randomized data as possible; a minimal sketch of the brightness and flip augmentations follows below.
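Here is a rough sketch of those two augmentations in Python/OpenCV. The brightness range and the 50/50 coin flip are my own illustrative choices, not values from the original project:

```python
import cv2
import numpy as np

def random_brightness(image):
    # Convert to HSV and scale the V (brightness) channel by a random factor,
    # so some training images look darker and some look brighter.
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float64)
    ratio = 0.25 + np.random.uniform()            # assumed range [0.25, 1.25)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * ratio, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

def random_flip(image, angle):
    # Flip the image horizontally half the time and negate the steering angle,
    # so the sample looks like driving the track in the opposite direction.
    if np.random.randint(2) == 0:
        return cv2.flip(image, 1), -angle
    return image, angle
```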

Balance is the Key

I live in the U.S. Midwest, where the roads are pretty flat and straight to drive on. But that doesn't mean we never have to turn once in a while. It's the same in this project: the training track is mostly flat, straight driving with light steering. But there are also a few sharp turns that can throw the car off the track if it doesn't pay attention.

The problem is that too much straight-driving data with limited turning data isn't a very safe way to teach the car how to drive, especially if I aim to train it on a small original dataset with random augmentation. Hence, I can either down-sample the straight-driving data or increase the left- and right-turning data so that all aspects are balanced.
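One simple way to down-sample straight driving is to keep only a fraction of the near-zero-steering samples. This is an illustrative sketch; the 0.05 threshold and the keep probability are hypothetical knobs, not values from the project:

```python
import numpy as np

def downsample_straight(samples, straight_threshold=0.05, keep_prob=0.3):
    # samples: list of (image_path, steering_angle) pairs from the driving log.
    # Keep every turning sample, but keep only ~30% of near-zero-steering ones.
    balanced = []
    for path, angle in samples:
        if abs(angle) > straight_threshold or np.random.uniform() < keep_prob:
            balanced.append((path, angle))
    return balanced
```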

To do so, I can take advantage of the LEFT and RIGHT camera images. When the car turns left, I can use its right-camera images and add a small adjustment to the current steering angle so that the car turns a little more when it finds itself in that kind of position. Similarly, in the opposite direction, I can do the same with the left-camera images when the car turns right. By applying this, I add some RECOVERY into my dataset so the car can recover when it is in such sharp turns.
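A sketch of the usual side-camera correction is below. The 0.25 offset is a hypothetical value to tune, and the sign convention assumes positive angles steer right, as in the Udacity simulator; the post applies this idea specifically to turning frames, but the adjustment logic is the same:

```python
CORRECTION = 0.25  # hypothetical adjustment angle; tune it for your own model

def side_camera_angles(center_angle):
    # The left camera sees the car as if it were shifted left of center, so it
    # should steer a bit more to the right; the right camera is the opposite.
    left_angle = center_angle + CORRECTION
    right_angle = center_angle - CORRECTION
    return left_angle, right_angle
```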

The Powerful fit_generator()

At first, I didn't like this one very much compared to model.fit() in Keras, mostly because it was kind of confusing to understand. Now that I understand it, I love it, especially for my case of having little original training data. To use fit_generator(), you need a generator to feed batches into the model. The way we apply the generator with augmentation is:

Generator > pick a batch of random images from the original training data > apply random augmentation > feed the batch to the model to train > once the model is done training on that batch, discard the data > repeat the process

With random augmentation applied directly in the generator, it is very unlikely that any two batches fed into fit_generator() are the same. So you always have fresh data to train on without using up a lot of your GPU/CPU memory.
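A minimal sketch of such a generator, assuming the augmentation helpers sketched earlier and the Keras 1.x-era fit_generator() signature this post dates from:

```python
import cv2
import numpy as np

def training_generator(samples, batch_size=128):
    # samples: list of (image_path, steering_angle) pairs from the driving log.
    # Runs forever: each batch is drawn at random and augmented on the fly,
    # then discarded once the model has trained on it.
    while True:
        images, angles = [], []
        for _ in range(batch_size):
            path, angle = samples[np.random.randint(len(samples))]
            image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
            image = random_brightness(image)           # random darker/brighter
            image, angle = random_flip(image, angle)   # coin-flip mirror
            images.append(image)
            angles.append(angle)
        yield np.array(images), np.array(angles)

# Keras 1.x-style call (signature of that era):
# model.fit_generator(training_generator(train_samples, 128),
#                     samples_per_epoch=9000, nb_epoch=20)
```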

For my extra write-up about fit_generator(), see here: https://medium.com/@fromtheast/implement-fit-generator-in-keras-61aa2786ce98#.9ivzotkz7

I applied what I described above to a balanced dataset of 9,000 samples, training for 20–25 epochs with 9,000 samples per epoch, and here is the result:

Track 1:

Track 2:

(Reference: https://github.com/ancabilloni/SDC-P3-BehavioralCloning)
