Behavioral Cloning — Transfer Learning with Feature Extraction

Here you can find my experience with the third project of the amazing Udacity Self-Driving Car Nanodegree program. The goal of the project is to teach a car to drive along a track using data collected from human driving. We were provided with a simulator that works in two modes: training, to collect data, and autonomous, to check the car's behavior.

We learned about Transfer Learning right before P3, and I was excited about the Transfer Learning with Feature Extraction approach. Basically, you take an existing network and tune its top layers to accomplish your goal. As a software engineer, I like reuse, so I decided to choose this technique for my project. Usually this approach is chosen when your network's task is similar to the base network's, and I was not sure feature extraction would suit this one. Still, I hoped my approach would work. You can find my project here.

A few words about the intuition behind my decision. We have a neural network pre-trained on the ImageNet competition data that successfully identifies objects in images. A road is an object, so the network should be able to identify the road as well. My task was then to “tell” the network to drive in the middle of the road.

Some benefits of this technique:

  1. I do not need a lot of data, so the Udacity data set with ~8,000 examples should be more than enough.
  2. Thanks to the frozen weights and the small number of images (I use ~400), training time is significantly reduced, so I was able to experiment a lot with my car and analyze its behavior.
  3. I had a small hope that the pre-trained network would help me generalize to track 2 without heavy data augmentation. Unfortunately, I still had to add brightness augmentation to generalize to track 2.

I chose VGG16 as the base model for feature extraction. It performs well and at the same time is quite simple. Moreover, it has something in common with the popular NVIDIA and comma.ai models. On the other hand, using VGG16 means you have to work with color images, and the minimum input size is 48x48.
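In feature extraction, the base network's convolutional weights are frozen and only a small regression head on top is trained. A minimal Keras sketch of this setup; the input shape, head size, and optimizer here are my illustrative choices, not necessarily the exact values used in the project:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model


def build_model(input_shape=(80, 160, 3), weights="imagenet"):
    """Frozen VGG16 base + a small trainable head for steering regression.

    Shapes and layer sizes are illustrative, not the project's exact values.
    """
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    for layer in base.layers:  # freeze all convolutional weights
        layer.trainable = False
    x = Flatten()(base.output)
    x = Dense(128, activation="relu")(x)
    steering = Dense(1)(x)  # single regression output: the steering angle
    model = Model(inputs=base.input, outputs=steering)
    model.compile(optimizer="adam", loss="mse")
    return model
```

Only the two `Dense` layers at the top are updated during training, which is what makes the short training times mentioned above possible.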

It is important to use balanced data for this project. Since I use only ~400 images, I need an almost perfectly balanced data set. To reach this goal I use recovery images from the left/right cameras and flip every image I have. The distribution of my training data looks like this.
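A sketch of how this balancing can look, assuming a fixed steering-angle correction for the side cameras (the 0.25 offset is an illustrative guess, not the project's value):

```python
import numpy as np

STEER_CORRECTION = 0.25  # illustrative offset applied to side-camera angles


def augment_sample(center_img, left_img, right_img, angle):
    """Expand one simulator sample into six balanced examples:
    three camera views plus their horizontal flips."""
    images = [center_img, left_img, right_img]
    # the left camera sees the car too far right, so steer a bit more right,
    # and vice versa for the right camera
    angles = [angle, angle + STEER_CORRECTION, angle - STEER_CORRECTION]
    # flipping an image mirrors its steering angle, balancing left/right turns
    images += [np.fliplr(img) for img in images]
    angles += [-a for a in angles]
    return images, angles
```

Because every angle appears together with its negation, the resulting angle distribution is symmetric around zero.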

Because of VGG16's minimum input size and my computer's memory limitations, I had to use a region-of-interest mask instead of cropping to remove unnecessary information from the image.
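A minimal sketch of such a mask: instead of cutting rows off, the pixels outside the road region are zeroed, so the image keeps the size the network expects. The row fractions below are illustrative guesses, not the project's exact values:

```python
import numpy as np


def region_of_interest(image, top=0.35, bottom=0.9):
    """Zero out pixels above `top` (sky, trees) and below `bottom`
    (car hood) while preserving the original image dimensions."""
    h = image.shape[0]
    lo, hi = int(h * top), int(h * bottom)
    masked = np.zeros_like(image)
    masked[lo:hi] = image[lo:hi]  # keep only the road band
    return masked
```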

With all this, my car was able to pass the first track successfully, but it was not able to generalize to track 2. I applied random brightness to every image in the training set, and my car started to generalize to track 2. You can see a sample batch for my network.
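A minimal sketch of such brightness augmentation, assuming a simple random intensity scaling (the factor range is an illustrative guess; an HSV-based adjustment is another common choice):

```python
import numpy as np


def random_brightness(image, rng=None, low=0.4, high=1.2):
    """Scale pixel intensities by a random factor so the model sees
    both darker and brighter variants of every training image."""
    rng = rng or np.random.default_rng()
    factor = rng.uniform(low, high)
    scaled = image.astype(np.float32) * factor
    return np.clip(scaled, 0, 255).astype(np.uint8)  # stay in valid pixel range
```

Darkening in particular matters here, since the second track is noticeably darker than the first.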

Though my car has not fully generalized to the second track so far (it only passes the first 5–6 turns), I got answers to all the ‘why’s I had before this project. And I still believe my car should be able to pass the second track with the minimal changes I applied to my model; I just need to try harder.


Here is a list of techniques I found useful for this project:

  1. Double-check that your code does what you expect (a Python notebook is useful for this). Shame on me, but I spent some time training my car on totally black images!
  2. Yes, I know you have read a lot and know many techniques that should help your car drive along the track. You want to apply them all at once and get a self-driving car right away. Do not do this! It is better to tune your model step by step, analyzing the car's behavior. You will know what led you to success, what works for your model and what does not, and tuning becomes easy and predictable!
  3. Balance your data! It is the key point.
  4. Once your model is good, validation loss is a good indicator of a better model. Before that, it is useful to add callbacks and save your model after each epoch; the saved snapshots let you analyze how your model's behavior evolves. You can find nice instructions in this post.
  5. At some point I realized that I was using the ‘Fantastic’ graphics mode in the simulator instead of ‘Fastest’. Yeah, I was blind. Luckily, I noticed it when my car passed the place after the bridge and then went off the track. I re-ran my car in ‘Fastest’ mode and it passed the track right away. Now I think ‘Fantastic’ mode is a nice mode for analyzing car behavior: it is easier to notice bad behavior there. When the car can pass the first turn or the place after the bridge in ‘Fantastic’ mode, there is a high probability it will drive amazingly well in ‘Fastest’ mode. Though this is just an idea; I cannot confirm it.
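The per-epoch saving from point 4 can be sketched with Keras's `ModelCheckpoint` callback (the filename pattern and training call are illustrative):

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Save a snapshot after every epoch so each one can be test-driven
# in the simulator; epoch number and validation loss go in the name.
checkpoint = ModelCheckpoint(
    "model-{epoch:02d}-{val_loss:.4f}.h5",
    monitor="val_loss",
    save_best_only=False,  # keep every epoch, not just the best one
)

# model.fit(X_train, y_train, validation_split=0.2,
#           epochs=10, callbacks=[checkpoint])
```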