Behaviour Cloning using Deep Neural Network
The steps of the project are as follows:
- Data collection using the simulator, visualization of the steering-value distribution, and data augmentation.
- Normalization of the dataset and cropping a Region of Interest (ROI) from the images.
- An appropriate architecture is defined and implemented in Keras, with measures to reduce overfitting.
- Tuning the parameters of the network, e.g. using the Adam optimizer for backpropagation and Mean Squared Error (MSE) as the loss.
- Train and validate the model with a training and validation set.
- Test that the model successfully drives around the track without leaving the road.
1- Data Collection & Augmentation:
The data is collected in different stages which are as follows:
- Dataset provided by Udacity for Track 1.
- Dataset for all the curves and turns in the track.
- Dataset for all the curves and turns in the track by driving the car in the opposite direction.
- Dataset for the straight roads in the track.
i- Data Exploration:
The following histogram shows the distribution of steering values in the Udacity dataset.
To visualize it better, here is the same histogram with the near-zero steering values removed; a steering value of 0 means the car is going straight.
This histogram shows the distribution of steering values after including the datasets from all 4 sources discussed above.
This is the distribution after including the data from all 3 cameras mounted on the car.
The following histogram shows the distribution after data augmentation, including flipped images.
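The two augmentation steps above can be sketched as follows. This is a minimal NumPy sketch, not the project's actual code: the steering correction of 0.2 for the side cameras is an assumed value, and `augment_flip` simply mirrors each frame and negates its steering angle.

```python
import numpy as np

# Assumed steering correction for the left/right cameras: the left camera
# sees the road as if the car were further left, so we steer a bit harder
# to the right (+correction), and vice versa for the right camera.
CORRECTION = 0.2

def three_camera_samples(center_img, left_img, right_img, steering):
    """Turn one simulator record (3 camera frames) into 3 training samples."""
    return ([center_img, left_img, right_img],
            [steering, steering + CORRECTION, steering - CORRECTION])

def augment_flip(images, steering_angles):
    """Double the dataset by horizontally flipping each frame.

    Flipping mirrors the road, so the steering angle's sign is negated.
    """
    flipped_images = [np.fliplr(img) for img in images]
    flipped_angles = [-a for a in steering_angles]
    return images + flipped_images, steering_angles + flipped_angles

# Example: one dummy 160x320x3 record with a left turn (-0.25)
frame = np.zeros((160, 320, 3), dtype=np.uint8)
imgs, angles = three_camera_samples(frame, frame, frame, -0.25)
aug_imgs, aug_angles = augment_flip(imgs, angles)
print(len(aug_imgs), aug_angles)  # 6 [-0.25, -0.05, -0.45, 0.25, 0.05, 0.45]
```

Because every flipped sample contributes the negated angle, the augmented distribution becomes symmetric around 0, which is what the final histogram shows.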
Note that the mean of the dataset is not centered at 0 and the standard deviation is not 1, something we will deal with in our pipeline by normalizing the dataset.
ii- Data Visualization:
Here, a few of the training images are shown along with their flipped versions. As we can see, each frame contains a lot of information that may not be helpful in inferring the steering value and can also slow down the training process.
Here are a few images from the training dataset:
iii- Normalization and Cropping:
As can be seen from the histograms, the dataset is not normalized, i.e., the mean is not around 0 and the standard deviation is not around 1.
The data distribution before and after the normalization is as follows:
- Before normalization: Mean 8.184e-06, Standard Deviation 0.229
- After normalization: Mean 1.653e-18, Standard Deviation
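The effect of normalization can be illustrated with a small NumPy sketch (the input here is synthetic data, not the project's images; the technique assumed is standardization, i.e. subtracting the mean and dividing by the standard deviation):

```python
import numpy as np

# Hypothetical data standing in for the raw pixel values.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=3.0, size=10_000)

# Standardize: subtract the mean, divide by the standard deviation.
# The result has mean ~0 and standard deviation ~1.
normalized = (data - data.mean()) / data.std()

print(round(normalized.mean(), 6), round(normalized.std(), 6))
```

Just as in the table above, the mean after normalization is not exactly 0 but a tiny floating-point residue close to machine precision.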
Each image contains parts that are not very useful for training the classifier and can be removed to speed up the training process.
For example, the sky, hills, and trees remain the same across consecutive frames and do not contribute to inferring the steering values from the images.
We can remove these features from the images by defining a Region of Interest (ROI).
The normalization and cropping steps are part of the Keras model itself. The advantage is that, being part of the model, they can be parallelized on the GPU.
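In Keras, both steps can be expressed as the first layers of the model. This is a sketch under assumptions: the scaling `x/255 - 0.5` and the crop margins (60 px of sky from the top, 25 px of hood from the bottom) are common choices for this project, not values stated in the source.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Cropping2D

model = Sequential([
    # Normalize pixel values to roughly [-0.5, 0.5] inside the model,
    # so the same preprocessing runs on the GPU at train and drive time.
    Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)),
    # Crop the sky/hills from the top and the car hood from the bottom;
    # the (60, 25) margins are assumptions for illustration.
    Cropping2D(cropping=((60, 25), (0, 0))),
])
print(model.output_shape)  # (None, 75, 320, 3)
```

Because these layers are part of the model, the exact same preprocessing is automatically applied when the simulator feeds frames to the trained network.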
Here are a few images after cropping:
2- Neural Network and Training Strategy:
i- Model Architecture
The architecture is a modified form of the model used by NVIDIA and described in the paper "End to End Learning for Self-Driving Cars".
The model is implemented in Keras. The layers of the model are as follows:
- 5 convolution layers, each with its own filter count and kernel size.
- 3 Fully Connected (Dense) layers each followed by Dropouts.
- 1 Output layer.
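The layer list above can be sketched in Keras as follows. The filter counts, kernel sizes, and dense-layer widths here follow the NVIDIA paper; since the source only says the model is a "modified form" of it, treat the exact numbers as assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Lambda, Cropping2D, Conv2D,
                                     Flatten, Dense, Dropout)

model = Sequential([
    # In-model preprocessing (see the normalization/cropping section).
    Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)),
    Cropping2D(cropping=((60, 25), (0, 0))),
    # 5 convolution layers (filters/kernels as in the NVIDIA paper).
    Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
    Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
    Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    Flatten(),
    # 3 fully connected layers, each followed by dropout (keep prob 0.5).
    Dense(100, activation='relu'), Dropout(0.5),
    Dense(50, activation='relu'), Dropout(0.5),
    Dense(10, activation='relu'), Dropout(0.5),
    # 1 output layer: the predicted steering value (a regression output).
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')
print(model.output_shape)  # (None, 1)
```

Note the single linear output unit: steering is predicted as a continuous value, which is why MSE is the natural loss here.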
ii- Training Strategy:
- The dataset is very large: each image's dimension is 160x320x3. Also, during preprocessing the data type is changed to float, making the dataset even larger.
- Using a generator, we can pull pieces of data on the fly, only when we need them. The size of each piece is specified by the batch size.
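A generator of this kind can be sketched as below. This is illustrative, not the project's code: `load_image` is a placeholder for the real disk I/O, and the batch size of 32 is an assumed default.

```python
import numpy as np

def batch_generator(image_paths, steering_values, batch_size=32):
    """Yield (features, labels) batches forever, loading images
    only when a batch is actually requested."""
    def load_image(path):
        # Placeholder: real code would read and decode the file at `path`.
        return np.zeros((160, 320, 3), dtype=np.float32)

    n = len(image_paths)
    while True:  # Keras expects training generators to loop indefinitely
        indices = np.random.permutation(n)  # reshuffle every epoch
        for start in range(0, n, batch_size):
            batch_idx = indices[start:start + batch_size]
            X = np.array([load_image(image_paths[i]) for i in batch_idx])
            y = np.array([steering_values[i] for i in batch_idx])
            yield X, y

gen = batch_generator(['frame_%d.jpg' % i for i in range(100)],
                      list(np.linspace(-1, 1, 100)), batch_size=32)
X, y = next(gen)
print(X.shape, y.shape)  # (32, 160, 320, 3) (32,)
```

Only one batch of float images is ever in memory at a time, which is the whole point given the dataset sizes discussed above.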
A transfer-learning technique is used: train and test the classifier on one selected dataset at a time, then reuse the optimized weights the next time on a new dataset. This technique helped in spotting and fixing problems in the training process very efficiently.
The steps are as follows:
- Train on a dataset and save the model.
- Test the model and see which regions of the track the car has problem navigating on.
- Collect data of that region of the road and train the classifier again by using the weights from the previously trained network.
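The save/retrain loop above can be sketched with Keras weight checkpointing. The tiny `Dense` model and the file path here are stand-ins for the real network and storage location; only the save-weights / load-weights pattern is the point.

```python
import os
import tempfile
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_model():
    # Tiny stand-in; the project would rebuild its full Keras model here.
    m = Sequential([Dense(4, activation='relu', input_shape=(3,)), Dense(1)])
    m.compile(optimizer='adam', loss='mse')
    return m

weights_path = os.path.join(tempfile.mkdtemp(), 'model.weights.h5')

# Stage 1: train on the first dataset and save the model's weights.
model = build_model()
X1, y1 = np.random.rand(16, 3), np.random.rand(16)
model.fit(X1, y1, epochs=1, verbose=0)
model.save_weights(weights_path)

# Stage 2: rebuild the same architecture, load the previous weights,
# and continue training on data collected for the problem region.
model2 = build_model()
model2.load_weights(weights_path)
X2, y2 = np.random.rand(16, 3), np.random.rand(16)
model2.fit(X2, y2, epochs=1, verbose=0)
```

Because stage 2 starts from the stage-1 weights rather than from scratch, each retraining pass only has to correct behavior on the newly collected region of the track.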
Dropout is used after each Fully Connected (Dense) layer.
Dropout was proposed by Geoffrey Hinton et al. It is a technique to reduce overfitting by randomly dropping units so that the network can never rely on any single activation. Dropout encourages the network to learn redundant representations, ensuring that the information is retained even when some units are dropped.
Dropout Parameter: Keep Probability 0.5
The Adam optimizer is used to update the weights, and Mean Squared Error (MSE) is used as the loss to track the errors.
3- Testing the network:
After training the network, it is tested on the track by allowing the car to navigate autonomously and validating that the car doesn't leave the road.
Udacity provided a simulator that acts as a server, streaming image frames as features; the classifier predicts steering values as labels, and those steering values are used to navigate the car on the track.
Here is the video of the car driving autonomously on the track 1:
Here is the video of the car driving autonomously on the track 1 in opposite direction: