Self-Driving Car Engineer Diary — 5
Sat, 11/Feb/2017
Hi. We were introduced to Keras and I almost cried tears of joy. This is the official high-level library for TensorFlow and takes much of the pain out of creating neural networks. I quickly added Keras (and Pandas) to my Deep Learning Pipeline.
Armed with these new tools, I jumped into Project 3 … Behavioural Cloning. The goal was to take the Udacity car simulator and the initial training data, and create an autonomous agent that successfully drives Training Track 1 (flat, bright, meandering turns). This was similar to the open-source competition that our ai-world-car team competed in last year. The real test was to see whether your model generalised well enough to drive the ‘unseen’ Test Track 2 (mountainous, dark, sharp turns). Track 2 was a little reminiscent of the Great Robot Race.
Students started with an existing model before modifying / fine-tuning it. I decided to go with the Nvidia model, which was a great place to start (a rough Keras sketch of it appears after the list below). After listening to the experiences of students from earlier cohorts and absorbing Vivek Yadav’s excellent post (the guy is a genius … just saying), I applied the following:
1. down-sampled the most frequently logged steering angles (mostly near-zero, i.e. straight driving) to reduce the model’s bias towards driving straight (see the sketch after this list),
2. randomly shifted the front-facing driving images left and right, with corresponding steering-angle adjustments, to generate recovery training examples instead of manually recording them,
3. cropped the camera images down to the road and a low horizon, to help the model focus its learning on the important features of the image,
4. randomly applied shadows to the training images, as Test Track 2 is dark, and
5. randomly mirrored the images (negating their corresponding steering angles) to reduce the left-steering bias in the training dataset.
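To make the list concrete, here is a rough sketch of the augmentation helpers, using OpenCV and NumPy. The correction factor, shadow strength, keep probability and function names are illustrative guesses of mine, not my tuned values:

```python
# Rough sketches of the augmentation steps above (OpenCV + NumPy).
# All constants here are illustrative guesses, not my tuned values.
import cv2
import numpy as np

ANGLE_PER_PIXEL = 0.004  # assumed steering correction per pixel of shift


def downsample_straight(images, angles, threshold=0.05, keep_prob=0.3):
    """Item 1: randomly drop most near-zero-angle frames so straight
    driving doesn't dominate the training set."""
    keep = [abs(a) > threshold or np.random.rand() < keep_prob for a in angles]
    return ([img for img, k in zip(images, keep) if k],
            [a for a, k in zip(angles, keep) if k])


def shift_image(image, angle, max_shift=50):
    """Item 2: shift the image sideways and correct the steering angle,
    synthesising recovery examples without recording them manually."""
    dx = np.random.uniform(-max_shift, max_shift)
    rows, cols = image.shape[:2]
    m = np.float32([[1, 0, dx], [0, 1, 0]])
    return cv2.warpAffine(image, m, (cols, rows)), angle + dx * ANGLE_PER_PIXEL


def add_random_shadow(image):
    """Item 4: darken a random wedge of the image so the model copes with
    the shadows on Test Track 2."""
    h, w = image.shape[:2]
    x1, x2 = np.random.randint(0, w, 2)
    mask = np.zeros((h, w), dtype=np.float32)
    pts = np.array([[x1, 0], [x2, h], [0, h], [0, 0]], dtype=np.int32)
    cv2.fillPoly(mask, [pts], 1.0)
    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS).astype(np.float32)
    hls[:, :, 1] *= np.where(mask == 1.0, 0.5, 1.0)  # halve the lightness
    return cv2.cvtColor(hls.astype(np.uint8), cv2.COLOR_HLS2RGB)


def maybe_flip(image, angle):
    """Item 5: mirror the image and negate the angle half the time to
    cancel the track's left-steering bias."""
    if np.random.rand() < 0.5:
        return np.fliplr(image), -angle
    return image, angle
```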
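And here is a minimal Keras sketch of the Nvidia-style model I started from. The exact crop window, dropout rate and optimiser settings are assumptions for illustration, not necessarily my final values; item 3’s cropping is done in-model with a Cropping2D layer:

```python
# A minimal Keras sketch of the Nvidia-style model; layer sizes follow
# the Nvidia paper, other settings are assumed values for illustration.
from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense, Dropout


def build_model(input_shape=(160, 320, 3)):
    model = Sequential()
    # Normalise pixels to roughly [-0.5, 0.5]
    model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape))
    # Item 3: crop away the sky above the horizon and the bonnet below
    model.add(Cropping2D(cropping=((60, 25), (0, 0))))
    # Five convolutional layers, as in the Nvidia paper
    model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))  # curb overfitting on the small dataset
    model.add(Dense(50, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1))  # single steering-angle output
    model.compile(optimizer='adam', loss='mse')
    return model
```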
Results