Steering a Self-Driving Car without LIDAR

Self-driving cars use a variety of techniques to detect their surroundings, such as radar, laser light, GPS, odometry, and computer vision. Sensory information is used to identify navigation paths, obstacles and road signs. Autonomous cars have control systems capable of analysing sensory data to distinguish between different cars on the road, detect lane lines and predict steering angles.

Sensors such as LIDAR are expensive. Velodyne, a leading maker of laser-based LiDAR, sells the mechanical spinning LiDAR devices used in prototype robotic cars for $8,000. Is it possible to use simple camera images and other affordable sensory data to make an autonomous car steer itself along a path?

The idea behind the Behavioral Cloning project is to train a deep neural network to clone driving behaviour and predict steering angles. Udacity's simulator is used to collect training data and to test model performance. The training data consists of a series of images from three cameras (centre, left and right) and the corresponding measurements for throttle, speed, steering angle and brake.

Udacity Simulator
Training image from simulator

For the image on the left, the corresponding steering angle measurement was recorded while driving the car in the simulator. Training data was recorded over four laps of the track.
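For reference, the simulator writes a driving_log.csv alongside the captured frames. Below is a minimal loading sketch, assuming the usual column layout of centre/left/right image paths followed by steering, throttle, brake and speed; the variable names are illustrative, not taken from my actual code:

```python
import csv
import cv2

# Assumed driving_log.csv layout:
# center_path, left_path, right_path, steering, throttle, brake, speed
samples = []
with open('driving_log.csv') as f:
    for row in csv.reader(f):
        samples.append(row)

images, steering_angles = [], []
for center_path, left_path, right_path, steering, throttle, brake, speed in samples:
    images.append(cv2.imread(center_path))   # 160 x 320 x 3 frame
    steering_angles.append(float(steering))  # the label we want to predict
```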

Each image has dimensions 160 x 320 x 3 (an RGB image) and is used to train a deep neural network for steering angle prediction. Before I jump into the model architecture, let me show you a video of the car driving itself in the simulator.

Behavioral Cloning Track Simulation

The CNN architecture that I used is inspired by Nvidia's architecture:

Data Preprocessing

Training images obtained from the simulator have dimensions 160 x 320 x 3 (RGB). Since the steering angle is zero for the majority of the time while driving the simulator, I randomly removed 80 percent of the samples with a zero steering angle. This is necessary to avoid a bias towards zero during prediction. Each RGB image is also cropped and converted into the HLS colour space for better prediction. Cropping helps remove noise from the data, since the steering angle depends far more on the perception of the road and its turns than on the sky and scenery. Needless to say, the model performs better after this preprocessing.
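A rough sketch of that preprocessing is shown below. The exact crop boundaries (rows 60 to 140) and the helper names are assumptions for illustration, not the exact values from my implementation:

```python
import random
import cv2

def drop_zero_steering(images, angles, keep_prob=0.2):
    """Randomly discard roughly 80% of samples whose steering angle is zero."""
    kept_images, kept_angles = [], []
    for img, angle in zip(images, angles):
        if angle == 0.0 and random.random() > keep_prob:
            continue
        kept_images.append(img)
        kept_angles.append(angle)
    return kept_images, kept_angles

def preprocess(img):
    """Crop away the sky and the car bonnet, then convert to HLS colour space."""
    cropped = img[60:140, :, :]                      # assumed crop bounds
    return cv2.cvtColor(cropped, cv2.COLOR_RGB2HLS)  # HLS works better here than raw RGB
```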

The data can still be biased after collection if a lap is dominated by turns in one direction (left or right). To avoid this, cv2.flip was used to augment the data and generate additional samples. The left and right camera images were used for recovery, with a steering correction factor of 0.20.
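A minimal sketch of that augmentation, assuming a hypothetical helper that takes one simulator sample at a time; the 0.20 correction and the horizontal flip are exactly the ideas described above:

```python
import cv2

CORRECTION = 0.20  # steering correction applied to the side cameras

def augment(center_img, left_img, right_img, angle):
    """Return (image, angle) pairs generated from a single simulator sample."""
    samples = [
        (center_img, angle),
        (left_img, angle + CORRECTION),   # recovery: steer back towards the centre
        (right_img, angle - CORRECTION),
    ]
    # Horizontal flips (with negated angles) balance out a lap biased towards one turn direction
    flipped = [(cv2.flip(img, 1), -a) for img, a in samples]
    return samples + flipped
```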

Neural Network Architecture and Hyperparameters

I decided to start with a simple architecture after exploring comma.ai's and Nvidia's architectures. The idea is to start simple and add complexity only if required. The architecture is the same as the diagram posted above. A dropout of 5% is applied to avoid overfitting, and every layer is followed by a RELU activation to introduce non-linearity.
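As a rough Keras sketch of such an Nvidia-inspired network: the filter counts and layer sizes below are assumptions borrowed from Nvidia's published end-to-end model rather than my exact final configuration, and the cropping is done in-model here, equivalent to the cropping described in the preprocessing section:

```python
from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense, Dropout

model = Sequential()
# Normalise pixel values to roughly [-0.5, 0.5] and crop off the sky and bonnet
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
model.add(Cropping2D(cropping=((60, 20), (0, 0))))
# Convolutional stack loosely following Nvidia's end-to-end architecture, all RELU-activated
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dropout(0.05))                  # 5% dropout to curb overfitting
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))                       # single output: the predicted steering angle
```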

MSE is used as the loss function and Adam as the optimizer; I found this blog really helpful for understanding the different gradient descent optimization algorithms. Training ran for five epochs with a validation split of 20 percent. After five epochs my training loss was around 3 percent and validation loss was around 5 percent.
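Putting the training step together in Keras might look like the sketch below, where X_train and y_train are hypothetical NumPy arrays holding the preprocessed images and steering angles:

```python
# Compile with MSE loss and the Adam optimizer, then train for five epochs
# holding out 20 percent of the data for validation.
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, y_train,          # hypothetical training arrays
          validation_split=0.2,
          shuffle=True,
          epochs=5)
model.save('model.h5')               # saved model used to drive the simulator
```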

Conclusion and Discussion

Although the project was challenging, the joy of watching the car drive on its own in the simulator made the effort worthwhile. It is important to collect training data carefully, and I am unsure how effective this technique would be in low-light conditions. Needless to say, deep learning, as a class of machine learning algorithms, is showing great promise, primarily because it gets results. There are plenty of techniques still to be explored, but this is a great start.

Here is a link to my GitHub repo with the Keras implementation of the above code and a write-up.

References

Udacity Self-Driving Car Engineer Nanodegree.

CS231n course and Andrej Karpathy’s video lectures on convolutional neural networks.

My GitHub repo: https://github.com/linux-devil/behavioral-cloning

Nvidia architecture

Comma.ai architecture

Behavioral cloning project