Autonomous Journey through Term 2 of Self-Driving Car Nano-degree with Udacity

Hey Folks,

I am back after a lot of cool experiences and stupendous learning about Self-Driving Cars. I am privileged to share with you today that, after some really astonishing checkpoints, I have completed the 2nd lap of this breathtaking race, the Self-Driving Car Nano-degree (SDCND).

Yes, I have completed the 2nd Term of Udacity’s SDCND program.

Term 2 Completion - Self Driving Car Nano-degree

And I have already enrolled in the final lap of this race, Term 3 of this program, which I will be starting in Jan 2018.

About to start final Lap of this race

Acknowledgements:

I am really very grateful and thankful to the almighty God, my financial sponsors, David Silver, Nehal Soni, my classroom mentor Martijn de Boer, Udacity CarND staff, my friends/guides Jeremy Shannon, Subodh Malgonde, Mithi, Davinder Chandhok, Oleg Potkin, Param Aggarwal, the entire student and mentor community of Udacity, my current employer, colleagues, friends, family and each & everyone who has made this journey possible and such a success for me. I can’t be thankful enough for all the love, blessings, encouragement and support I have received to pursue this program and follow my passion.

So what was Term 2 all about, you ask?

Well, it was about Perception and Control of Autonomous Vehicles. Let’s first talk about how we humans drive a vehicle on the road. The first thing we do is see our surroundings: the road we are supposed to drive on, other vehicles on the road, pedestrians, other objects, right?

Of course, if we want our vehicle to be intelligent enough to drive itself, we have to give it these capabilities of perceiving the environment. So how should we help our vehicle on this front?

Sensor Fusion to the rescue:

We use sensors like Radar and Lidar to observe what is happening around us. We then apply Bayesian Filtering techniques to the data collected by our sensors to help our vehicle track objects and predict their next state. The Bayesian Filters I used for object tracking (Pedestrian/Bicycle/Other Vehicles) are the Extended Kalman Filter and the Unscented Kalman Filter. Based on the previous state of the system and the current observation, these filters are capable of predicting the future state of the object we are tracking. Kalman filters are highly efficient at compensating for process and measurement noise and at estimating the actual system state in real time from the inputs we provide - the past state and the current measurement. Over and above that, these filters also give us an accurate way to fuse together data from different kinds of sensors (Radar and Lidar, for example) which measure different quantities of the physical system, and to convert them into the required prediction (vehicle position and velocity in our case). Check out the simulator output of tracking a bicycle using Kalman Filters below:

Unscented Kalman Filter - Object Tracking

https://github.com/vishalrangras/P6-Extended-Kalman-Filter

https://github.com/vishalrangras/P7-Unscented-Kalman-Filter
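To give a flavour of what these filters do under the hood, here is a minimal sketch of the classic linear Kalman filter predict/update cycle using Eigen. The EKF and UKF projects generalize exactly these two steps to non-linear process and measurement models; the matrix names follow the usual textbook convention and are illustrative assumptions, not the exact project code.

```cpp
// Minimal linear Kalman filter predict/update sketch (Eigen).
// The EKF/UKF projects replace these linear steps with their
// non-linear counterparts; names here are illustrative only.
#include <Eigen/Dense>

struct KalmanFilter {
  Eigen::VectorXd x;  // state estimate, e.g. [px, py, vx, vy]
  Eigen::MatrixXd P;  // state covariance
  Eigen::MatrixXd F;  // state transition model
  Eigen::MatrixXd Q;  // process noise covariance
  Eigen::MatrixXd H;  // measurement model (maps state to measurement space)
  Eigen::MatrixXd R;  // measurement noise covariance

  // Predict the next state from the motion model alone.
  void Predict() {
    x = F * x;
    P = F * P * F.transpose() + Q;
  }

  // Correct the prediction with a new sensor measurement z.
  void Update(const Eigen::VectorXd& z) {
    Eigen::VectorXd y = z - H * x;                        // innovation
    Eigen::MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
    Eigen::MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
    x = x + K * y;
    Eigen::MatrixXd I = Eigen::MatrixXd::Identity(x.size(), x.size());
    P = (I - K * H) * P;
  }
};
```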

So now that we can track our surrounding vehicles, pedestrians and objects, what do we need next to make our Vehicle Autonomous?

Do we need to know where we are and where we need to go? Yes, that would certainly help us make driving decisions.

But how do we do that? Another Bayesian Filter? Yes, of course, why not.

So which filter this time, you ask? Well, it’s called a Particle Filter and it is based on Markov Localization. What it really does is give us a belief of where our vehicle is on a given map, based on observation data, control information and a feature map. The observation data is in terms of the Vehicle Coordinate system, which we need to convert into Map Coordinates in order to successfully localize our vehicle on the map. Once we know where we are localized on the map, we can take control decisions to reach our desired destination. Of course it is more complex than it sounds: we need to accommodate Path Planning strategies, which I will learn about in Term 3, and we actually need to actuate the vehicle with the help of our control system.
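As a small illustration of that coordinate conversion, transforming a single observation from the vehicle frame to map coordinates is just a 2D rotation by the particle’s heading followed by a translation by the particle’s position. This is only a sketch with made-up names, not the project code:

```cpp
// Sketch: transform one landmark observation from the vehicle's
// coordinate frame into map coordinates, given a particle's pose
// (x_p, y_p, theta). Names are illustrative, not project code.
#include <cmath>

struct MapPoint { double x; double y; };

MapPoint VehicleToMap(double x_p, double y_p, double theta,
                      double x_obs, double y_obs) {
  MapPoint m;
  // Rotate by the particle heading, then translate by its position.
  m.x = x_p + std::cos(theta) * x_obs - std::sin(theta) * y_obs;
  m.y = y_p + std::sin(theta) * x_obs + std::cos(theta) * y_obs;
  return m;
}
```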

Localization and Particle Filter in a Nutshell:

In a particle filter, we start with a map of our system which is filled with a large number of particles. Each particle is initially given equal weight, which means equal preference throughout the map. Each particle denotes the probability of the vehicle being at the same location as that particle.

As our observation sensors record measurements and our control actuators perform actuations over time, we use this data and the previous state of our system to compute the weight of each particle, and hence its probability. The particles which most likely express the location of our vehicle are assigned higher weights, while the other particles are assigned lower weights.

As time proceeds, with some calculation and some randomness, lower-weight particles are killed and only particles with higher weights survive. If our Particle Filter is working fine, within a very short time interval only a few high-weight particles will survive, and these will denote the location of our vehicle on the map. At this point our vehicle is localized, and from then onward we keep tracking it in real time. You can see one such simulation of Particle Filter based localization below:

https://www.youtube.com/watch?v=ZyVWLw0dPN0&feature=youtu.be

And here is a screenshot of the Kidnapped Vehicle Project of SDCND, which uses a Particle Filter implementation to localize our vehicle:

Particle Filter Implementation

https://github.com/vishalrangras/P8-Particle-Filter
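For the curious, the “survival of the fittest” step described above is typically implemented as weighted resampling: a new particle set is drawn in proportion to the weights, so high-weight particles get duplicated and low-weight ones disappear. The sketch below uses std::discrete_distribution, which is one common way to do it (the project itself may use the resampling-wheel variant); names are illustrative.

```cpp
// Sketch: resample particles in proportion to their weights.
// std::discrete_distribution draws index i with probability
// weights[i] / sum(weights). Illustrative only.
#include <random>
#include <vector>

struct Particle { double x; double y; double theta; double weight; };

std::vector<Particle> Resample(const std::vector<Particle>& particles,
                               std::mt19937& gen) {
  std::vector<double> weights;
  weights.reserve(particles.size());
  for (const auto& p : particles) weights.push_back(p.weight);

  std::discrete_distribution<std::size_t> draw(weights.begin(), weights.end());

  std::vector<Particle> resampled;
  resampled.reserve(particles.size());
  for (std::size_t i = 0; i < particles.size(); ++i) {
    resampled.push_back(particles[draw(gen)]);  // high-weight particles survive
  }
  return resampled;
}
```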

Well, this was the case where we already have the map of our environment available and we assume that it remains unchanged. However, if we don’t have the map, or the map is changing with time, then we also need to simultaneously generate the map of our environment, which then becomes a SLAM (Simultaneous Localization and Mapping) problem.

Control System - PID and MPC

So far we have seen how our autonomous vehicle can perceive its environment using Sensor Fusion and how we localize ourselves in that environment, all of it made possible by Bayesian Filters and Probabilistic Robotics. Once our intelligent system understands its environment, it can make actuation decisions to change the environment, or in our case, move within the environment. This is where the Control System comes into the picture.

PID (Proportional - Integral - Derivative) Controller: The basic idea of control theory, in a nutshell, is to compute the error in our output with respect to a reference value and then adjust the actuation such that the output becomes as close as possible to the desired reference value. For a Self-Driving Car, there are many factors which become part of the Control System and there are various control topologies available; however, let us talk about the most basic control of the car, i.e. the steering wheel. We steer our car in such a way that we stay (almost) in the center of the lane we are driving in. We need to make our intelligent car do the same.

As a measure of error, we consider the Cross Track Error (CTE), which can be thought of as the offset distance between the line passing through the center of the lane and a line passing through the center of the car. Based on the value of the Cross Track Error, our car should be able to steer such that it drives in the middle of the lane as much as possible.
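A PID steering controller is only a few lines of code: the proportional term reacts to the current CTE, the derivative term damps overshoot, and the integral term cancels any steady bias (for example, a slightly misaligned steering). The sketch below is a minimal, illustrative version; the gains are placeholders, not the values tuned for the project.

```cpp
// Sketch of a PID steering controller driven by cross-track error (CTE).
// Kp, Ki, Kd are placeholder gains; the tuned values live in the repo.
struct PID {
  double Kp, Ki, Kd;
  double prev_cte = 0.0;
  double int_cte  = 0.0;

  // Returns a steering command; callers typically clamp it to [-1, 1].
  double Steer(double cte) {
    double diff_cte = cte - prev_cte;  // approximates d(cte)/dt per time step
    prev_cte = cte;
    int_cte += cte;                    // accumulates bias over time
    return -Kp * cte - Kd * diff_cte - Ki * int_cte;
  }
};

// Example usage with placeholder gains:
//   PID pid{0.2, 0.004, 3.0};
//   double steering = pid.Steer(current_cte);
```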

More details about PID tuning and the project implementation can be found in my GitHub repository. Don’t forget to check out the Reflection.md to learn about the tuning approach.

https://github.com/vishalrangras/P9-PID-Control-Steering

MPC (Model Predictive Control): In PID control we focused only on steering control, but in MPC we also actuate/control the throttle of the vehicle. The reason we are able to do this along with steering is that MPC is more robust compared to PID and it accounts for the vehicle model and dynamics. It can account for many more parameters, including the kinematics as well as the dynamics of the vehicle, and it is capable of actuating more efficiently even at higher speeds while compensating for system lags.

In this control topology, cross track error and orientation error are used as error metrics. The controller approximates a continuous reference trajectory by means of discrete paths between actuations. A cost function is optimized in order to actuate the system such that these errors are reduced.
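A simple vehicle model commonly used for MPC is the kinematic bicycle model; one discrete time step of it looks roughly like the sketch below, where Lf is the distance from the center of mass to the front axle. Names and signatures are my own illustrative choices, not the project code.

```cpp
// Sketch: one time step of the kinematic bicycle model used as the
// motion model inside MPC. delta = steering angle, a = throttle/accel,
// Lf = distance from the center of mass to the front axle (assumed).
#include <cmath>

struct VehicleState { double x, y, psi, v; };

VehicleState StepKinematicModel(const VehicleState& s, double delta,
                                double a, double dt, double Lf) {
  VehicleState next;
  next.x   = s.x + s.v * std::cos(s.psi) * dt;   // position update
  next.y   = s.y + s.v * std::sin(s.psi) * dt;
  next.psi = s.psi + (s.v / Lf) * delta * dt;    // heading update from steering
  next.v   = s.v + a * dt;                       // speed update from throttle
  return next;
}
```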

Model Predictive Control - Autonomous Vehicle

In the above GIF, the yellow line indicates the reference trajectory while the green line indicates the MPC trajectory. More details can be found in my GitHub repo and its Reflection.md.

https://github.com/vishalrangras/P10-Model-Predictive-Control

So what next?

In the final Term of Self-Driving Car Nano-degree program, I will be learning about:

  • Behavioral Planning, Trajectory Generation and Path Planning
  • Fully Convolutional Networks, Scene Understanding and Semantic Segmentation
  • Functional Safety, Hazard Analysis and Risk Assessment
  • Autonomous Vehicle Architecture and Robot Operating System (ROS)
  • Capstone Project - System Integration Project

Please feel free to share your valuable insights, thoughts and feedback.