Udacity’s Self-Driving Car Nanodegree — Term 2 Review

Shahzad Raza
7 min read · Dec 16, 2017
Courtesy of Udacity’s Term 2 Lecture Material

This post reviews my take on Term 2 of Udacity’s Self-driving Car Nanodegree program. Check out my review of Term 1 here.

Term 2 kicked off for me halfway through September and I managed to complete and pass all project requirements in the latter half of November, with a couple of weeks to spare before the term deadline. The pace of this term definitely felt a little slower than the first, largely, I believe, due to some of the mathematical detail presented in the term modules as well as the use of C++ instead of Python for the term projects. First things first, let’s talk about preparing for Term 2.

Preparing for the term

As usual, David Silver has an excellent breakdown of Term 2 in his post here that I used as my starting point. This term is all about various forms of Bayesian filtering, such as Kalman filters and particle filters, as well as some control theory in the latter half of the term. The biggest change this term was the use of C++ instead of Python for completing term projects. Here are some good resources to look at prior to kicking off Term 2.

  • Udacity: C++ for Programmers. Udacity offers a short course on C++ to brush up on syntax, control flow, OOP concepts as well as vectors which are used heavily in almost all of the term projects. This course is actually part of the Term 2 material but there’s nothing stopping you from getting through it without a project deadline looming over your head :) I found this course to be a good, concise C++ refresher if you’re already somewhat familiar with the language. The best part of it was hearing Bjarne Stroustrup (the creator of C++) talk about different aspects of the language throughout the course.
  • Udacity: Artificial Intelligence for Robotics. This course, taught by Sebastian Thrun, gives a great head start into Term 2. It provides introductory material for the Kalman filter, particle filter and PID control lectures, which is then expanded upon in the Term 2 material. The quizzes in this course are done in Python, which helps with practicing concepts prior to implementing them in C++.
  • General: Some basic knowledge of probability is required for the first half of the term. This includes an understanding of uniform and normal distributions, sampling from a given probability distribution, Bayes Rule and the Law of Total Probability. I’d recommend brushing up on these concepts if you haven’t touched them in a while.

Term Breakdown

Term 2 is structured into three modules: Sensor Fusion, Localization & Control.

Sensor Fusion

The sensor fusion module kicks off with a (very) brief introduction to LIDAR and RADAR, weighing the strengths and weaknesses of each sensor type. We then jump into the basics of the Kalman Filter and how it can be used for object tracking, with some practical programming exercises in Python. I found this tutorial particularly useful in getting quickly up to speed with what a Kalman Filter does.
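The measurement and motion updates at the heart of the 1D Kalman filter fit in a few lines. Here’s a rough Python sketch in the style of the lecture quizzes (the function names are mine, not the course’s):

```python
def measurement_update(mean, var, z, z_var):
    """Combine a Gaussian prior with a noisy measurement (product of Gaussians)."""
    new_mean = (z_var * mean + var * z) / (var + z_var)
    new_var = (var * z_var) / (var + z_var)
    return new_mean, new_var

def motion_update(mean, var, u, u_var):
    """Predict step: shift the mean by the motion, add the motion uncertainty."""
    return mean + u, var + u_var
```

Starting from a very uncertain prior (large variance), a measurement of z = 5 pulls the mean almost all the way to 5 while shrinking the variance; the motion update then grows the variance again. The multi-dimensional filter in the project follows the same predict/update rhythm with matrices.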

We then move on to the Extended Kalman Filter (EKF), which allows us to model non-linear systems, and practice fusing noisy LIDAR and RADAR measurement updates to increase the accuracy of the filter when tracking an object. The first project then involves implementing an EKF in C++, using an uncertain state prediction and noisy sensor data to track an object moving around a stationary vehicle within a defined error margin. A lot of the code to communicate with the Term 2 Simulator, as well as the class definitions, is already provided as part of the project repo, so the main task here is implementing the EKF equations and tuning the filter to achieve the defined performance level. Here’s my write-up for the EKF project.
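The nonlinear part of the radar update is the measurement function, which maps the state (px, py, vx, vy) into range, bearing and range rate (ρ, φ, ρ̇); the EKF linearizes it with a Jacobian. Here’s a Python sketch of those two pieces (the C++ project structures this differently, and the names here are mine):

```python
import math

def radar_h(px, py, vx, vy):
    """Map a 2D position/velocity state into radar measurement space."""
    rho = math.hypot(px, py)              # range
    phi = math.atan2(py, px)              # bearing
    rho_dot = (px * vx + py * vy) / rho   # range rate (assumes rho != 0)
    return rho, phi, rho_dot

def radar_jacobian(px, py, vx, vy):
    """Jacobian of radar_h with respect to the state, used to linearize the update."""
    c1 = px * px + py * py
    c2 = math.sqrt(c1)
    c3 = c1 * c2
    return [
        [px / c2, py / c2, 0.0, 0.0],
        [-py / c1, px / c1, 0.0, 0.0],
        [py * (vx * py - vy * px) / c3, px * (vy * px - vx * py) / c3, px / c2, py / c2],
    ]
```

In the real filter you also have to guard against division by zero when px and py are both near zero, and normalize the φ residual into [−π, π] before applying the update.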

The next lecture deals with the math and concepts behind the Unscented Kalman Filter (UKF), which is capable of tracking non-linear moving objects with higher accuracy than the EKF. There are some great write-ups by other SDC students on Medium explaining the derivation of the EKF and UKF equations to supplement the lecture material. My recommendation would be to make notes alongside the course material for a clear understanding prior to starting the projects. The UKF project is similar to the EKF project in the sense that the class definitions and some helper functions are provided, and the main task is to properly initialize the filter, implement the predict and update methods based on the UKF algorithm, and tune the filter to achieve the desired performance. Here’s a plot showing the accuracy of the UKF I implemented for the second project.

Comparison of estimations and ground truth values using a UKF

It’s impressive to see how accurately the UKF is able to track the state of an object even when parameters, such as the yaw rate, are not observed directly by the sensors.
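The key idea behind the UKF is representing the Gaussian belief with a small set of sigma points rather than linearizing the model. Here’s a minimal numpy sketch of sigma-point generation and the corresponding weights, using the λ = 3 − n spreading heuristic from the lectures (the noise augmentation the project also requires is omitted, and this simple λ choice only stays positive for small state dimensions):

```python
import numpy as np

def sigma_points(x, P, lam=None):
    """Generate 2n+1 sigma points for a state mean x (n,) and covariance P (n, n)."""
    n = x.shape[0]
    if lam is None:
        lam = 3 - n                            # common spreading-parameter choice
    A = np.linalg.cholesky((lam + n) * P)      # matrix square root of (lam + n) * P
    Xsig = np.tile(x, (2 * n + 1, 1)).T        # every column starts at the mean
    Xsig[:, 1:n + 1] += A                      # spread out along +columns of A
    Xsig[:, n + 1:] -= A                       # and along -columns of A
    return Xsig

def sigma_weights(n, lam=None):
    """Weights that recombine the sigma points into a mean and covariance."""
    if lam is None:
        lam = 3 - n
    w = np.full(2 * n + 1, 1.0 / (2 * (lam + n)))
    w[0] = lam / (lam + n)
    return w
```

Pushing each sigma point through the nonlinear process model and recombining them with these weights gives the predicted mean and covariance, which is what replaces the EKF’s Jacobian-based prediction.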

Localization

The localization module covers Markov localization and the implementation of a particle filter, a different realization of the more general Bayes filter for recursive state estimation.

I found this module well laid out and the course notes and lecture videos did a good job of deriving the Markov localization equations and Particle Filter algorithm from first principles. The math can get a bit dense in this module so I’d recommend maintaining good notes and referring to them during the project.

The project in this module involves implementing a particle filter to localize a moving vehicle within a map of landmarks using the vehicle’s LIDAR measurements. The project repo provides less boilerplate code this time, which made the project more interesting to work on and was a good test of the concepts introduced. Here’s my take on the particle filter project.
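One step that tends to trip people up is resampling the particles in proportion to their weights. Sebastian’s “resampling wheel” from the Artificial Intelligence for Robotics course is a compact way to do it; a Python sketch:

```python
import random

def resample(particles, weights):
    """Resampling wheel: draw len(particles) samples in proportion to weights."""
    n = len(particles)
    new_particles = []
    index = random.randrange(n)   # start the wheel at a random particle
    beta = 0.0
    mw = max(weights)
    for _ in range(n):
        beta += random.uniform(0.0, 2.0 * mw)   # step around the wheel
        while beta > weights[index]:            # skip particles the step overshoots
            beta -= weights[index]
            index = (index + 1) % n
        new_particles.append(particles[index])
    return new_particles
```

Heavily weighted particles get picked many times while low-weight particles die off, which is exactly the survival-of-the-fittest behavior the filter relies on.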

Control

The final module in this term deals with two specific controllers, the Proportional-Integral-Derivative (PID) controller and Model Predictive Control (MPC).

The PID controller section was rather straightforward, with a few lectures on PID control followed by an implementation of the controller to drive a vehicle around a track in a simulator. The interesting part of this project was tuning two separate controllers, one for throttle and one for steering, to maintain speed and a smooth steering angle respectively.
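A minimal PID controller is only a few lines; here’s a rough Python sketch of the update that turns cross-track error into a steering command (the gains here are placeholders, not the values I ended up tuning):

```python
class PID:
    """Minimal PID controller; error is the cross-track error (CTE) in the project."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def control(self, error, dt=1.0):
        self.integral += error * dt                    # accumulates steady-state bias
        derivative = (error - self.prev_error) / dt    # damps oscillation
        self.prev_error = error
        # Steering opposes the error, hence the negative sign
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)
```

The proportional term alone overshoots and oscillates around the track center, the derivative term damps the oscillation, and the integral term removes any systematic bias such as steering drift.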

The Model Predictive Control section contained lectures on the kinematic motion model of a vehicle, followed by the implementation of the model in the final project of this term. The key aspects of this project are setting up the cost function to penalize various undesirable actions (e.g. sharp steering changes, over-steering) and learning how to properly handle the latency between calculating the throttle and steering commands and the vehicle actually applying them.
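The cost function is just a weighted sum over the prediction horizon; here’s a Python sketch of its structure (the weights are illustrative, not my tuned project values — choosing them is most of the work):

```python
def mpc_cost(cte, epsi, v, delta, a, v_ref=40.0):
    """Sum weighted penalties over a prediction horizon.

    cte, epsi, v: predicted cross-track error, heading error and speed per step
    delta, a:     steering and throttle actuations (one fewer than states in
                  the full project; lengths are kept independent here)
    """
    cost = 0.0
    # Reference-state cost: stay on the trajectory, pointed the right way, at speed
    for t in range(len(cte)):
        cost += 2000.0 * cte[t] ** 2 + 2000.0 * epsi[t] ** 2 + (v[t] - v_ref) ** 2
    # Penalize actuator use to avoid over-steering and aggressive throttle
    for t in range(len(delta)):
        cost += 5.0 * delta[t] ** 2 + 5.0 * a[t] ** 2
    # Penalize sharp changes between consecutive actuations for smoothness
    for t in range(len(delta) - 1):
        cost += 200.0 * (delta[t + 1] - delta[t]) ** 2 + 10.0 * (a[t + 1] - a[t]) ** 2
    return cost
```

The solver then searches for the actuation sequence minimizing this cost subject to the kinematic model; latency can be handled by predicting the state forward by the latency interval before handing it to the solver.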

This project turned out to be quite interesting; here’s a GIF of my model navigating the vehicle around the lake track in the Udacity simulator. The yellow line represents the reference trajectory and the green line the prediction horizon of the controller. Here’s my code for this project.

It’s important to run the simulator in settings that your hardware can handle for both of these projects so that the controllers are able to keep up with the simulator.

Reflections

This term was another solid learning experience with lots of interesting concepts being introduced and implemented. I do feel there is some room for improvement in this term to make it more challenging. In addition to my takeaways for Term 1, here are my thoughts on Term 2:

  • In general, the pace of this term felt slower. I spent between 10–15 hours a week this time around compared to the 20–25 hours in Term 1 and still managed to finish early. I’m not sure whether that’s because of the preparation I did for Term 2 or if Udacity decided to give us some extra time to work on the projects since we had switched to C++.
  • There was definitely more conceptual detail this term than Term 1 had to offer, which was refreshing. I was glad Udacity chose to spend time developing the detailed lecture material, particularly for the Localization module. That being said, I think it would be valuable to introduce the general Bayes filter prior to diving into the Kalman filters or particle filters, to give a better understanding of how each specific filter relates to the general framework.
  • I feel there could have been more material added to some of the topics in the term to make the projects more challenging. In particular, it would have been great to see some more complex data association techniques discussed in the particle filter lectures, reflecting what’s done in practice, as opposed to the nearest-neighbor algorithm implemented in the project.
  • Additional research is invaluable. It helps a lot to find additional resources for each of the topics being discussed in the term to get a better understanding of the material.

I’m now getting geared up for Term 3 which kicks off on January 5th. Between now and then, I’ve got the following lined up:

  • Get some hands-on experience with Fully Convolutional Networks (FCNs) used in the semantic segmentation project in Term 3.
  • Get up to speed with Robot Operating System (ROS) used in the capstone project for Term 3.

So much to do… so little time.
