A Journey from Introduction to Artificial Intelligence to Self-Driving Cars

Luis Vivero
Jul 31, 2018 · 9 min read

When I was 10 years old, my favorite TV program was Whiz Kids, a science-fiction show about four high-school teens who solved crimes with the help of a homemade supercomputer. Back then, it inspired me to decide that I wanted to become a software engineer and build my own robot someday. In 1997, I graduated from Tec de Monterrey with a degree in Computer Systems Engineering, but for various reasons I couldn't go on to the master's degree in Artificial Intelligence that I had hoped for. Instead, I started my professional career working for different companies in the telecom industry, building a fairly successful career in mobile application development that continues to this day.

Intro to AI

In July of 2011, I saw a post a friend of mine had shared on Google+ inviting everyone to join a free online course created by two distinguished professors at Stanford: Introduction to Artificial Intelligence, by Peter Norvig and Sebastian Thrun. In just a few weeks, more than 160,000 people from all around the world had registered. The course lasted 10 weeks, and we learned the basics of statistics, uncertainty and Bayes networks, machine learning, logic and planning, robot motion planning, natural language processing, and a few other fundamentals of Artificial Intelligence.

Just after it ended in December, I received an invitation to enroll in the first official course made by Udacity, once again taught by Professor Sebastian Thrun: Programming a Robotic Car, which lasted 3 months. In that course, we learned about basic methods in Artificial Intelligence, including probabilistic inference, planning and search, localization, tracking, and control, all with a focus on robotics. Since then, I have been an avid Udacity student, which has given me the opportunity to take many online courses and Nanodegree programs developed by top engineers from Google and other high-tech companies. For me, it was like taking classes at Stanford, MIT, or any other highly recognized university in computer science.

Self-Driving Cars

Another great opportunity came in August of 2016, when I saw an invitation to apply to be part of the first cohort of the Self-Driving Car Engineer Nanodegree. I was fortunate enough to be selected from more than 11,000 applicants around the world and to be among the first 500 students who would start the program in October of that same year. The program was amazing; it consisted of three terms of about 3 months each, in which we implemented the following projects:

Term 1: Computer Vision and Deep Learning

  • Finding Lane Lines on the Road. The goal of this project was to write code that identifies lane lines on the road, first in an image and later in a video stream. My 13-year-old daughter watched all the lessons required to complete this project with me and solved the quizzes alongside me. We later found that this was the easiest project in the entire program, but I was very excited to take all these lessons with her and proud that she answered every quiz correctly. I'm sure she's ready to take the Intro to Self-Driving Cars Nanodegree!
Finding Lane Lines on the Road
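For readers curious what this first project looks like in code, here is a minimal sketch of the classic OpenCV approach to the problem: grayscale, blur, Canny edge detection, a region-of-interest mask, and a Hough transform. The parameter values are purely illustrative, not the ones from my submission.

```python
# Minimal lane-line sketch: grayscale -> blur -> Canny -> region mask -> Hough lines.
import cv2
import numpy as np

def find_lane_lines(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region in front of the car.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                         (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Detect line segments and draw them on a copy of the frame.
    lines = cv2.HoughLinesP(masked, 2, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    output = image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(output, (x1, y1), (x2, y2), (0, 0, 255), 5)
    return output
```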
  • Traffic Sign Classifier. This was the third project, and by far the most challenging of the term. I didn't want to use the recommended GPU server configuration available to Udacity students on Amazon Web Services, so I remember spending several weekends and after-hours sessions just setting up the CUDA libraries so I could train on the GPU of my old 2011 MacBook Pro instead of its Intel graphics card. In the end, it was very rewarding to finally get my deep network to successfully classify all the different traffic signs in the dataset.
Traffic Sign Classifier
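To give a flavor of what a sign classifier can look like, here is a minimal Keras sketch of a small convolutional network. It is not my project's architecture; it assumes 32x32 RGB sign images and 43 classes, as in the German Traffic Sign dataset, and the training call is only indicated with hypothetical arrays.

```python
# Minimal convolutional classifier sketch (illustrative layer sizes only).
import tensorflow as tf

def build_sign_classifier(num_classes=43):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_sign_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
```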
  • Behavioral Cloning. This was a really fun project to work on, since it was the first time we used Udacity's track simulator. We needed to train a deep neural network on driving data recorded from a human driver so the car would drive the way a person would.
Behavioral Cloning
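The idea behind behavioral cloning is simple: treat driving as a regression problem that maps a camera frame directly to a steering angle. Below is a minimal Keras sketch of such a model; the 160x320 input size, cropping values, and layer sizes are assumptions for illustration, not my actual network.

```python
# Sketch of a behavioral-cloning model: a CNN that regresses a steering angle
# directly from a camera frame.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)),
    tf.keras.layers.Cropping2D(cropping=((60, 25), (0, 0))),  # drop sky and hood
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")
# model.fit(images, steering_angles, epochs=5, validation_split=0.2)
```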
  • Advanced Lane Finding. For this project, our goal was to write a software pipeline that identifies the lane boundaries in a video from a front-facing camera on a car. We used techniques such as distortion correction and gradient thresholding to make the lane-finding algorithm more robust.
Advanced Lane Finding
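Here are two of the building blocks mentioned above, sketched with OpenCV: distortion correction using a previously computed camera matrix, and a Sobel-based gradient threshold. The threshold values are illustrative, and camera_matrix and dist_coeffs are assumed to come from an earlier chessboard calibration step.

```python
# Two pipeline building blocks: undistortion and an x-gradient threshold.
import cv2
import numpy as np

def undistort(image, camera_matrix, dist_coeffs):
    # camera_matrix / dist_coeffs would come from cv2.calibrateCamera.
    return cv2.undistort(image, camera_matrix, dist_coeffs)

def gradient_threshold(image, thresh=(30, 100)):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # gradient along x
    scaled = np.uint8(255 * np.abs(sobel_x) / np.max(np.abs(sobel_x)))
    binary = np.zeros_like(scaled)
    binary[(scaled >= thresh[0]) & (scaled <= thresh[1])] = 1
    return binary
```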
  • Vehicle Detection and Tracking. Similar to the previous one, our goal for this project was to write a software pipeline that identifies vehicles in a video from a front-facing camera on a car. Another challenging but fun project!
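A common approach to this problem at the time combined HOG features with a linear SVM and a sliding-window search over each frame. The sketch below shows only the feature-extraction and training half of that idea; it is not my project's code, and car_patches and notcar_patches are hypothetical lists of 64x64 grayscale patches.

```python
# Classic HOG + linear SVM sketch for vehicle detection.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_hog(patch):
    # One HOG feature vector per 64x64 grayscale patch.
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

features = np.array([extract_hog(p) for p in car_patches + notcar_patches])
labels = np.array([1] * len(car_patches) + [0] * len(notcar_patches))

classifier = LinearSVC()
classifier.fit(features, labels)
# A sliding-window search would then feed each window through extract_hog
# and classifier.predict to mark vehicle detections in the frame.
```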

Term 2: Sensor Fusion, Localization and Control

  • Extended and Unscented Kalman Filters. Sensor fusion engineers from Mercedes-Benz showed us how to program fundamental mathematical tools called Kalman filters. For this project we implemented an Extended Kalman Filter in C++. These filters can predict and estimate, in real time, the location of other vehicles on the road, along with how certain that estimate is!
Kalman Filters
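At the heart of every Kalman filter is a two-step predict/update cycle. The project itself was an Extended Kalman Filter in C++, but the linear version below is a minimal Python sketch of that cycle, using the usual matrices: state x, covariance P, transition F, process noise Q, measurement z, measurement matrix H, and measurement noise R.

```python
# Minimal predict/update cycle of a linear Kalman filter.
import numpy as np

def predict(x, P, F, Q):
    x = F @ x                       # project the state ahead
    P = F @ P @ F.T + Q             # project the uncertainty ahead
    return x, P

def update(x, P, z, H, R):
    y = z - H @ x                   # measurement residual
    S = H @ P @ H.T + R             # residual covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y                   # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```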
  • Kidnapped Vehicle. Localization is how we determine where our vehicle is in the world. GPS is great, but it's only accurate to within a few meters. To achieve single-digit centimeter-level accuracy, engineers from Mercedes-Benz demonstrated the principles of Markov localization and showed us how to program a particle filter, which uses sensor data and a map to determine the precise location of a vehicle. In this project, we built a particle filter and combined it with a real map to localize a kidnapped vehicle.
Kidnapped Vehicle
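Conceptually, a particle filter is very simple: move a cloud of candidate poses with the motion model, weight each one by how well it explains the sensor readings given the map, and resample. The sketch below shows one such cycle; motion_model and measurement_likelihood are hypothetical callables standing in for the real motion and map-based measurement models.

```python
# Skeleton of one particle-filter cycle: predict, weight, resample.
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_model, measurement_likelihood):
    # 1. Prediction: propagate each particle through the (noisy) motion model.
    particles = np.array([motion_model(p, control) for p in particles])

    # 2. Update: weight each particle by the likelihood of the measurement.
    weights = np.array([measurement_likelihood(measurement, p) for p in particles])
    weights /= np.sum(weights)

    # 3. Resampling: draw particles in proportion to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```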
  • PID Controller. In this project, we implemented a proportional-integral-derivative (PID) controller in C++ to maneuver the vehicle around the Udacity simulator track. Ultimately, a self-driving car is still a car, and we need to send steering, throttle, and brake commands to move the car through the world.
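The controller itself fits in a few lines: the steering command is a weighted sum of the cross-track error (P), its accumulated sum (I), and its rate of change (D). A minimal sketch, with purely illustrative gains:

```python
# Minimal PID controller for steering from the cross-track error (cte).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def control(self, cte, dt):
        self.integral += cte * dt
        derivative = (cte - self.previous_error) / dt
        self.previous_error = cte
        return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

steering_pid = PID(kp=0.1, ki=0.001, kd=2.0)  # illustrative gains only
# steering = steering_pid.control(cte, dt)    # called once per telemetry message
```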
  • Model Predictive Control. This was another really fun project to implement. The goal was to drive the car around the track, once again using the Udacity simulator; this time, however, we were not given the cross-track error, so we had to calculate it ourselves, and we had to account for an additional 100-millisecond latency between actuation commands on top of the connection latency. Really fun!
Model Predictive Control
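One detail I found instructive is how that latency is commonly handled: before handing the current state to the optimizer, you propagate it forward by the latency using a simple kinematic model, so the optimizer plans from where the car will actually be when the actuation takes effect. A rough sketch, where Lf (the distance from the center of mass to the front axle), the throttle-as-acceleration shortcut, and the steering sign convention are all assumptions for illustration:

```python
# Propagate the state forward by the actuation latency with a kinematic model.
import math

def predict_state_after_latency(x, y, psi, v, steering, throttle,
                                latency=0.1, Lf=2.67):
    x += v * math.cos(psi) * latency
    y += v * math.sin(psi) * latency
    psi += (v / Lf) * steering * latency   # sign depends on the simulator's convention
    v += throttle * latency                # treating throttle as a rough acceleration
    return x, y, psi, v
```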

Term 3: Path Planning, Concentrations and System Integration

  • Path Planning. This was by far the most exciting but also the most challenging project in the entire Nanodegree program, at least in my opinion. We needed to design a path planner able to create smooth, safe paths for the car to follow along a 3-lane highway with traffic. To achieve that, we first applied model-driven and data-driven approaches to predict how other vehicles on the road would behave. Then we constructed a finite state machine to decide which of several maneuvers our own vehicle should undertake. And finally, we generated a safe and comfortable trajectory to execute that maneuver. I spent countless hours and weekends implementing this project, but in the end, it was very rewarding to watch the car cruise safely along the simulator's highway in traffic. Completing this project also gave me the opportunity to participate in the Bosch Path Planning Challenge!
Path Planning
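To make the behavior layer concrete, here is a toy version of the kind of finite state machine I described, reduced to lane keeping and lane changes. A full planner would typically also score candidate maneuvers with cost functions before committing to one; this sketch only shows the state transitions.

```python
# Toy finite state machine for the behavior layer of a highway path planner.
def next_state(state, ahead_is_slow, left_lane_free, right_lane_free):
    if state == "KEEP_LANE":
        if ahead_is_slow and left_lane_free:
            return "CHANGE_LEFT"
        if ahead_is_slow and right_lane_free:
            return "CHANGE_RIGHT"
        return "KEEP_LANE"
    # A lane change returns to lane keeping once the maneuver completes.
    if state in ("CHANGE_LEFT", "CHANGE_RIGHT"):
        return "KEEP_LANE"
    return state
```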
  • Semantic Segmentation. This was an elective project in which we learned about semantic segmentation and inference optimization, which are active areas of deep learning research. The objective of the project was to label the pixels of a road in images using a Fully Convolutional Network (FCN). This time I used a GPU from Amazon Web Services and worked on a challenging dataset: the Cityscapes dataset, which has fine image annotations for 29 classes of objects. The images are video frames captured in several German cities, around 11 GB in total. This sample comes from the Cityscapes dataset:
Semantic Segmentation
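The defining property of an FCN is that it has no fully connected layers: an encoder of ordinary convolutions, a 1x1 convolution down to per-class scores, and transposed convolutions to upsample back to pixel resolution. The Keras sketch below only shows that shape of the idea; it is not my project's network, and the layer sizes are arbitrary.

```python
# Minimal fully convolutional network: conv encoder, 1x1 conv, upsampling decoder.
import tensorflow as tf

def build_fcn(num_classes, input_shape=(None, None, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(num_classes, 1, padding="same")(x)  # 1x1 conv to class scores
    x = tf.keras.layers.Conv2DTranspose(num_classes, 4, strides=2, padding="same")(x)
    x = tf.keras.layers.Conv2DTranspose(num_classes, 4, strides=2, padding="same")(x)
    outputs = tf.keras.layers.Softmax(axis=-1)(x)                  # per-pixel class probabilities
    return tf.keras.Model(inputs, outputs)

model = build_fcn(num_classes=2)  # e.g. road vs. not-road
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```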
  • Functional Safety. This was also an elective project, where we learned about functional safety frameworks that help ensure vehicles are safe at both the system and component levels. We created a report that included a hazard and risk analysis, safety concepts, and engineering safety requirements.
Functional Safety
  • Programming a Real Self-Driving Car. This was the capstone of the entire Self-Driving Car Engineer Nanodegree program. In this project we ran our code on Carla, the Udacity self-driving car, and the Robot Operating System (ROS) that controls her. We worked in a team of Nanodegree students to combine what we had learned over the course of the entire program and drive Carla around the Udacity test track.
Steven Welch kindly showed CARLA, Udacity's self-driving car, to me and my family. Thanks Steven!
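For readers unfamiliar with ROS, the system is organized as a set of nodes that talk to each other over topics. The toy node below shows the basic pattern (subscribe to a pose topic, publish a command at a fixed rate); the topic names, message types, and rate are illustrative, not the ones from the actual project.

```python
# Toy ROS node in the spirit of the capstone architecture.
import rospy
from std_msgs.msg import Float64
from geometry_msgs.msg import PoseStamped

class ToyController:
    def __init__(self):
        rospy.init_node("toy_controller")
        self.pose = None
        rospy.Subscriber("/current_pose", PoseStamped, self.pose_cb)
        self.steer_pub = rospy.Publisher("/toy/steering_cmd", Float64, queue_size=1)

    def pose_cb(self, msg):
        self.pose = msg

    def run(self):
        rate = rospy.Rate(50)  # publish commands at a fixed rate, e.g. 50 Hz
        while not rospy.is_shutdown():
            if self.pose is not None:
                self.steer_pub.publish(Float64(0.0))  # a real node would compute this
            rate.sleep()

if __name__ == "__main__":
    ToyController().run()
```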

On November 30th there was a graduation celebration attended by about 300 students, many of whom had traveled from other countries just for this event. It was amazing to meet Sebastian Thrun in person and have the chance to chat with him for a few minutes. Another great thing was getting to know other outstanding people from the Udacity staff, like David Silver, Ryan Keenan, Steven Welch, and Andy Brown, as well as other students from the program whom I had only known before from chats on the forums or Slack. It was a moving graduation celebration, and I was lucky enough that Sebastian could sign my old certificate from Udacity's first online course: CS373, Programming a Robotic Car.

CARLA — Udacity’s Self-Driving Car at the parking lot
Sebastian Thrun — Udacity’s founder, President & CEO of Kitty Hawk
David Silver — Udacity’s Self-Driving Car Program Curriculum Lead
A very Happy graduate!
Sebastian was kind enough to sign my certificate from CS373: Programming a Robotic Car, the first online course offered by Udacity
With Andy Brown, curriculum lead for CS373: Programming a Robotic Car, and now also for the Flying Car Nanodegree program.

The day after the Graduation Celebration, I visited Udacity's headquarters in Palo Alto, CA, with my family.

The future of Self-Driving Car Engineering is here ;-)

What’s Next?

A few weeks after graduating from the Self-Driving Car Engineer program, we received an invitation to apply for the Flying Car Nanodegree program. This was also an amazing course, which I just completed this July, and I highly recommend it to anyone who would like to learn about autonomous flight and drone robotics. There's even a Free Classroom Preview!


Luis Vivero

Software Engineer, Mobile Applications Developer, and Self-Driving Car enthusiast.