M2M Day 190: The car is driving itself!

Max Deutsch
2 min read · May 10, 2017


This post is part of Month to Master, a 12-month accelerated learning project. For May, my goal is to build the software part of a self-driving car.

Yesterday, I figured out how to train my self-driving car, but I struggled to confirm that the training was actually effective.

Today, I quickly realized that the part of the program that wasn’t working only had to do with visualization. In other words, I was able to delete all the visualization code, and still successfully output the real-time steering commands for the car.
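Conceptually, what's left of the program is just a loop like this (a rough sketch with made-up file names and an assumed Keras-style model, not my exact code — the real project loads its own trained network and input size):

```python
import cv2
import tensorflow as tf  # assumes a TensorFlow/Keras model saved to disk

# Hypothetical file names for illustration only
model = tf.keras.models.load_model("steering_model.h5")
cap = cv2.VideoCapture("dashcam.mp4")

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize/normalize to whatever shape the model was trained on
    # (200x66 assumed here, matching the NVIDIA-style architecture)
    img = cv2.resize(frame, (200, 66)) / 255.0
    # Predict a single steering value for this frame and print it
    angle = float(model.predict(img[None, ...], verbose=0)[0][0])
    print(f"frame {frame_idx}: steering command = {angle:.3f}")
    frame_idx += 1

cap.release()
```

No plotting, no windows — just one steering number per frame, printed in real time.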

While these numbers are enough to get a self-driving car to execute the instructions, as a human, this output is challenging to appreciate.

Luckily, a few days ago, I figured out how to save individual processed frames to my local machine.

So, I decided to have the program output the individual frames of the input video along with a predicted steering wheel animation.
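The frame-saving part is conceptually something like this (again a rough sketch with hypothetical file names, not the exact code I'm running): for each frame, write out the dash cam image and a steering wheel graphic rotated by the predicted angle.

```python
import cv2

# Hypothetical steering wheel graphic used for the animation frames
wheel = cv2.imread("steering_wheel.png")
h, w = wheel.shape[:2]

def save_pair(frame, angle_degrees, idx, out_dir="frames"):
    # Save the processed dash cam frame as a numbered image
    cv2.imwrite(f"{out_dir}/input_{idx:05d}.png", frame)
    # Rotate the wheel graphic by the predicted steering angle and save it too
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), -angle_degrees, 1.0)
    cv2.imwrite(f"{out_dir}/wheel_{idx:05d}.png", cv2.warpAffine(wheel, rot, (w, h)))
```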

I then combined the individual frames, overlaid the two videos, and pressed play. Here’s the result…

I’m really excited about this!
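For anyone curious, the stitching and overlay step doesn't need anything fancy. Conceptually it's just something like this (an OpenCV sketch with made-up paths, not my exact pipeline): read the two frame sequences back in, blend the wheel into a corner of the footage, and write a single video.

```python
import glob
import cv2

input_paths = sorted(glob.glob("frames/input_*.png"))
wheel_paths = sorted(glob.glob("frames/wheel_*.png"))

first = cv2.imread(input_paths[0])
height, width = first.shape[:2]
writer = cv2.VideoWriter("overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (width, height))

for in_path, wheel_path in zip(input_paths, wheel_paths):
    frame = cv2.imread(in_path)
    # Shrink the wheel animation and blend it into the bottom-right corner
    # (assumes the dash cam frames are larger than 150x150 pixels)
    wheel = cv2.resize(cv2.imread(wheel_path), (150, 150))
    frame[-150:, -150:] = cv2.addWeighted(frame[-150:, -150:], 0.3, wheel, 0.7, 0)
    writer.write(frame)

writer.release()
```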

To clarify what’s happening: The program watched the low-quality dash cam footage and then autonomously animated the steering wheel based on the self-driving model I trained yesterday. In other words, the computer is completely steering this car, and doing a pretty solid job.

The next step is to learn more about the underlying code, optimize it for general use, and then see how it performs on different datasets (i.e. on different roads). I’ll start with the Udacity dataset.

I’m still not quite sure I’m ready to sleep in the back of my self-driving car just yet, but today definitely marks a big step forward.

Read the next post. Read the previous post.

Max Deutsch is an obsessive learner, product builder, guinea pig for Month to Master, and founder at Openmind.

If you want to follow along with Max’s year-long accelerated learning project, make sure to follow this Medium account.
