Diary of a Self-Driving Car Engineer #2: Intro to NNs & MiniFlow

Entry #2 in my series “Diary of a Self-Driving Car Engineer”

Dhruv Shah
2 min read · May 25, 2018

In the next lesson, Udacity takes us into the world of Deep Learning: we get our feet wet with TensorFlow and Keras, two popular deep learning tools.

But before we get into it, the curriculum leads explain that while we have tools and platforms that take care of a lot of complex work for us, it’s important to understand exactly what these tools do and how they work.

And so Udacity’s “unofficial” MiniFlow project was created. A comprehensive lesson rather than a “full submission” project, MiniFlow is a stripped-down, basic version of Google’s TensorFlow library.

The project also deepens understanding of two core concepts: back-propagation and differentiable graphs.

Back-propagation is the process of working out how much each weight in a neural network contributed to the error, so the weights can be updated accordingly; it’s the “learning” part of deep learning.

Differentiable graphs consist of nodes that are differentiable functions (functions you can take the derivative of); TensorFlow is a framework that provides an easy way to create differentiable graphs.
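To see what that buys us, here’s a tiny hand-worked example (my own illustration, not lesson code): a two-node graph where the chain rule multiplies each node’s local derivative to get the gradient of the output with respect to a weight.

```python
import numpy as np

# A tiny differentiable graph: x -> linear -> sigmoid -> out
x, w, b = 0.5, 2.0, -1.0

# Forward pass through each node
z = w * x + b                    # linear node
out = 1.0 / (1.0 + np.exp(-z))   # sigmoid node

# Backward pass: the chain rule multiplies local derivatives
dout_dz = out * (1.0 - out)      # derivative of the sigmoid node
dz_dw = x                        # derivative of the linear node w.r.t. w
dout_dw = dout_dz * dz_dw        # gradient of the output w.r.t. w

print(out, dout_dw)              # 0.5 0.125
```

This bookkeeping is exactly what TensorFlow (and MiniFlow) automates across arbitrarily large graphs.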

A neural network is really just several weighted functions feeding into each other, which allows it to carve out complicated decision regions. For example, two nodes could each apply a linear function to the input, and a later node could combine their outputs, producing a “two-line” bounded region out of linear functions alone.
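A toy sketch of that idea (my own example, not from the lesson): two linear “which side of the line” tests, combined with a simple AND, bound the band between two lines.

```python
import numpy as np

def above(w, b, point):
    # True when the point lies on the positive side of the line w·point + b = 0
    return np.dot(w, point) + b > 0

point = np.array([0.5, 0.5])
line1 = above(np.array([1.0, 1.0]), -0.5, point)   # x + y > 0.5
line2 = above(np.array([-1.0, -1.0]), 1.5, point)  # x + y < 1.5
inside = line1 and line2                           # region between the two lines

print(inside)  # True: (0.5, 0.5) sits in the band between the lines
```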

MiniFlow Architecture & Structure

We start by writing the barebones Node class.
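Here’s a minimal sketch of what that class looks like, reconstructed from memory of the lesson, so names may differ slightly from the official code:

```python
class Node:
    """Base class for every node in the graph."""

    def __init__(self, inbound_nodes=None):
        # Nodes this node receives values from
        self.inbound_nodes = inbound_nodes if inbound_nodes is not None else []
        # Nodes this node passes its value to
        self.outbound_nodes = []
        # The computed value, set during the forward pass
        self.value = None
        # Register this node as an outbound node of each of its inputs
        for n in self.inbound_nodes:
            n.outbound_nodes.append(self)

    def forward(self):
        # Subclasses override this to compute self.value
        raise NotImplementedError
```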

As an exercise for the reader, I’ll leave the code at that. A node like this is created for each “step” in a feedforward pass. The Node class here doesn’t map directly onto a node you’d draw in a NN diagram, because a drawn node really bundles the input, the function applied to the input, and the sigmoid/post-processing done before the final output is sent on to a node in the next layer.
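To make that distinction concrete, here’s a sketch, building on the Node class above, of how one drawn “neuron” splits into two MiniFlow-style nodes (my simplification; the lesson’s versions also handle gradients):

```python
import numpy as np

class Linear(Node):
    """The weighted-sum half of a drawn neuron: w·x + b."""

    def __init__(self, inputs, weights, bias):
        super().__init__([inputs, weights, bias])

    def forward(self):
        x, w, b = (n.value for n in self.inbound_nodes)
        self.value = np.dot(x, w) + b

class Sigmoid(Node):
    """The post-processing half: squashes the sum into (0, 1)."""

    def __init__(self, node):
        super().__init__([node])

    def forward(self):
        self.value = 1.0 / (1.0 + np.exp(-self.inbound_nodes[0].value))
```

One circle in the diagram, two nodes in the graph.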

We’re then guided through implementing back-propagation and stochastic gradient descent (SGD). SGD is a variant of gradient descent that updates the weights using small random batches of data instead of the full dataset; it trades a little per-step accuracy for a dramatic reduction in training time, which matters because real-world networks of layer upon layer would otherwise take eons to train.
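As a toy illustration of SGD itself (my own example, separate from the MiniFlow code): fitting a one-weight linear model on small random batches.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + rng.normal(scale=0.1, size=100)  # true weight is 3.0

w = 0.0
learning_rate = 0.1
for epoch in range(50):
    # Sample a small random batch instead of using the whole dataset
    idx = rng.choice(len(X), size=10, replace=False)
    xb, yb = X[idx], y[idx]
    # Gradient of mean squared error with respect to w
    grad = -2.0 * np.mean((yb - w * xb) * xb)
    w -= learning_rate * grad

print(w)  # lands near 3.0
```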

Building a neural network from scratch with Python alone was a good exercise and a strong builder of fundamentals; students now understand the “under the hood” concepts of a framework like TensorFlow. In the following lessons we’ll begin to familiarize ourselves with TensorFlow and Keras to build more complex models.
