Self-Driving Car Engineer Diary — 3

Andrew Wilkie
2 min read · Jan 3, 2017


Visualisation of self-taught ‘Weights’ (left) aka the distilled ‘knowledge’ of a DNN model.

Tue, 3/Jan/2017

Hi. I dived into Part 1 of Deep Learning over the last two weeks. The course covered a lot of material across 7 lessons, defined with succinct, well-commented, executable code … just how I like it! Even then, I found myself creating Google Sheets of the various calculations to feel really comfortable before moving on to the next concept. Here is an extremely condensed list of the many learning highlights …

1. Simple Neural Networks
Components of a simple linear classifier network.

Simple model with a linear transformation and a softmax converting the class scores into probabilities, resulting in a single ‘one-hot’ label.
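That linear-transformation-plus-softmax pipeline can be sketched in a few lines of numpy. The weights, bias, and input below are made-up numbers purely for illustration:

```python
import numpy as np

def softmax(scores):
    # Shift by the max score for numerical stability before exponentiating.
    exps = np.exp(scores - np.max(scores))
    return exps / np.sum(exps)

# Hypothetical 3-class linear classifier: weights W, bias b, input x.
W = np.array([[0.5, -0.2],
              [0.1,  0.8],
              [-0.3, 0.4]])
b = np.array([0.1, 0.0, -0.1])
x = np.array([1.0, 2.0])

scores = W @ x + b                      # linear transformation: Wx + b
probs = softmax(scores)                 # class probabilities, summing to 1
one_hot = np.eye(3)[np.argmax(probs)]   # 'one-hot' label for the top class
```

The softmax squashes arbitrary scores into probabilities, and the argmax picks the single winning class as a one-hot vector.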

2. Deep Neural Networks

How a network knows if it needs to keep learning or not.
Forward network flow (top, L to R) then partial derivative Backpropagation flow (bottom, R to L).

Minimising cost (loss) is, in effect, the network teaching itself. It loops repeatedly, searching for the lowest mean-squared-error (MSE) of the classification result: first it processes a forward pass through the model, then a backward pass applies Backpropagation and Stochastic Gradient Descent, scaled by a learning rate (step), to see the impact on cost of a small change in the output of layer 1.

3. Convolutional Neural Networks

Filters stride across each layer, grouping and condensing like items until the model can classify an object in an image.
A ConvNet in 1998 ‘recognising’ handwritten digits … amazing!
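The striding-filter idea above can be shown with a bare-bones 2D convolution. The image and kernel below are toy values chosen for illustration; a stride of 2 condenses each spatial dimension by half:

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide a filter across an image, condensing each patch into one value."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)   # element-wise multiply, then sum
    return out

image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 'image'
kernel = np.ones((2, 2)) / 4                      # simple averaging filter
fmap = convolve2d(image, kernel, stride=2)        # 2x2 condensed feature map
```

Stacking layers of such filters (with learned kernels rather than this fixed one) is what lets a ConvNet group and condense features until it can classify the whole image.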

Next task: the LeNet model will be the basis for tackling Project 2: Traffic Sign Classifier. In the meantime, why not have a play with your own neural network? Have fun!

Tinker with a TensorFlow network in your browser!
