Deep Learning 101

Video of a neural network learning

As part of my quest to learn about AI, I generated a video of a neural network learning.

Many of the examples on the Internet use matrices (grids of numbers) to represent a neural network. This method is favoured because it is:

  • mathematically equivalent to a neural network
  • computationally faster
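To make the matrix representation concrete, here is a minimal sketch of a forward pass, where one matrix product per layer replaces looping over every individual synapse. The layer sizes (3 inputs, 4 hidden neurons, 1 output) are my assumption for illustration, not necessarily the author's exact network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed shapes for illustration: 3 inputs, 4 hidden neurons, 1 output.
rng = np.random.default_rng(42)
W1 = 2 * rng.random((3, 4)) - 1   # input -> hidden synaptic weights
W2 = 2 * rng.random((4, 1)) - 1   # hidden -> output synaptic weights

X = np.array([[0, 0, 1],
              [1, 1, 0]])         # two example inputs, one per row

# One matrix product per layer replaces looping over every synapse.
hidden = sigmoid(X @ W1)
output = sigmoid(hidden @ W2)
print(output.shape)               # one prediction per input row
```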

However, with matrices it’s difficult to understand what is happening. From a learning perspective, being able to see a neural network visually is hugely beneficial.

The video you are about to see shows a neural network trying to solve this pattern. Can you work it out?

Training Set

It’s the same problem I posed in my previous blog post. The trick is to notice that the third column is irrelevant, but that the first two columns exhibit the behaviour of an XOR gate: if exactly one of the first two columns is 1, the output is 1; however, if both columns are 0 or both columns are 1, the output is 0.

So the correct answer is 0.
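The training-set image doesn’t survive in text form, but the rule above pins it down. Assuming the seven examples are every 3-bit input except the held-back new situation [1, 1, 0] (an assumption on my part, consistent with the counts in the post), they can be generated like this:

```python
import numpy as np
from itertools import product

# Assumption: the 7 training examples are all 3-bit inputs except the
# new situation [1, 1, 0] considered later; the output is the XOR of
# the first two columns, and the third column is irrelevant.
rows = [r for r in product([0, 1], repeat=3) if r != (1, 1, 0)]
X = np.array(rows)
y = np.array([[a ^ b] for a, b, _ in rows])
print(X.shape, y.shape)  # 7 examples, 3 inputs each, 1 output each
```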

Our neural network will cycle through these 7 examples 60,000 times. To speed up the video, I will only show you 13 of these cycles, pausing for a second on each frame. Why the number 13? It ensures the video lasts exactly as long as the music.

Each time she considers an example in the training set, you will see her think (her neurons and her synaptic connections glow). She will then calculate the error (the difference between her output and the desired output) and propagate this error backwards, adjusting her synaptic connections.
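The think → error → backpropagate loop she runs on each example can be sketched as a standard two-layer backpropagation step. The layer sizes here are assumed (3-4-1); this is a reconstruction of the technique, not the author's exact code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = 2 * rng.random((3, 4)) - 1       # assumed 3-4-1 architecture
W2 = 2 * rng.random((4, 1)) - 1

x = np.array([[0, 1, 1]])             # one training example
target = np.array([[1]])

# Think: the signal flows forward through the synapses.
hidden = sigmoid(x @ W1)
output = sigmoid(hidden @ W2)

# Error: the difference between the output and the desired output.
error = target - output

# Propagate backwards: adjust each synapse in proportion to how much
# it contributed to the error (sigmoid's derivative is s * (1 - s)).
delta_out = error * output * (1 - output)
delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
W2 += hidden.T @ delta_out
W1 += x.T @ delta_hid
```

After this single adjustment, the network's answer for this example moves a little closer to the desired output; repeating the step over all the examples, many thousands of times, is the whole of training.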

Green synaptic connections represent positive weights (a signal flowing through this synapse will excite the next neuron to fire). Red synaptic connections represent negative weights (a signal flowing through this synapse will inhibit the next neuron from firing). Thicker synapses represent stronger connections (larger weights).
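That colour-and-thickness convention maps directly onto code. Here `synapse_style` is a hypothetical helper expressing the mapping described above, not a function from the author's source:

```python
def synapse_style(weight):
    """Map a synaptic weight to the drawing style described above:
    green for excitatory (positive), red for inhibitory (negative),
    and line width proportional to the weight's strength."""
    colour = 'green' if weight > 0 else 'red'
    return colour, abs(weight)

print(synapse_style(0.8))   # a strong excitatory synapse
print(synapse_style(-0.3))  # a weaker inhibitory synapse
```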

In the beginning, her synaptic weights are randomly assigned. Notice how some synapses are green (positive) and others are red (negative). If these synapses turn out to be beneficial in calculating the right answer, she will strengthen them over time. However, if they are unhelpful, these synapses will wither. It’s even possible for a synapse which was originally positive to become negative, and vice versa. An example of this is the first synapse into the output neuron — early on in the video it turns from red to green. In the beginning her brain looks like this:

Our neural network before she starts training.

Did you notice that all her neurons are dark? This is because she isn’t currently thinking about anything. The numbers to the right of each neuron represent the level of neural activity and vary between 0 and 1.

Ok. Now she is going to think about the pattern we saw earlier. Watch the video carefully to see her synapses grow thicker as she learns.

Video of our neural network learning.

Did you notice how I slowed the video down at the beginning by skipping only a small number of cycles? When I first shot the video, I didn’t do this. However, I realised that learning is subject to the law of diminishing returns: the neural network changes most rapidly during the initial stage of training, which is why I slowed this bit down.
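One way to achieve that pacing (a hypothetical reconstruction, not the author's actual frame selection) is to space the 13 sampled cycles geometrically, so early cycles are sampled densely and later ones sparsely:

```python
import numpy as np

# Pick 13 of the 60,000 training cycles, denser at the start where
# the network changes fastest (diminishing returns).
frames = np.round(np.geomspace(1, 60000, num=13)).astype(int)
print(frames)
```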

Now that she has learned the pattern from the 7 examples in the training set, let’s examine her brain again. Do you see how she has strengthened some of her synapses at the expense of others? For instance, do you remember how the third column in the training set is irrelevant in determining the answer? You can see she has discovered this, because the synapses coming out of her third input neuron have almost withered away relative to the others.

Our neural network after she has finished training.

Let’s give her a new situation [1, 1, 0] to think about. You can see her neural pathways light up.

Our neural network considering a new situation.

She has estimated 0.01. The correct answer is 0. So she was very close!
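Putting the pieces together, a full training run followed by the new situation looks roughly like this. It uses the same assumed 3-4-1 network and reconstructed training set as the sketches above; how close the estimate lands to the 0.01 in the post will depend on the initial random weights:

```python
import numpy as np
from itertools import product

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed training set: all 3-bit inputs except [1, 1, 0],
# labelled with the XOR of the first two columns.
rows = [r for r in product([0, 1], repeat=3) if r != (1, 1, 0)]
X = np.array(rows)
y = np.array([[a ^ b] for a, b, _ in rows])

rng = np.random.default_rng(1)
W1 = 2 * rng.random((3, 4)) - 1
W2 = 2 * rng.random((4, 1)) - 1

for _ in range(60000):                 # cycle through the examples
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    delta_out = (y - output) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ delta_out
    W1 += X.T @ delta_hid

# The new situation: [1, 1, 0], whose correct answer is 0.
new = sigmoid(sigmoid(np.array([[1, 1, 0]]) @ W1) @ W2)
print(float(new))                      # the network's estimate
```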

Pretty cool. Traditional computer programs can’t learn. But neural networks can learn and adapt to new situations. Just like the human mind!

How did I do it? I used the Python library matplotlib, which provides methods for drawing and animation. I created the glow effects using alpha transparency.
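The full drawing code is linked below; here is a minimal reconstruction of the alpha-layering idea behind the glow effect. Drawing the same line several times, each pass wider and more transparent, blends the strokes into a soft halo:

```python
import matplotlib
matplotlib.use('Agg')                  # render off-screen
import matplotlib.pyplot as plt

# Glow sketch: repeat the same stroke, each pass wider and more
# transparent, so the layers blend into a soft halo around the line.
fig, ax = plt.subplots()
for width, alpha in [(12, 0.08), (8, 0.15), (4, 0.3), (1.5, 1.0)]:
    ax.plot([0, 1], [0, 1], color='limegreen', linewidth=width, alpha=alpha)
ax.set_axis_off()
fig.savefig('glow.png', facecolor='black')
```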

You can view my full source code here:

Thanks for reading!

If you enjoyed reading this article, please click the heart icon to ‘Recommend’.




Milo Spencer-Harper
