Published in

Deep Learning 101

# Video of a neural network learning

As part of my quest to learn about AI, I generated a video of a neural network learning.

Many of the examples on the Internet use matrices (grids of numbers) to represent a neural network. This method is favoured because it is:

• mathematically equivalent to a neural network
• computationally faster

However, it’s difficult to understand what is happening. From a learning perspective, being able to visually see a neural network is hugely beneficial.

The video you are about to see shows a neural network trying to solve this pattern. Can you work it out?

It’s the same problem I posed in my previous blog post. The trick is to notice that the third column is irrelevant, but the first two columns exhibit the behaviour of an XOR gate. If either the first column or the second column is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0.

So the correct answer is 0.
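The rule she is trying to discover can be written as a one-line function. This is a sketch of the pattern itself, not of the network; `pattern_output` is my own name for it:

```python
def pattern_output(row):
    # The third column is irrelevant; the answer is the XOR of the first two.
    first, second, _ = row
    return first ^ second

for row in ([0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]):
    print(row, '->', pattern_output(row))
```

Note that no single column determines the answer on its own, which is exactly why this problem needs a hidden layer.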

Our neural network will cycle through these 7 examples 60,000 times. To speed up the video, I will only show you 13 of these cycles, pausing for a second on each frame. Why the number 13? It ensures the video lasts exactly as long as the music.

Each time she considers an example in the training set, you will see her think (her neurons and synaptic connections will glow). She will then calculate the error (the difference between her output and the desired output) and propagate this error backwards, adjusting her synaptic connections.
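One cycle of think–error–adjust can be sketched as standard backpropagation with a sigmoid activation. This is my own reconstruction, not the author's actual code; `w_hidden` and `w_output` are assumed names for the two layers of synaptic weights:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def train_step(inputs, target, w_hidden, w_output):
    # Forward pass: the "thinking" phase, where her neurons light up.
    hidden = sigmoid(inputs @ w_hidden)   # hidden-layer activity
    output = sigmoid(hidden @ w_output)   # output-neuron activity

    # The error: the difference between her output and the desired output.
    error = target - output

    # Backward pass: propagate the error and adjust the synaptic weights.
    delta_out = error * output * (1 - output)
    delta_hidden = (delta_out @ w_output.T) * hidden * (1 - hidden)
    w_output += np.outer(hidden, delta_out)
    w_hidden += np.outer(inputs, delta_hidden)
    return output
```

Repeating this step over the 7 examples, 60,000 times, is the whole of the learning shown in the video.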

Green synaptic connections represent positive weights (a signal flowing through this synapse will excite the next neuron to fire). Red synaptic connections represent negative weights (a signal flowing through this synapse will inhibit the next neuron from firing). Thicker synapses represent stronger connections (larger weights).
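The colour-and-thickness scheme amounts to a small mapping from each weight to a drawing style. This is a sketch of the rule described above; the scale factor of 3 is my own guess, not a value from the author's code:

```python
def synapse_style(weight):
    # Green for excitatory (positive) weights, red for inhibitory (negative) ones.
    colour = 'green' if weight >= 0 else 'red'
    # Line width grows with the weight's magnitude: stronger connection, thicker synapse.
    thickness = abs(weight) * 3
    return colour, thickness
```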

In the beginning, her synaptic weights are randomly assigned. Notice how some synapses are green (positive) and others are red (negative). If these synapses turn out to be beneficial in calculating the right answer, she will strengthen them over time. However, if they are unhelpful, these synapses will wither. It’s even possible for a synapse which was originally positive to become negative, and vice versa. An example of this is the first synapse into the output neuron: early on in the video it turns from red to green. In the beginning her brain looks like this:

Did you notice that all her neurons are dark? This is because she isn’t currently thinking about anything. The numbers to the right of each neuron represent the level of neural activity and vary between 0 and 1.
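Activity levels confined between 0 and 1 are what a sigmoid activation function produces: it squashes any weighted sum of inputs into that range. I am assuming a sigmoid here, as in the earlier posts in this series:

```python
import math

def activity(weighted_input):
    # The sigmoid squashes any real number into the interval (0, 1),
    # matching the activity figures shown beside each neuron.
    return 1 / (1 + math.exp(-weighted_input))

print(activity(0))  # a neuron receiving no net signal sits at 0.5
```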

Ok. Now she is going to think about the pattern we saw earlier. Watch the video carefully to see her synapses grow thicker as she learns.

Did you notice how I slowed the video down at the beginning, by skipping only a small number of cycles? When I first shot the video, I didn’t do this. However, I realised that learning is subject to the ‘Law of diminishing returns’. The neural network changes more rapidly during the initial stage of training, which is why I slowed this bit down.
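One way to get this effect is to space the 13 sampled cycles geometrically rather than evenly, so they cluster near the start of training. This is my guess at an approach, not the author's actual method:

```python
def frame_cycles(total_cycles=60_000, frames=13):
    # Geometric spacing: consecutive sampled cycles differ by a constant
    # ratio, so frames are dense early on (where learning is fastest)
    # and sparse later (diminishing returns).
    ratio = total_cycles ** (1 / (frames - 1))
    return sorted({round(ratio ** i) for i in range(frames)})

print(frame_cycles())
```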

Now that she has learned about the pattern using the 7 examples in the training set, let’s examine her brain again. Do you see how she has strengthened some of her synapses, at the expense of others? For instance, do you remember how the third column in the training set is irrelevant in determining the answer? You can see she has discovered this, because the synapses coming out of her third input neuron have almost withered away, relative to the others.

Let’s give her a new situation [1, 1, 0] to think about. You can see her neural pathways light up.

She has estimated 0.01. The correct answer is 0. So she was very close!

Pretty cool. Traditional computer programs follow fixed instructions and can’t learn. But neural networks can learn and adapt to new situations. Just like the human mind!

How did I do it? I used the Python library matplotlib, which provides methods for drawing and animation. I created the glow effects using alpha transparency.
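The glow trick can be sketched by layering translucent circles: each ring is larger and fainter than the last, and the overlap reads as a glow. This is a minimal sketch assuming matplotlib patches, not the author's actual drawing code; the radii, colours, and alpha values are my own choices:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt

def draw_glowing_neuron(ax, x, y, activity):
    # Five concentric translucent halos; brighter neurons (higher
    # activity) get more opaque halos, so they appear to glow more.
    for i in range(1, 6):
        ax.add_patch(plt.Circle((x, y), radius=0.05 * i,
                                color='yellow', alpha=0.15 * activity))
    # The solid neuron body on top.
    ax.add_patch(plt.Circle((x, y), radius=0.05, color='orange'))

fig, ax = plt.subplots()
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_aspect('equal')
draw_glowing_neuron(ax, 0.5, 0.5, activity=0.9)
fig.savefig('neuron.png')
```

Animating these frames with `matplotlib.animation` then produces the video.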

You can view my full source code here:



## Milo Spencer-Harper

Studied Economics at Oxford University. Founder of www.magimetrics.com, acquired by www.socialstudies.io. PM at Facebook. Interested in machine learning.