Deep Neural Network — (Explain Like I’m Five)

Deep neural networks (and machine learning more generally) are among the most sought-after technology skills, as they are going to change our lives more than we can imagine. But learning about deep neural networks can be overwhelming for beginners. Below, I have attempted to explain deep neural networks in the easiest way, with an analogy.

I had a favorite cook who used to prepare two dishes. He was my favorite because he always knew which dish to prepare to make me happy.

But he kept his rules secret. He would look at the inputs (on the left side of the image), such as sunny, rainy, or guests, and decide whether to prepare pizza or cake (shown on the right side).

Apparently, his secret rules were:
1. Prepare pizza on rainy days
2. Cake on sunny days
3. Cake if guests are visiting, even on rainy days
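The cook's three secret rules can be sketched as a tiny decision function; the function name and structure here are my own illustration, not from the story:

```python
# The cook's secret rules as a simple decision function.
# Inputs are booleans: sunny, rainy, guests.
def cooks_rule(sunny, rainy, guests):
    if guests:      # Rule 3: cake if guests are visiting, even on rainy days
        return "cake"
    if rainy:       # Rule 1: pizza on rainy days
        return "pizza"
    return "cake"   # Rule 2: cake on sunny days

print(cooks_rule(sunny=True, rainy=False, guests=False))   # cake
print(cooks_rule(sunny=False, rainy=True, guests=False))   # pizza
print(cooks_rule(sunny=False, rainy=True, guests=True))    # cake
```

Note that rule 3 is checked first, because guests override the weather.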

But one day he left the job!

I hired a new cook and told him that I had high expectations of him.
The smart new cook asked for a few days of training time, and for my feedback every day during the training period.

I agreed!

On day 1, he prepared a pizza on a sunny day.

I showed my displeasure, saying that I would have preferred something sweet. He took note of it.

On day 2, he prepared a cake when it was raining outside.

I told him that pizza would have suited my mood that day. He made a note to prepare pizza when it is raining.

On day 3, he prepared a pizza, as it was raining outside. My friends were visiting my house.

I told the cook that my friends like cake more than pizza, whatever the weather outside. He noted the new rule.

On day 4, the cook declared that he was now trained and could correctly decide the dish.

Neural networks are similar to the smart cook.

Neural networks are good at duplicating any behavior through trial and error.

A neural network communicates using numbers. If an input is true, it is represented as 1; if false, as 0.
The blue circles below are called nodes. The nodes are connected by neural connections, each carrying a weight. The higher the weight, the stronger the connection. Think of a weight as the number of votes that a node casts. A vote can be positive or negative. The votes at the final layer are added up to decide the choice.
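This voting idea can be sketched in a few lines of Python. The weights below are illustrative values of my own choosing, not the ones in the article's diagram:

```python
# Each active input (value 1) casts weight-many votes for each dish;
# the dish with the most votes wins. Weights here are illustrative.
inputs = {"sunny": 1, "rainy": 0, "guests": 0}   # 1 = true, 0 = false

# weights[input][dish] = votes that input casts for that dish
weights = {
    "sunny":  {"cake": 1, "pizza": 0},
    "rainy":  {"cake": 0, "pizza": 1},
    "guests": {"cake": 1, "pizza": 0},
}

votes = {"cake": 0, "pizza": 0}
for name, value in inputs.items():
    for dish in votes:
        votes[dish] += value * weights[name][dish]   # inactive inputs (0) add nothing

choice = max(votes, key=votes.get)
print(choice)   # sunny day -> cake
```

Multiplying by the input value is what makes inactive (0) inputs drop out of the vote, exactly as in the story.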

The day 1 trial is iteration 1 of training. Below, the connection weights are assigned randomly. Since the rainy and guest inputs are zero, their nodes do not contribute to the voting process.

As the feedback is negative, the neural network corrects the weights (votes) as follows:

In iteration 2, the neural network assumes the following:

After learning from the feedback, the neural network corrects the weights (votes) as below:
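One hedged way to picture this correction step is a perceptron-style update, where every active input shifts one vote away from the wrong dish and toward the expected one. The function and the size of the shift are my own illustration, not the article's exact rule:

```python
# Hypothetical sketch of the feedback step: when the chosen dish is wrong,
# each active input moves one vote from the wrong dish to the right one.
def correct_weights(weights, inputs, wrong_dish, right_dish):
    for name, value in inputs.items():
        if value == 1:                       # only active inputs get corrected
            weights[name][wrong_dish] -= 1
            weights[name][right_dish] += 1
    return weights

# Day 2: it was raining, the cook made cake, but pizza was expected.
weights = {"rainy": {"cake": 1, "pizza": 0}}
inputs = {"rainy": 1}
correct_weights(weights, inputs, wrong_dish="cake", right_dish="pizza")
print(weights["rainy"])   # {'cake': 0, 'pizza': 1}
```

Only the inputs that were active when the mistake happened get blamed, which is why rainy-day feedback does not disturb the sunny-day rule.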

Iteration 3: The neural network has already learnt to choose pizza when it is raining, so it votes for pizza.

But cake was expected because friends were visiting, so the neural network modifies the guest-related votes to give more votes to cake than to pizza. The layer on the right sums the votes to arrive at cake as the expected output.

After the training phase, the neural network is the combination of all the learnt connections. Connections with 0 weight (votes) are omitted in the diagram, as they do not contribute to the voting process. Any time an input is 1 (say, on a sunny day), its node will generate its output (cake).
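The trained network from the story can be sketched end to end. The vote counts below are illustrative values I chose so that the guest vote outweighs the rainy vote, as the learnt rules require:

```python
# The trained network as a table of learnt votes (illustrative weights).
trained = {
    "sunny":  {"cake": 1, "pizza": 0},
    "rainy":  {"cake": 0, "pizza": 1},
    "guests": {"cake": 2, "pizza": 0},   # strong enough to beat the rainy vote
}

def decide(sunny, rainy, guests):
    inputs = {"sunny": sunny, "rainy": rainy, "guests": guests}
    votes = {"cake": 0, "pizza": 0}
    for name, value in inputs.items():
        for dish in votes:
            votes[dish] += value * trained[name][dish]
    return max(votes, key=votes.get)

print(decide(1, 0, 0))   # cake on sunny days
print(decide(0, 1, 0))   # pizza on rainy days
print(decide(0, 1, 1))   # cake when guests visit, even in rain
```

The three print lines reproduce the cook's three secret rules, which is exactly what "trained" means here.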

Let’s go DEEP

Over time, my choices have become more complex. My new preferences are as below:

• If there is a party at home with friends and parents then I want a cake.
• Sometimes I cut costs, when there are a lot of guests at home and too many veggies in the fridge. I prefer pizza at those times.
• If I am playing cricket on a bright day with my friends, then I prefer a cake.
• While watching football on TV, I like to eat pizza. I watch a football match when I have friends at home and we can't go out to play because of rain, and my parents are not at home.
• If there is a party at home and I am low on budget, then I will have pizza instead of cake.
• I study when it's bright outside and no one is at home. I eat pizza on those days.

We can work through these conditions just as in the story above and arrive at the trained 'DEEP' neural network below:

A red line indicates a negative weight, usually -1; the exceptions are the -3 and -2 weights shown above. All the green lines carry a positive weight of +1. These weights (votes) are added at the final layer to arrive at the choice of dish.

The above neural network is called a deep neural network because it has an input layer on the left, an output layer on the right, and (at least) one hidden layer in the middle. In complex modeling, the number of hidden layers may increase.
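A single hidden node can be sketched with a threshold (step) activation. The weights and threshold below are illustrative, not read off the diagram: a hypothetical hidden "party" node fires only when both friends and parents are at home, i.e. it computes an AND of its inputs:

```python
# A hidden node fires (outputs 1) only if its weighted votes reach a threshold.
def step(total, threshold):
    return 1 if total >= threshold else 0

def party_node(friends, parents):
    # +1 vote from each input; needs 2 votes to fire, so this is an AND.
    return step(friends * 1 + parents * 1, threshold=2)

print(party_node(1, 1))   # 1: party detected -> this node then votes for cake
print(party_node(1, 0))   # 0: no party
```

Hidden nodes like this let the network combine raw inputs into intermediate concepts ("party", "match on TV") before the final vote, which is what makes the complex preferences above learnable.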

This was a binary classification problem, as the output had only two values (cake, pizza). Other examples of binary classification are yes/no, present/absent, good/defective, cat/no cat. If we extend the problem to multiple choices of dishes (say cake, pizza, burger, sandwich), it becomes a multiclass classification problem.
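Going multiclass only changes the final tally: votes are summed for every dish and the dish with the most votes wins. The vote counts here are made up for illustration:

```python
# Multiclass: the final layer picks the dish with the highest vote total.
def classify(votes):
    # votes: dict mapping each dish to its summed votes
    return max(votes, key=votes.get)

tally = {"cake": 1, "pizza": 3, "burger": 0, "sandwich": 2}
print(classify(tally))   # pizza
```

The same pick-the-maximum step worked for two dishes; with more dishes nothing changes except the size of the tally.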

If you liked the blog, please clap for me. Thanks!