Basics of Deep Learning explained in 3 minutes

A Practical Step-by-Step explanation

Imen Bouzidi
Analytics Vidhya
3 min read · Dec 23, 2020


Deep learning is a sub-field of machine learning that relies on algorithms called artificial neural networks, whose structure and function are loosely inspired by the human brain. A neural network simulates the way biological neurons transmit and process data, turning raw input into useful information.

The basic form of a neural network is the perceptron, a single-layer neural network model for supervised binary classification, invented in 1957 by Frank Rosenblatt [1].

Perceptron:

This model contains a single neuron (computational unit), and that simplicity makes it a good starting point for understanding how a neural network works. The perceptron’s structure during forward propagation is shown below:

Figure simulating the work of a perceptron
Image created by author

The predicted output ŷ for an input vector x is ŷ = φ(w · x + b), where w is the weight vector, b is the bias, and φ is the activation function.

The perceptron algorithm has two main phases: first forward propagation, then backpropagation.

1/ Steps of forward propagation:

  • Initialize the weights (usually at random) and the input layer, which takes a numerical representation of the data.
  • Calculate the weighted sum of the inputs.
  • Apply the activation function (Rectified Linear Unit, Sigmoid, tanh, Softmax…) to the weighted sum plus the bias; this gives the predicted output (see the sketch after this list).
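To make these steps concrete, here is a minimal NumPy sketch of one forward pass through a single neuron. The input values, the random weight initialization, and the choice of a sigmoid activation are assumptions made for this example, not values from the figure above:

import numpy as np

# Illustrative sketch only: a single-neuron (perceptron) forward pass.

def sigmoid(z):
    """Sigmoid activation squashes the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 3.0])   # numerical representation of one sample
w = rng.normal(size=x.shape)     # step 1: random weight initialization
b = 0.0                          # bias term

z = np.dot(w, x) + b             # step 2: weighted sum of inputs (+ bias)
y_hat = sigmoid(z)               # step 3: activation gives the predicted output

print(f"weighted sum z = {z:.3f}, predicted output y_hat = {y_hat:.3f}")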

2/ Steps of backpropagation:

  • Calculate the loss function (Mean Squared Error, Mean Absolute Error, Binary Cross-Entropy…) between the predicted output and the desired output.
  • Update the weights so as to reduce that loss.

These two phases are repeated, updating the weights at every iteration, until the error is minimized; a minimal training-loop sketch is shown below.
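The following sketch repeats the two phases in a loop. The toy OR-like dataset, the learning rate, and the use of a sigmoid output with binary cross-entropy loss (one of the losses listed above) are assumptions chosen for illustration, not the only possible setup:

import numpy as np

# Illustrative sketch only: forward propagation + backpropagation
# repeated until the error becomes small.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)          # desired outputs

rng = np.random.default_rng(1)
w = rng.normal(size=X.shape[1])                  # random initial weights
b = 0.0
lr = 0.5                                         # learning rate

for epoch in range(1000):
    # Forward propagation: weighted sum + activation -> predicted output
    y_hat = sigmoid(X @ w + b)

    # Loss: binary cross-entropy between predicted and desired output
    loss = -np.mean(y * np.log(y_hat + 1e-12) + (1 - y) * np.log(1 - y_hat + 1e-12))

    # Backpropagation: gradient of the loss w.r.t. weights and bias
    error = y_hat - y                            # dL/dz for sigmoid + cross-entropy
    grad_w = X.T @ error / len(y)
    grad_b = error.mean()

    # Update the weights (gradient descent step)
    w -= lr * grad_w
    b -= lr * grad_b

print("final loss:", round(loss, 4))
print("predictions:", np.round(sigmoid(X @ w + b), 2))

Each iteration runs a forward pass, measures the loss, and uses its gradient to update the weights, which is exactly the loop described above.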

This simplified diagram summarizes the learning process of a perceptron:

Image created by author

In conclusion, an artificial neural network is a composition of multiple connected perceptrons, possibly using different activation functions.

The number of hidden layers defines the depth of the network. A deep neural network is therefore composed of more than one hidden layer, as follows:

Image created by author
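As a rough sketch, assuming two hidden layers with ReLU activations feeding a single sigmoid output neuron (the layer sizes are my own choices for the example), the forward pass of such a deep network simply chains the perceptron computation layer by layer:

import numpy as np

# Illustrative sketch only: stacking more than one hidden layer
# turns the single perceptron into a deep network.

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
x = rng.normal(size=4)                       # one input vector with 4 features

# Two hidden layers (depth = 2) followed by a single output neuron
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(1, 8)), np.zeros(1)

h1 = relu(W1 @ x + b1)                       # first hidden layer
h2 = relu(W2 @ h1 + b2)                      # second hidden layer
y_hat = sigmoid(W3 @ h2 + b3)                # output layer (binary classification)

print("predicted output:", y_hat)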

References:

[1] Marvin Minsky and Seymour Papert. Perceptrons, volume 1. M.I.T. Press, 1969.
