Implement Back Propagation in Neural Networks

Deepak Battini
Coinmonks

--

When building neural networks, there are several steps to take. Perhaps the two most important are implementing forward and backward propagation. Both terms sound heavy and often intimidate beginners, but the truth is that each technique can be properly understood once it is broken down into its individual steps. In this tutorial, we will focus on backpropagation and the intuition behind every step of it.

What is Back Propagation?

Back propagation is the technique in a neural network that allows us to calculate the gradients of the parameters, so that we can perform gradient descent and minimize our cost function. It is often described as arguably the most mathematically intensive part of a neural network. Relax, though, as we will completely decipher every part of back propagation in this tutorial.
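To see where those gradients end up, here is a minimal sketch of a single gradient descent update in NumPy. The matrix `W`, its gradient `dW`, and the learning rate `alpha` are illustrative assumptions rather than values from the article:

```python
import numpy as np

# Hypothetical parameter matrix and its gradient; the shapes and the
# learning rate are assumptions for illustration.
W = np.random.randn(4, 3)    # a weight matrix
dW = np.random.randn(4, 3)   # stand-in for the gradient of the cost w.r.t. W
alpha = 0.01                 # learning rate

# One gradient descent step: move W a little against its gradient,
# which locally decreases the cost.
W = W - alpha * dW
```

Backward propagation is the part that produces `dW`; the update itself is just this one subtraction.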

Implementing Back Propagation

Assume a simple two-layer neural network with one hidden layer and one output layer. We can perform back propagation as follows:

Initialize the weights and biases to be used for the neural network: This involves randomly initializing the weights and biases of the network. The gradients of these parameters will be obtained from backward propagation and used to update the parameters…
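As a minimal sketch of this initialization step in NumPy (the function name `initialize_parameters` and the layer sizes `n_x`, `n_h`, `n_y` are assumptions for illustration, not from the article):

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    """Randomly initialize weights and zero the biases for a two-layer
    network. n_x, n_h, n_y are hypothetical sizes for the input, hidden,
    and output layers."""
    W1 = np.random.randn(n_h, n_x) * 0.01  # small random weights, hidden layer
    b1 = np.zeros((n_h, 1))                # hidden-layer bias
    W2 = np.random.randn(n_y, n_h) * 0.01  # small random weights, output layer
    b2 = np.zeros((n_y, 1))                # output-layer bias
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}

params = initialize_parameters(n_x=3, n_h=4, n_y=1)
```

Scaling the weights by 0.01 keeps the initial activations small, and the biases can safely start at zero because the random weights already break the symmetry between hidden units.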
