Understanding Neural Networks: A Simple Guide to Forward Pass and Backpropagation Using ReLU

Mohamed Z Elbasheer
4 min read · Aug 12, 2024


Neural networks are at the heart of many exciting technologies today, from voice recognition systems to self-driving cars. But how do they actually work? In this blog, we’ll break down the inner workings of a neural network by walking through a simple example, using the ReLU activation function to explain the forward pass and backpropagation.

What is a Neural Network?

A neural network is a computational model inspired by the way our brains work. It consists of layers of neurons (also known as nodes), where each neuron takes in some input, processes it, and passes it on to the next layer. The output from the last layer is the network’s prediction.

In this example, we’ll use a single neuron with one weight and one bias, the simplest possible form of a neural network.

Our Example Setup

Let’s start with a simple example:

  • Input (x): 9
  • True Output (y_true): 1
  • Initial Weight (w): 2
  • Initial Bias (b): 3

Our goal is to adjust the weight and bias so that the network’s prediction (y_pred) gets closer to the true output (y_true).
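All of the arithmetic in this post can be reproduced with a few lines of plain Python. Here is a minimal setup sketch with the values above (the variable names are my own choice, not from any library):

```python
# The toy example's input, target, and initial parameters.
x = 9.0       # input
y_true = 1.0  # true output
w = 2.0       # initial weight
b = 3.0       # initial bias
```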

Step 1: Forward Pass

The forward pass is the process of moving from the input to the output. Here’s how it works:

1.1 Calculate the Linear Combination

First, we calculate a weighted sum of the input plus the bias, often referred to as z:

z = w * x + b = 2 * 9 + 3 = 21

1.2 Apply the ReLU Activation Function

Next, we apply the ReLU activation function to z. ReLU stands for Rectified Linear Unit and is defined as:

ReLU(z) = max(0, z)

Since z = 21 is positive, ReLU simply passes it through: ReLU(21) = 21.

So, the predicted output y_pred is 21, but our true output y_true is 1, so we have some work to do to improve our prediction. Note that we could have used the Sigmoid or Tanh activation instead; either would have squashed z = 21 to approximately 1, which happens to be a near-perfect prediction here. But because we want to see how this neural network learns, we elect to use ReLU.
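As a quick check, here is the forward pass as a short sketch:

```python
# Forward pass: linear combination followed by ReLU.
x, y_true = 9.0, 1.0
w, b = 2.0, 3.0

z = w * x + b          # 2 * 9 + 3 = 21
y_pred = max(0.0, z)   # ReLU(21) = 21

print(z, y_pred)       # 21.0 21.0
```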

Step 2: Calculate the Loss

To measure how far off our prediction is from the true output, we calculate the loss. A common choice is the Mean Squared Error (MSE), which for a single sample is:

L = (y_pred - y_true)^2 = (21 - 1)^2 = 400
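The same number falls out of one line of Python; a quick sketch reusing the values from the forward pass:

```python
# Squared error for our single sample.
y_true, y_pred = 1.0, 21.0
loss = (y_pred - y_true) ** 2
print(loss)  # 400.0
```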

This loss value tells us that our prediction is quite far from the true value. The next step is to adjust the weight and bias to reduce this loss.

Step 3: Backpropagation

Backpropagation is how the network learns. It involves calculating the gradient (or slope) of the loss function with respect to each parameter (the weight and bias) and updating them to reduce the loss.

3.1 Calculate Gradients

Let’s find the gradient of the loss with respect to the weight w and bias b, using the chain rule:

dL/dy_pred = 2 * (y_pred - y_true) = 2 * (21 - 1) = 40

dy_pred/dz = ReLU'(z) = 1 (since z = 21 > 0)

dz/dw = x = 9 and dz/db = 1

Putting it together:

dL/dw = 40 * 1 * 9 = 360
dL/db = 40 * 1 * 1 = 40
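Here is the same chain rule written out as a sketch (dL_dw and dL_db are just my notation for the gradients):

```python
# Backpropagation through loss -> ReLU -> linear layer.
x, y_true = 9.0, 1.0
z, y_pred = 21.0, 21.0

dL_dy = 2 * (y_pred - y_true)    # dL/dy_pred = 40
dy_dz = 1.0 if z > 0 else 0.0    # ReLU'(z) = 1, since z > 0
dL_dw = dL_dy * dy_dz * x        # 40 * 1 * 9 = 360
dL_db = dL_dy * dy_dz * 1.0      # 40 * 1 * 1 = 40

print(dL_dw, dL_db)              # 360.0 40.0
```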

3.2 Update the Weight and Bias

Now, we update the weight and bias using a learning rate α = 0.01:

w_new = w - α * dL/dw = 2 - 0.01 * 360 = -1.6
b_new = b - α * dL/db = 3 - 0.01 * 40 = 2.6

These new values for the weight and bias give a better prediction on the next forward pass: z = -1.6 * 9 + 2.6 = -11.8, so ReLU outputs y_pred = 0 and the loss drops from 400 to (0 - 1)^2 = 1, bringing the network much closer to the true output.
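Here is the update step as a sketch, with a second forward pass to confirm the improvement:

```python
# Gradient descent update, then a second forward pass to check progress.
x, y_true = 9.0, 1.0
w, b = 2.0, 3.0
dL_dw, dL_db = 360.0, 40.0
lr = 0.01  # the learning rate alpha

w = w - lr * dL_dw   # 2 - 3.6 = -1.6
b = b - lr * dL_db   # 3 - 0.4 = 2.6

y_pred = max(0.0, w * x + b)     # ReLU(-11.8) = 0
print((y_pred - y_true) ** 2)    # 1.0, down from 400.0
```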

Conclusion

This example illustrates the basic steps of how a neural network with a single layer learns from data using the ReLU activation function. By repeatedly performing the forward pass and backpropagation, the network gradually improves its predictions.
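To make “repeatedly” concrete, here is a minimal end-to-end training loop stringing the three steps together; it is a sketch of the procedure described above, not production code:

```python
# A tiny training loop: forward pass, loss, backpropagation, update.
x, y_true = 9.0, 1.0
w, b = 2.0, 3.0
lr = 0.01

for step in range(5):
    z = w * x + b
    y_pred = max(0.0, z)             # ReLU
    loss = (y_pred - y_true) ** 2    # MSE for one sample

    dL_dy = 2 * (y_pred - y_true)
    dy_dz = 1.0 if z > 0 else 0.0    # ReLU derivative
    w -= lr * dL_dy * dy_dz * x
    b -= lr * dL_dy * dy_dz

    print(f"step {step}: loss = {loss}")
```

With these particular numbers, the loss falls from 400 to 1 after the first update; once z turns negative, ReLU’s gradient is zero, so this single neuron stops updating, which is one reason real networks use many neurons rather than one.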

Understanding these concepts is crucial as they form the foundation of more complex neural networks, including those used in deep learning. Hopefully, this breakdown makes neural networks a bit less mysterious!

