Feed Forward Neural Network From Scratch

Namita

To get started head first with the code, a certain clarity behind the scenes is required which I will be happy to provide in as simple terms as possible.

Let’s create our own data with Numpy.

# import the library
import numpy as np

# create data
x = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1]
])
y = np.array([[1], [1], [0]])  # one label per row of x

If you look closely, you will see that this is binary data, so we will use a sigmoid activation function for this kind of dataset. The activation function determines whether, and how strongly, a neuron is activated based on its weighted sum and bias; its purpose is to add non-linearity to the network.

Now that we have the data, we just need to fit it into a neural network and get predictions. Sounds simple, right? Well, it is.

Figure 1(a): A feed-forward network with four input neurons, a hidden layer, and one output neuron.

One golden rule to always remember is that the number of input neurons in a neural network is always equal to the number of columns in the dataset. There can be any number of neurons in the hidden layer, and each will have a bias term attached to it.

# Input neurons = always equal to the number of columns
inputNeurons = x.shape[1]

# The number of hidden neurons can be chosen freely; I have chosen 3
hiddenNeurons = 3

# Number of output neurons
outputNeurons = 1

In Figure 1(a), there are 4 inputs corresponding to the four columns in our dataset. To propagate from these inputs to the hidden neurons, we calculate a weighted sum of all the inputs and add a bias term. The bias shifts the weighted sum, so a neuron can still produce a non-zero output even when the dot product of the inputs with the weights is zero.

z = X · W + b

Let’s create our weights and biases for the hidden neurons.

# Hidden weights
wtsHidden = np.random.uniform(size=(inputNeurons, hiddenNeurons))

# Bias term for hidden neurons
biasHidden = np.random.uniform(size=(1, hiddenNeurons))

z is then passed to the hidden neurons, where an activation function is applied to it; this forms the hidden layer.

# Calculate the weighted sum of inputs
z = np.dot(x, wtsHidden) + biasHidden
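As a quick sanity check, here is how the shapes of the arrays defined above line up (the print statements are just illustrative):

# Quick shape check (assumes x, wtsHidden, biasHidden and z from above)
print(x.shape)          # (3, 4) -> 3 rows, 4 columns
print(wtsHidden.shape)  # (4, 3) -> 4 inputs feeding 3 hidden neurons
print(biasHidden.shape) # (1, 3) -> one bias per hidden neuron
print(z.shape)          # (3, 3) -> one weighted sum per row, per hidden neuron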

We will be passing the weighted sum through sigmoid activation function, so let’s write the code for the function as well.

# Sigmoid activation function
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hidden layer
hiddenLayer = sigmoid(z)

Now the inputs have reached the hidden layer through their respective weights and biases. To propagate ahead in the network, we again calculate a weighted sum, this time of the hidden-layer neurons, and add a bias to reach the output layer.

# Output weights
wtsOutput = np.random.uniform(size=(hiddenNeurons, outputNeurons))

# Output bias
biasOutput = np.random.uniform(size=(1, outputNeurons))

# Weighted sum at the output layer
outputLayer = np.dot(hiddenLayer, wtsOutput) + biasOutput

After taking the dot product of the hidden layer with the output weights and adding the bias, we pass the result through the activation function once more to get the final predictions.

result = sigmoid(outputLayer)
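The sigmoid squashes each output to a value between 0 and 1. If you want hard 0/1 predictions, one common choice (an assumed convention, not part of the network itself) is to threshold at 0.5:

# Turn the sigmoid outputs into 0/1 labels (the 0.5 threshold is an assumed convention)
predictions = (result > 0.5).astype(int)
print(result)       # raw sigmoid outputs, shape (3, 1)
print(predictions)  # thresholded binary predictions, shape (3, 1)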

So, this was a simple approach to understanding a feed-forward neural network.
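To recap, here is a minimal sketch of the whole forward pass wrapped in a single function, reusing sigmoid and the weights and biases created above (the name forwardPass is just for illustration):

def forwardPass(x, wtsHidden, biasHidden, wtsOutput, biasOutput):
    # Input -> hidden: weighted sum plus bias, passed through sigmoid
    hiddenLayer = sigmoid(np.dot(x, wtsHidden) + biasHidden)
    # Hidden -> output: weighted sum plus bias, passed through sigmoid again
    return sigmoid(np.dot(hiddenLayer, wtsOutput) + biasOutput)

result = forwardPass(x, wtsHidden, biasHidden, wtsOutput, biasOutput)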
