Neuron explained using simple algebra

Parminder Singh
Published in Chingu
3 min read · Jan 15, 2017

This article might make neurons easier to understand for you.

Neural networks are built entirely on ‘transformations’. That means that, given some data, you can convert it into other data; the most popular method is to multiply it by a weight and add a bias.

The equation would look like:

Y = ( X * W ) + B

E.g. if you are given 2 as input and you want 5 as output, you can multiply 2 by 2 to get close to the expected output. This 2 can be called the weight that gets us near the correct answer. But it isn’t quite the correct answer, as 2 * 2 is 4, not 5… That is where we need a bias. If we take the bias as 1, it completes our equation.
2 * 2 + 1 = 5
But in most problems, we don’t have the values of the weight and bias. The neuron has to find the optimal values using the data it is given. We will use this equation for training our neuron!
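
In code, the whole transformation is a single line. Here is a minimal Python sketch with the hand-picked values from the example above (the variable names are just for illustration):

X = 2          # input
W = 2          # the weight we picked by hand
B = 1          # the bias that closes the gap
Y = X * W + B
print(Y)       # prints 5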

So, let’s see how the neuron will work:

We give it an input; for this situation it will be 2, and the expected output is 5. We have just 1 neuron at the moment, so we won’t need any matrices or vectors here.
To initialise W and B, give the weight a random value and set the bias to zero. For this example, let’s assume we set the weight to 1.

2 * 1 + 0 = 2, which is not equal to 5
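
In Python, this first attempt looks something like the sketch below (the names are just for illustration):

weight = 1.0   # our arbitrary starting weight
bias = 0.0     # bias starts at zero

def feed_forward(x):
    # run the input through the neuron with its current weight and bias
    return x * weight + bias

print(feed_forward(2))   # prints 2.0, not the 5 we want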

The neuron has the wrong weight and bias. What it needs to do now is optimize those values so that we get an acceptable answer.

This procedure of feeding input to a neural network is called “Feed forward”. Now we need to optimize the weight and bias so that the neuron works correctly.

Let’s assume that we have a function for that:

F(Wrong Weight, Wrong Bias, Input, Correct Output)
gives us (Correct Weight, Correct Bias)

What happens in this function?

This function will try to make the output from the current weight as close as possible to the correct output. Another thing it does is keep the value of the bias low and near zero, so that most of the transformation is done by the weight during the multiplication. There are many optimizer algorithms, like Gradient Descent and Adagrad, that are used in practice.
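
Our F above is a black box so far. As a rough illustration, here is what a plain gradient descent version of it could look like for our single neuron; it repeatedly nudges the weight and bias to shrink the squared error (the learning rate and step count are arbitrary choices for this sketch):

def optimize(weight, bias, x, target, lr=0.01, steps=2000):
    # toy gradient descent: repeatedly nudge weight and bias
    # in the direction that reduces the squared error
    for _ in range(steps):
        y = x * weight + bias        # feed forward with the current values
        error = y - target           # how far off we are
        weight -= lr * error * x     # gradient of 0.5 * error**2 w.r.t. weight
        bias -= lr * error           # gradient of 0.5 * error**2 w.r.t. bias
    return weight, bias

W, B = optimize(weight=1.0, bias=0.0, x=2, target=5)
print(2 * W + B)                     # very close to 5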

This function might give us 2 as W and 1 as B.
2 * 2 + 1 = 5!

Yay! Our little neuron just corrected itself; this correction step is called “Back Propagation”.
When passed any other value, this neuron will use the learned W and B, and will be able to solve many more problems!
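
Once the neuron has its W and B, using it on new inputs is just the same feed forward computation. A tiny sketch, assuming the W = 2 and B = 1 found above:

W, B = 2, 1

def neuron(x):
    return x * W + B   # the same feed forward computation, reused

print(neuron(3))    # 7
print(neuron(10))   # 21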

I might go in-depth in a future article, but here is a trailer to give an idea of what a neural network layer looks like ;)

Layer[1 x No of Neurons in layer] = (Input[1 x No of Inputs] * Weight[No of Inputs x No of Neurons in layer]) + Bias[1 x No of Neurons in layer]
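
As a rough NumPy sketch of that layer formula (the sizes here, 3 inputs and 4 neurons, are made up purely for illustration):

import numpy as np

n_inputs, n_neurons = 3, 4

X = np.random.rand(1, n_inputs)           # Input:  [1 x No of Inputs]
W = np.random.rand(n_inputs, n_neurons)   # Weight: [No of Inputs x No of Neurons]
B = np.zeros((1, n_neurons))              # Bias:   [1 x No of Neurons]

layer = X @ W + B                         # Layer:  [1 x No of Neurons]
print(layer.shape)                        # (1, 4)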

Looks similar to what you learned here, right? So start exploring!

If you liked the article, please Recommend, Follow, and Share! If you have any suggestions, leave a response and I will try to act on them!
