Introduction to Neural Networks

V Nohitha Reddy
4 min read · Dec 4, 2019


Neural networks are the heart of deep learning, and they roughly resemble the way our brain works. Let us look at a classification example for better understanding: credit card approval. Consider two parameters, age and income level. If the applicant is 18 or older and the income level is verified to be enough to repay the debt, the credit card is approved; otherwise it is rejected.

So, given a new applicant as input, should the card be approved or rejected?

We take previous data, plot it on a graph, and use it to predict the answer. Assume the previous data looks like the graph below, where the red region below the line is the rejected region and the blue region above the line is the approved region.

Credit card approvals and rejections from previous data
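To make this concrete, here is a minimal sketch that generates synthetic "previous data" and draws a separating line. The labeling rule (0.1·age + income − 9) and every number in it are made up purely for illustration, not taken from any real dataset:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic "previous data": age in years and an income score from 0 to 10.
age = rng.uniform(18, 70, 100)
income = rng.uniform(0, 10, 100)

# Made-up labeling rule: approved when a weighted combination of age
# and income clears a threshold (weights chosen purely for illustration).
approved = (0.1 * age + income - 9) >= 0

plt.scatter(age[approved], income[approved], c="blue", label="approved")
plt.scatter(age[~approved], income[~approved], c="red", label="rejected")

# The separating line 0.1*x1 + x2 - 9 = 0, rewritten as x2 = 9 - 0.1*x1.
xs = np.linspace(18, 70, 2)
plt.plot(xs, 9 - 0.1 * xs, "k-")
plt.xlabel("age")
plt.ylabel("income level")
plt.legend()
plt.show()
```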

A few points are incorrectly classified, and we ignore them. We humans can look at the graph and draw a line to classify the points, but how do computers do this? We have algorithms for it; let's get into them.

The general equation of the line drawn is w1x1 + w2x2 + b = 0. In vector notation it can be written as Wx + b = 0, where W = (w1, w2) and x = (x1, x2); the weights W and the inputs x are vectors, and b is the bias. Let y be the expected label and ŷ the predicted label. Here y can be 1 or 0, meaning approved or rejected. ŷ is 1 if Wx + b ≥ 0 (approved) and 0 if Wx + b < 0 (rejected). The aim of the algorithm is to make ŷ as close to y as possible.
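As a quick Python sketch of this decision rule (the weights W = (2, 1) and bias b = −10 are the illustrative values used later in this article, not learned ones):

```python
import numpy as np

def predict(W, x, b):
    """Return y_hat: 1 (approved) if Wx + b >= 0, else 0 (rejected)."""
    return 1 if np.dot(W, x) + b >= 0 else 0

W = np.array([2.0, 1.0])   # w1, w2 (illustrative values)
b = -10.0                  # bias

print(predict(W, np.array([4.0, 3.0]), b))  # 2*4 + 1*3 - 10 = 1 >= 0 -> 1
print(predict(W, np.array([1.0, 2.0]), b))  # 2*1 + 1*2 - 10 = -6 < 0 -> 0
```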

Higher dimensions

When we have 3 inputs (parameters), we work in 3 dimensions (x, y, z), and the boundary is a 2D plane separating the two regions, as shown in the figure below. The equation becomes w1x1 + w2x2 + w3x3 + b = 0. It is difficult to visualize more than 3 inputs, but the idea carries over: with n inputs we work in n dimensions, the boundary is an (n−1)-dimensional hyperplane, and the equation becomes w1x1 + w2x2 + … + wnxn + b = 0.

A 3D input space (3 inputs) in which the boundary is a 2D plane
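Nothing about the rule itself changes in higher dimensions; a dot product handles any number of inputs. A minimal sketch (weights again made up for illustration):

```python
import numpy as np

def predict(W, x, b):
    # np.dot computes w1*x1 + ... + wn*xn for any number of inputs.
    return 1 if np.dot(W, x) + b >= 0 else 0

# Three inputs: the boundary w1*x1 + w2*x2 + w3*x3 + b = 0 is a 2D plane.
W3 = np.array([1.0, -2.0, 0.5])
print(predict(W3, np.array([3.0, 1.0, 4.0]), b=-2.0))  # 3 - 2 + 2 - 2 = 1 >= 0 -> 1

# Ten inputs: same code, the boundary is now a 9-dimensional hyperplane.
W10 = np.ones(10)
print(predict(W10, np.ones(10), b=-5.0))  # 10 - 5 = 5 >= 0 -> 1
```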

Perceptron

The perceptron is the building block of a neural network, and its general form is shown below. The computation is kept in a node, labeled y here; the inputs to the node, x1 and x2, are also nodes, and in this example they are age and income level. The output is either 0 (false) or 1 (true).

General form of Perceptron

The bias can be given to the neuron as an input in two ways, as shown below. Consider the line 2x1 + x2 − 10 = 0. Here 2 and 1, the labels on the inputs, are the weights, and −10 is the bias. The bias has no input of its own; if we gave it 0 as an input, the bias term would vanish and only the products of inputs and weights would remain. So the bias input is taken as 1, which turns the whole equation into a single weighted sum.

Representation of bias in two ways
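Both representations compute the same thing. Here is a small sketch of the second one, where the bias −10 is treated as a weight w0 attached to a constant input of 1, so the whole equation collapses into one weighted sum:

```python
import numpy as np

# Line 2*x1 + 1*x2 - 10 = 0, with the bias folded in as weight w0 = -10
# attached to a constant input of 1.
w = np.array([-10.0, 2.0, 1.0])           # [w0 (bias), w1, w2]

def predict(x1, x2):
    x = np.array([1.0, x1, x2])           # prepend the constant bias input 1
    return 1 if np.dot(w, x) >= 0 else 0  # pure weighted sum, no separate + b

print(predict(4, 3))  # -10 + 8 + 3 = 1 >= 0 -> 1
print(predict(1, 2))  # -10 + 2 + 2 = -6 < 0 -> 0
```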

Here x1 to xn are the inputs and w1 to wn are their respective weights; 1 is the bias input and w0 is the bias. The weighted sum computes the linear equation of the line, and the step function (a threshold activation) turns it into an output of either 1 or 0, as shown below.
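Putting the weighted sum and the step function together gives a complete perceptron. A minimal sketch, again using the illustrative weights from the line above, this time scoring two applicants at once:

```python
import numpy as np

def step(z):
    # Step (threshold) activation: 1 where z >= 0, else 0.
    return (z >= 0).astype(int)

def perceptron(X, w, w0):
    # Weighted sum w0 + w1*x1 + ... + wn*xn for each row of X,
    # passed through the step function.
    return step(X @ w + w0)

w = np.array([2.0, 1.0])    # weights for age and income level (illustrative)
w0 = -10.0                  # bias

X = np.array([[4.0, 3.0],   # applicant 1
              [1.0, 2.0]])  # applicant 2
print(perceptron(X, w, w0))  # [1 0] -> approved, rejected
```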

Why is it called a “Neural Network”?

Perceptrons are similar to neurons in the human brain. Here, 3 inputs are given to the perceptron, which computes and outputs either 1 or 0. Similarly, dendrites take inputs (nerve impulses) into the neuron, which computes, and the output is passed on through the axon terminal. The output of one unit becomes the input to another. The structures of a neuron and a perceptron are shown below.

A neuron and a perceptron, respectively
