What is a Perceptron: A Beginner's Tutorial For Perceptron
The perceptron algorithm is used in supervised machine learning for classification. Classifiers come in two types: a linear binary classifier separates the data by drawing a straight line, while a non-linear binary classifier handles data that cannot be separated by a straight line.
Artificial Neuron
In today's world, time moves fast and so does the pace of invention. AI gives machines a new platform to think like the human brain, and the artificial neural network (ANN) plays a vital role here: it functions in much the same way as biological neurons do in humans. Put simply, an ANN takes two or more inputs with weighted values and combines them through a mathematical function to produce an output. Let's first see how biological neurons work.
Biological Neuron
The neuron is the fundamental processing unit of the human brain. When we sense some activity from the outside world, a signal is passed to the neurons. Once a neuron receives the signal, it produces the corresponding output, which is sent back as a response to the activity.
Perceptron
The block diagram illustrates a sequence of inputs X1, X2, …, Xn with their weights W1, W2, …, Wn. The perceptron first computes the weighted sum W1*X1 + W2*X2 + … + Wn*Xn and then passes that sum to the activation function, which produces the output.
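As a minimal sketch of this computation (the input and weight values below are made up purely for illustration, not taken from any dataset):

```python
# Illustrative inputs X1..Xn and weights W1..Wn (made-up values)
x = [1.0, 2.0, 3.0]
w = [0.4, -0.2, 0.1]

# Weighted sum: W1*X1 + W2*X2 + ... + Wn*Xn
weighted_sum = sum(w_i * x_i for w_i, x_i in zip(w, x))
print(weighted_sum)  # 0.4 - 0.4 + 0.3 = 0.3
```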
Activation Function
The activation function is the decision-maker of the neural network. For the perceptron it produces a binary output, which is why it is called a binary step function. A threshold is introduced here and compared against the weighted sum: if the weighted sum is at or above the threshold, the output is classified as 1 (True); otherwise it is classified as 0 (False).
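A minimal sketch of the binary step function, using a threshold of 0 as described above:

```python
def step(weighted_sum, threshold=0.0):
    """Binary step activation: 1 (True) at or above the threshold, else 0 (False)."""
    return 1 if weighted_sum >= threshold else 0

print(step(0.3))   # 1 -> classified as True
print(step(-0.7))  # 0 -> classified as False
```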
Bias
Bias allows you to shift the activation function by adding a constant (i.e. the given bias) to the input. Bias in Neural Networks can be thought of as analogous to the role of a constant in a linear function, whereby the line is effectively transposed by the constant value.
In a scenario with bias, the input to the activation function is ‘x’ times the connection weight ‘w0’ plus the bias times the connection weight for the bias ‘w1’. This has the effect of shifting the activation function by a constant amount (b * w1).
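A small sketch of this shifting effect, using made-up values for x, w0, b and w1 (they are assumptions for illustration only):

```python
def net_input(x, w0, b=1.0, w1=0.0):
    """Input to the activation function: x*w0 plus the bias input times its weight (b*w1)."""
    return x * w0 + b * w1

# Without bias the contribution is just x*w0; with bias it is shifted by b*w1.
print(net_input(0.5, w0=2.0))           # 1.0 (no bias contribution)
print(net_input(0.5, w0=2.0, w1=-1.5))  # -0.5 (shifted down by b*w1 = -1.5)
```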
With all of that explained, let me make it easier to understand with a real-world scenario. Most of us prepare tea for our loved ones, especially in the morning.
Take 'preparing a good cup of tea' as the objective, and think of the tea-making process as a perceptron.
- The very first step is to heat the water and pour the boiled water into the cup.
- Add the tea bag and sugar to get the perfect taste and aroma.
- Finally, stir the tea and remove the tea bag.
- If the output is good tea, no change is needed.
- If the output is bad, we need to propagate backward and change the quantity of sugar/water.
- Then check the output again.
Here the water is treated as the input, and the tea bag and sugar are the weights. If an error occurs in the output (say, bad tea), we need to adjust the weights accordingly.
Hopefully this real-world example helps you relate to how a perceptron works.
Backpropagation
Back-propagation is the essence of neural net training. It is the method of fine-tuning the weights of a neural net based on the error rate obtained in the previous epoch (i.e., iteration). Proper tuning of the weights allows you to reduce error rates and to make the model reliable by increasing its generalization.
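For the single perceptron covered in this article, the error-driven weight adjustment reduces to the simple perceptron learning rule used later in the Python section. A sketch, with an assumed learning rate eta:

```python
def update_weights(w, b, x, target, prediction, eta=0.1):
    """One error-driven update: nudge the weights and bias by eta * error * input."""
    error = target - prediction          # 0 if the prediction was correct
    w = [w_i + eta * error * x_i for w_i, x_i in zip(w, x)]
    b = b + eta * error
    return w, b

# If the prediction was wrong (e.g. target 1, prediction 0), the weights move
# toward the input; if it was right, the error is 0 and nothing changes.
w, b = update_weights([0.4, -0.2], 0.0, x=[1.0, 2.0], target=1, prediction=0)
print(w, b)  # [0.5, 0.0] 0.1
```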
Perceptron Types
Perceptron algorithms can be divided into two types: single-layer perceptrons and multi-layer perceptrons.
In a single-layer perceptron the neurons are organized in one layer, whereas in a multi-layer perceptron groups of neurons are organized in multiple layers. Every neuron in the first layer takes the input signal and sends a response to the neurons in the second layer, and so on.
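A minimal sketch of how a signal moves from one layer to the next in a multi-layer perceptron (the layer sizes, weights and biases below are made-up assumptions):

```python
def step(z):
    """Binary step activation."""
    return 1 if z >= 0 else 0

def layer_output(inputs, weights, biases):
    """Each neuron in the layer takes all inputs and produces one response."""
    return [step(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

inputs = [1.0, 0.5]
hidden = layer_output(inputs, weights=[[0.6, -0.4], [-0.3, 0.8]], biases=[0.0, -0.1])
output = layer_output(hidden, weights=[[1.0, -1.0]], biases=[-0.5])
print(hidden, output)  # responses of the first layer, then of the second
```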
Python Implementation
In this section, we will implement the simple perceptron learning rule in Python to classify flowers in the Iris dataset.
For the following example, we will load the Iris data set from the UCI Machine Learning Repository and only focus on the two flower species Setosa and Versicolor. Furthermore, we will only use the two features sepal length and petal length for visualization purposes.
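A minimal sketch of such an implementation is given below. It assumes the classic UCI file layout (sepal length in column 0, petal length in column 2, and the species name in the last column), with the first 100 rows being Setosa followed by Versicolor; treat the URL and column positions as assumptions rather than guarantees.

```python
import numpy as np
import pandas as pd


class Perceptron:
    """Simple perceptron classifier trained with the perceptron learning rule."""

    def __init__(self, eta=0.1, n_iter=10):
        self.eta = eta          # learning rate
        self.n_iter = n_iter    # passes over the training set (epochs)

    def fit(self, X, y):
        self.w_ = np.zeros(X.shape[1])   # feature weights
        self.b_ = 0.0                    # bias
        self.errors_ = []                # misclassifications per epoch
        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_ += update * xi
                self.b_ += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        return self

    def net_input(self, X):
        return np.dot(X, self.w_) + self.b_

    def predict(self, X):
        return np.where(self.net_input(X) >= 0.0, 1, 0)


# Load the Iris data from the UCI repository (URL and column layout assumed).
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url, header=None)

# Keep only the first 100 rows: 50 Setosa followed by 50 Versicolor.
y = np.where(df.iloc[:100, 4] == "Iris-setosa", 0, 1)
X = df.iloc[:100, [0, 2]].values   # sepal length and petal length

ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)
print("Misclassifications per epoch:", ppn.errors_)
```

For this pair of species the classes are linearly separable on these two features, so the number of misclassifications per epoch typically drops to zero within a few passes.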
Hope this article gives you a better understanding of the perceptron.
See you soon on my next article :)