Perceptron: Basics of a Simple Brain-like System

Vipin
4 min read · Jan 2, 2024

Neural networks are the backbone of modern artificial intelligence, mimicking the structure and function of the human brain. At the heart of these networks lies the perceptron, a fundamental building block that forms the basis for more complex neural architectures. In this article, we’ll delve into the workings of the perceptron, exploring its components, functions, and learning process.

Unpacking the Perceptron’s Blueprint

1. What is a Perceptron?

Fundamentally, a perceptron is the simplest type of neural network: it has only one neuron. This neuron processes incoming data and generates an output based on a set of learned parameters.

2. Weighted Sum and Activation Function

The neuron performs two primary functions: computing the weighted sum of its inputs and applying an activation function. The weighted sum is a linear combination: each input is multiplied by its corresponding weight, the products are summed, and a bias term is added:

z = w₁x₁ + w₂x₂ + … + wₙxₙ + b

Expressed in Python, this takes the form:

import numpy as np

X = np.array([1.0, 2.0])   # input vector (example values)
w = np.array([0.5, -0.3])  # weight vector (example values)
b = 0.1                    # bias
z = np.dot(w.T, X) + b     # weighted sum

Here, X denotes the input vector, w stands for the weight vector, b symbolizes the bias (y-intercept), and z emerges as the outcome.

3. Bias in Perceptron

Adding a bias lets the perceptron fit the data more closely. Without a bias, the decision line always passes through the origin (0, 0), which can result in a less accurate fit. Within a neural network, the bias becomes an additional weight that the neuron adjusts during training to minimize the cost function.

In the line equation y = mx + b, the bias b is the y-intercept: the point where the line crosses the y-axis.

4. Activating the Non-linearity

Both artificial and biological neural networks apply an activation function so that the output is not simply a linear copy of the input. The neuron fires only when its weighted sum surpasses a predetermined threshold, which injects non-linearity into the network.
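This firing behavior can be sketched as a simple step (threshold) activation, the one used in the classic perceptron; the threshold value here is an illustrative default:

```python
def step(z, threshold=0.0):
    # Classic perceptron activation: fire (output 1) only when the
    # weighted sum z exceeds the threshold; otherwise stay silent (0)
    return 1 if z > threshold else 0

print(step(-0.5))  # below the threshold: the neuron stays silent
print(step(0.3))   # above the threshold: the neuron fires
```

Modern networks usually replace this hard step with smooth functions such as the sigmoid or ReLU, but the purpose is the same: introducing non-linearity.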

Decoding the Learning Ritual of Perceptron

The perceptron embarks on a journey of trial and error, fine-tuning its weights to minimize predictive inaccuracies. The learning choreography unfolds through these stages:

  1. Feedforward Process: The neuron calculates the weighted sum and applies the activation function to make a prediction.
  2. Error Calculation: The output prediction is compared with the correct label to calculate the error.
  3. Weight Update: The weights are adjusted based on the error. If the prediction is too high, the weights are modified to make a lower prediction next time, and vice versa.
  4. Repeat: Steps 1–3 are repeated iteratively, and the neuron continues updating weights to improve predictions until the error is close to zero.
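These four stages can be sketched as a minimal training loop. The toy dataset (logical AND), learning rate, and epoch count below are illustrative choices, not from a specific experiment; the update follows the classic perceptron learning rule:

```python
import numpy as np

def step(z):
    # Step activation: fire when the weighted sum is positive
    return 1 if z > 0 else 0

# Toy linearly separable data for illustration: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights, initialized to zero
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = step(np.dot(w, xi) + b)   # 1. feedforward
        error = target - pred            # 2. error calculation
        w += lr * error * xi             # 3. weight update
        b += lr * error
    # 4. repeat: loop again until the errors vanish

preds = [step(np.dot(w, xi) + b) for xi in X]
print(preds)  # the trained perceptron reproduces the AND labels
```

After convergence, the learned weights and bias are all that need to be saved to classify new, unseen inputs.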

This learning logic allows the perceptron to adapt to different patterns and make accurate predictions over time. Once the training is complete, the optimized weights are saved for future use in scenarios where the outcome is unknown.

Can just one brain cell solve hard problems?

A single perceptron works well when the data is linearly separable, meaning the training examples can be separated by a straight line. But life isn’t always that simple. What happens when we have a more complex dataset that cannot be separated by a straight line?

In a nonlinear dataset, no single straight line can separate the training data. A network with two perceptrons can produce two decision lines, which together carve out regions that a single line cannot.
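As a sketch of that idea, here is a tiny hand-wired network: two hidden perceptrons each draw one decision line, and a third perceptron combines them to solve XOR, a classic problem that no single line can separate. The weights are hand-picked for illustration, not learned:

```python
def step(z):
    # Perceptron step activation
    return 1 if z > 0 else 0

def two_line_xor(x1, x2):
    # Each hidden perceptron defines one decision line
    h1 = step(x1 + x2 - 0.5)   # fires when at least one input is on
    h2 = step(-x1 - x2 + 1.5)  # fires unless both inputs are on
    # The output perceptron keeps only the region between the two lines
    return step(h1 + h2 - 1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, two_line_xor(a, b))
```

Stacking perceptrons this way is exactly how multi-layer networks gain the power to model nonlinear boundaries.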

In conclusion, the perceptron serves as a foundational element in the world of neural networks, showcasing the iterative learning process that enables machines to make decisions and predictions based on input data. Understanding the intricacies of the perceptron is crucial for grasping the broader concepts of artificial intelligence and machine learning.
