DATA SCIENCE THEORY | NEURAL NETWORKS | INTRODUCTION

Neural Networks-Part(1): Introduction to Neuron and Single Neuron Neural Network

From a biological to an artificial neural network

Aamir Ahmad Ansari
Low Code for Data Science

--

What does this image say? Connections, networks, complexity: it conveys all of that, doesn't it?

In this series, we'll learn about something similar: Neural Networks. Neural nets originally reside in the human brain. They are very complex and powerful networks of billions of neurons connected to each other by trillions of synapses, of which the above image is a loose resemblance. We used two new terms here: neuron and synapse. Neurons are the fundamental units of a neural network and are often referred to as information messengers, as they send signals to and receive signals from other neurons. These signals are received through the dendrites and processed in the cell body, which contains the nucleus. The processed signal is then carried towards the synapses by the axon. In the last part, the synapses establish connections to other neurons and complete the signal transmission.

Figure 1: Neuron Anatomy.

The above illustration depicts the structure of a neuron and provides information on its anatomy; I hope that what was written above makes more sense now. An obvious question is: how does this help us? The function of a neuron is to learn from experience and make decisions based on it, be it for sensory functions like smelling or tasting, or for the motor functions of the human body. The objective is to learn, improve and decide. Now, that's something useful for us: aren't we trying to make our machines learn from data and predict outcomes for us? This whole mechanism is similar to how our own brain works, and today we will understand how we can replicate the elegant and complex model of our brain according to our needs.

We’ll discuss the following:

  1. Interpretation of a Neuron
  2. Building a Single Neuron Neural Network

Note. An implementation with Keras will be discussed in future articles.

1. Interpretation of a Neuron

In Figure 1, we saw the anatomy of a nerve cell, but how do we convert it into a representation that we can work with?

Figure 2: Simple representation of the neuron for computation use cases.

Figure 2 represents a simple interpretation of the neuron for our computation use cases. The dendrites become the inputs. The cell body, which in data science is often simply called the neuron, is the main hub of computation for us. It performs the following functions:

a) It multiplies each input by a specific weight (the weights are adjusted by minimizing a loss function) and sums all the weighted inputs.

b) This weighted sum is then passed to an activation function, a function that transforms the data. The one used above is the sigmoid function, which we'll discuss in a later article.

Then, the output is represented by the axon. The bigger picture is the same as for machine learning algorithms, as they usually entail three main steps: data input, learning and decision making. I excluded EDA, data cleaning, transformation and feature engineering on purpose, just to give an idea of the similarity. Though the bigger picture may be the same, the functioning of the two concepts is a little different.
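To make steps a) and b) concrete, here is a minimal sketch in Python with NumPy. The function names and toy values are mine, chosen purely for illustration, and I have included the bias term that we will introduce properly a bit later:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, weights, bias):
    # a) multiply each input by its weight and sum them (plus a bias term)
    z = np.dot(x, weights) + bias
    # b) pass the weighted sum through the activation function
    return sigmoid(z)

# Toy values: three inputs (the "dendrites"), arbitrary weights and bias
x = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2
print(neuron(x, weights, bias))  # a single value in (0, 1), the "axon" output
```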

Note. What a model trained with a classical machine learning algorithm can do, a single neuron can do.

2. Building a Single Neuron Neural Network

Now that we are clear on our interpretation of a neuron, we shall look in more detail at how a neural network is built and how it works. We'll learn this using the example of logistic regression, like we did in Logistic Regression in Machine Learning. However, this time we will build a neural network made of a single neuron to obtain predictions.

Let the predictor matrix be X, with n predictors and m observations:

X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}

and the weight vector \beta, with n weights:

\beta = \begin{bmatrix} \beta_1 & \beta_2 & \cdots & \beta_n \end{bmatrix}^{T}

The first step is to feed an input to a neuron. You can provide a single input or multiple inputs to a neuron. We will provide a 2D array of m observations and n features. In future articles, we will build complex neural networks with many layers and many neurons in each layer.

Figure 3: Feeding an Input to a Neuron.
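Here is how such an input could look in code: a minimal sketch assuming NumPy, with toy sizes I chose for illustration.

```python
import numpy as np

# Toy sizes chosen for illustration: m = 4 observations, n = 3 predictors
m, n = 4, 3

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(m, n))   # predictor matrix, one row per observation
beta = rng.normal(size=n)     # one weight per predictor, arbitrary at first

print(X.shape, beta.shape)    # (4, 3) (3,)
```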

Awesome, we have given our input! The next step happens inside the neuron, where we combine the weights with the inputs and add a bias term.
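In symbols, with b denoting the bias term, the neuron computes the following for all observations at once:

z = X\beta + b, \qquad z_i = \sum_{j=1}^{n} \beta_j x_{ij} + b \quad \text{for } i = 1, \dots, m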

At first, we choose the weights arbitrarily; then, as we minimize our loss, we adjust them. This is one of the functions of our neuron. The other is to transform the data. We do that by using an activation function, and since we are using a logistic regression example, we would like our output to lie in the range (0, 1). So what function did we use back then for that? The sigmoid function. We pass the weighted input to the sigmoid function, which delivers our final output.
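For reference, the sigmoid activation and its output range are:

\hat{y} = \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \sigma(z) \in (0, 1)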

Note. Minimizing the error is an iterative process. Hence, the weights will be updated in each iteration.

The output can be interpreted as a probability, and by choosing a threshold (commonly 0.5) we can turn it into class predictions. A complete sketch, from training to prediction, is shown below.
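To tie everything together, here is a minimal end-to-end sketch under the assumptions of this example: a single sigmoid neuron trained with gradient descent on binary cross-entropy (the gradient expressions are the standard logistic regression ones, not derived in this article), with a 0.5 threshold for the final predictions. The function names and toy data are mine, for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_single_neuron(X, y, lr=0.1, epochs=1000):
    """Fit a single sigmoid neuron (logistic regression) with gradient descent."""
    m, n = X.shape
    beta = np.zeros(n)   # weights chosen arbitrarily at first (here: zeros)
    b = 0.0              # bias term
    for _ in range(epochs):
        y_hat = sigmoid(X @ beta + b)    # forward pass through the neuron
        error = y_hat - y                # gradient of binary cross-entropy w.r.t. z
        beta -= lr * (X.T @ error) / m   # weights updated in each iteration
        b -= lr * error.mean()           # bias updated in each iteration
    return beta, b

# Toy data: 4 observations, 2 predictors; labels are hypothetical
X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [3.0, 0.5]])
y = np.array([0, 0, 1, 1])

beta, b = train_single_neuron(X, y)
probs = sigmoid(X @ beta + b)        # outputs interpretable as probabilities
preds = (probs >= 0.5).astype(int)   # a 0.5 threshold turns them into predictions
print(probs, preds)
```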

That is it for today, thank you very much!
