Artificial Neural Network - A Brief Introduction

The brain is a fundamental part of the human body. It is a biological neural network that receives inputs in the form of signals, processes them, and sends out output signals. The fundamental unit of the brain is the neuron. The AI expert Maureen Caudill defines an ANN as “a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

The human brain consists of roughly 86 billion neurons, and each neuron is formed from four basic parts: dendrites, a soma (cell body), an axon, and synapses. The neuron collects signals through its dendrites, the soma sums up all of the collected signals, and when the summation reaches a threshold the signal passes along the axon to other neurons. The synapses indicate the strength of the interconnections between neurons.

Similar to the brain, an Artificial Neural Network imitates this biological neural network of the human body. The first ANN model was created many years ago, in 1943, by the neurophysiologist Warren McCulloch and the logician Walter Pitts. They created a computational model for neural networks based on threshold logic, a logic grounded in mathematics and algorithms, but because of the limited technology available at that time the idea did not become an immediate success. An ANN is formed from artificial neurons, built from silicon and wires or simulated in software, which imitate biological neurons, and from interconnections whose strengths are expressed as coefficients (weights). The knowledge of an ANN is stored within these inter-neuron connection strengths, known as synaptic weights. The units of an ANN are strongly interconnected so that the network can be applied to specific problems, and ANNs are highly useful in areas such as pattern recognition, data classification, and clustering. They can derive information from complicated sources that are difficult for both machines and humans to interpret, and they can find complex patterns hidden within such data.

Normally, computer programs are defined commands that always execute according to the programmer's instructions. If the programmer does not know how to solve the problem in some situation, then the solving capability of the program is restricted. An ANN, like the brain, learns through examples and experience rather than from predefined commands. It automatically learns from examples and experience by itself and then applies what it has learned to new cases. Computers therefore become more capable, because they can tackle problems whose solutions are not known to a human programmer. Through this, ANNs help in pattern recognition and data classification tasks.

The structure of an artificial neuron can be explained as follows.

Structure of an Artificial Neuron

In a neural network, the neurons are arranged into multiple layers. Each layer is connected to the layers on either side of it: the layer on one side receives the input signals that the network needs in order to learn or process, and the layer on the other side produces the output, the network's response to that information. Between these two layers are the hidden units. The interconnections between the hidden units, the output units, and all the other units in the layers are known as weights. A weight indicates the dependency, or connection strength, between two units.
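To make this layered structure concrete, here is a minimal sketch in Python with NumPy; the layer sizes (3 input, 4 hidden, 2 output units) and the random weights are arbitrary choices for illustration, not values from the text:

```python
import numpy as np

# A tiny illustrative network: 3 input units, 4 hidden units, 2 output units.
# Each weight matrix holds the connection strengths between two adjacent layers.
rng = np.random.default_rng(0)
weights_input_to_hidden = rng.normal(size=(3, 4))   # one weight per input-hidden pair
weights_hidden_to_output = rng.normal(size=(4, 2))  # one weight per hidden-output pair

x = np.array([0.5, -1.0, 2.0])                 # an input signal presented to the input layer
hidden = np.tanh(x @ weights_input_to_hidden)  # hidden units combine the weighted inputs
output = hidden @ weights_hidden_to_output     # the output layer produces the response
print(output)
```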

For each processing element in a layer of an ANN, every input is multiplied by an initially assigned weight, and the products are summed to create the internal value of the operation. This value is further adjusted by an initially generated threshold value and passed to an activation function, which maps it to an output. The output of that function is then sent as input to the next layer, or becomes the final response of the network if the layer is the last one. The weights and threshold values are repeatedly adjusted to produce correct, more accurate values.
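As an illustration of that per-neuron computation, the following is a minimal sketch in Python; the specific inputs, weights, threshold value, and the choice of a sigmoid activation are assumptions made for the example:

```python
import numpy as np

def sigmoid(z):
    """A common activation function that maps any value into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, threshold):
    # Each input is multiplied by its weight and the products are summed;
    # the threshold shifts that internal value, and the activation function
    # maps the result to the neuron's output.
    internal_value = np.dot(inputs, weights) - threshold
    return sigmoid(internal_value)

inputs = np.array([0.8, 0.2, 0.5])
weights = np.array([0.4, -0.6, 0.9])   # initially assigned connection strengths
threshold = 0.1
print(neuron_output(inputs, weights, threshold))
```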

Training an ANN

As mentioned above, an ANN learns through past experience, so it must be trained. Training an ANN is carried out by adjusting all of the weights using two techniques known as forward propagation and back propagation. In forward propagation, sample inputs are fed into the ANN and the corresponding outputs are recorded; the inputs are fed in and the outputs for those inputs are received. In back propagation, as the name suggests, the process works from the output units back through the hidden units to the input units: considering the error margin of the output received at each layer, the weights are adjusted in order to reduce that error.

The trainer of the ANN already has the correct output values for the inputs, so after receiving the network's output, the trainer checks whether the expected output and the output produced by the ANN are the same. If not, an error value is calculated and sent back into the ANN. At each layer this error value is examined and used to modify the thresholds and weights before the subsequent input is presented. Through this process the error margin is gradually reduced, and the ANN learns to analyse the inputs and produce accurate results.
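A minimal sketch of that training cycle is given below, assuming a tiny one-hidden-layer network, a sigmoid activation, a squared-error measure, and the classic XOR problem as the training examples; the layer sizes, learning rate, and number of passes are arbitrary choices for illustration, not values from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Toy training data: the XOR problem, with the correct outputs known in advance.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))          # hidden thresholds (biases)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
learning_rate = 0.5

for epoch in range(10000):
    # Forward propagation: inputs are fed in and outputs are produced.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # The error between the produced output and the known correct output.
    error = output - y

    # Back propagation: the error is sent backwards through the layers and
    # used to adjust the weights and thresholds so the error shrinks.
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(output, 2))   # the outputs should approach the expected 0, 1, 1, 0
```

Running the sketch for enough passes drives the printed outputs towards the expected 0, 1, 1, 0 pattern, which is exactly the gradual reduction of the error margin described above.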

There are two artificial neural network topologies: the feed-forward ANN and the feedback ANN. In a feed-forward ANN, when a unit sends a signal to another unit, the sending unit never receives a signal back from the receiving unit. This topology is used in classification, pattern recognition, and similar tasks. In a feedback ANN topology, signals are also allowed to flow backwards, as sketched below.
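As a rough sketch of the difference between the two topologies (the sizes and random weights here are illustrative assumptions): in the feed-forward case the signal passes straight through the layers, while in the feedback (recurrent) case the hidden units also receive their own previous activity:

```python
import numpy as np

rng = np.random.default_rng(2)
w_in = rng.normal(size=(2, 3))    # input -> hidden weights
w_rec = rng.normal(size=(3, 3))   # hidden -> hidden feedback weights
w_out = rng.normal(size=(3, 1))   # hidden -> output weights

x = np.array([[1.0, -0.5]])

# Feed-forward: the signal passes straight through; no unit hears back
# from the units it sent its output to.
hidden = np.tanh(x @ w_in)
print(hidden @ w_out)

# Feedback (recurrent): the hidden units also receive their own previous
# state, so signals flow around a loop from one step to the next.
state = np.zeros((1, 3))
for _ in range(3):
    state = np.tanh(x @ w_in + state @ w_rec)
print(state @ w_out)
```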

As mentioned earlier, the weights in an ANN indicate the strength of the interconnections between units. When an input is fed into the network, each of its values is multiplied by the corresponding weight and sent on to the next layer of units. Inside the artificial neuron all of the weighted inputs are summed up. In principle this sum is unbounded, so to limit the response to a preferred range, a value called the threshold is set up.

In order to obtain the desired output, a function called the activation function is used. Both linear and non-linear activation functions exist. The binary step function, the linear function, and non-linear functions such as the sigmoid and the hyperbolic tangent (tanh) are the most commonly used activation functions.
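For reference, here is a minimal sketch of those commonly used activation functions in Python with NumPy; the threshold and slope parameters are illustrative assumptions:

```python
import numpy as np

def binary_step(z, threshold=0.0):
    # Outputs 1 when the summed input reaches the threshold, otherwise 0.
    return np.where(z >= threshold, 1.0, 0.0)

def linear(z, slope=1.0):
    # Simply scales the summed input.
    return slope * z

def sigmoid(z):
    # Non-linear; squashes any value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Non-linear; squashes any value into the range (-1, 1).
    return np.tanh(z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for fn in (binary_step, linear, sigmoid, tanh):
    print(fn.__name__, np.round(fn(z), 3))
```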

In conclusion, it can be argued that Artificial Neural Networks are set to become a leading technology in the present and future world, one that makes computers more human-like, and perhaps even more than human!