Neurons — The Nuts and Bolts of our Intelligence

Sterin Thalonikkara Jose
Published in Analytics Vidhya
7 min read · Sep 19, 2020
Neurons — Image by Gerd Altmann from Pixabay

The focus of this article is the physiological aspect of our intelligence. The interconnections of cells in the brain, and their complexities, shape the behavior of an organism and its responses to its environment. Our brains are an ensemble of special cells known as neurons. A single neuron is the basic information-processing building block. Neurons number in the billions and are spread across the human body: they collect inputs from the sensory organs, decide whether a piece of information is worth passing on to the next level, carry it all the way to the brain, relate it to the brain’s previous experiences, build relations between new information and existing knowledge, and store it for later use.

Understanding the operating mechanism of a neuron matters to Artificial Intelligence because it underpins the basic mathematical models of human consciousness and intelligence. Information processing has had its own story to tell throughout history, as seen in the technologies of various eras. In the realm of AI, it is both intuitive and indispensable to think along the lines of how our brain is modelled and constructed. The nervous system performs many functions; for instance, the sensory and motor neurons form a feedback loop that guides our actions, constituting a closed-loop Control System.

Control Systems

The study of Control Systems has been part of the design and implementation of any automated engineered machinery. The idea is to regulate the behavior of a system in light of the environmental variables it is subject to. Our intelligence itself may be viewed, at a high level, as belonging to a very complicated Control System.

A Control System is a loopback system that adjusts its output based on changes in environmental variables. A simple example is an air conditioner thermostat: the required temperature is preset, and the compressor is switched ON or OFF based on the ambient temperature reported by a temperature sensor, thereby holding the temperature at the preset value. Many current appliances are far more sophisticated than a thermostat. However, their ‘intelligence’ is limited to fine-tuning or generating desired outputs based on inputs, initial states, ambient conditions, and output-generation criteria.
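The thermostat loop described above can be sketched in a few lines of Python. This is a minimal on/off (bang-bang) controller; the function name, the hysteresis band, and all temperature values are illustrative assumptions, not part of the original article.

```python
# A minimal sketch of the thermostat feedback loop: sense -> compare -> act.
# All names and numeric values here are invented for illustration.

def thermostat_step(ambient_temp, setpoint, compressor_on, hysteresis=0.5):
    """Decide the compressor state from the sensed temperature.

    A small hysteresis band prevents rapid on/off cycling around the setpoint.
    """
    if ambient_temp > setpoint + hysteresis:
        return True    # too warm: switch the compressor ON
    if ambient_temp < setpoint - hysteresis:
        return False   # cool enough: switch the compressor OFF
    return compressor_on  # within the band: keep the current state

# A few iterations of the closed loop as the room cools toward the preset.
state = False
for sensed in [26.0, 25.2, 24.3, 23.2]:
    state = thermostat_step(sensed, setpoint=24.0, compressor_on=state)
```

The loopback structure is the essential point: the output (compressor state) feeds back into the next decision through its effect on the environment.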

Basic Control System — Image Source

Functionally, a neuron is an information-processing unit. Each neuron has independent decision-making capability, like a function in a program (a neuron is an integrated functional unit). It listens for events through its dendrites, decides whether (and how) to act on the reported events, and, if so, alerts the next-level neurons in the network. One major difference: all neurons work in parallel, not sequentially like most computer programs.

Another capability of a neuron is processing ‘analog’ information. Inputs are continuous in nature; a neuron receives numerous inputs from other neurons and, based on their intensity, fires its output to next-level neurons via its axon. Some sensory neurons are connected to the brain via the spinal cord over fast pathways, so the brain’s decision can be executed by the relevant organs with minimal delay. The parallel processing of information by these neurons makes a great deal of difference between biological organisms and machines.
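The “sum continuous inputs, fire only when they are strong enough” idea can be sketched very simply. The threshold and the input intensities below are illustrative assumptions, not physiological constants.

```python
# A hedged sketch of analog integration in a neuron: many continuous inputs
# are combined, and the neuron fires only when their total intensity
# crosses a threshold. Values are invented for illustration.

def neuron_fires(synaptic_inputs, threshold=1.0):
    """Return True when the summed input intensity reaches the threshold."""
    return sum(synaptic_inputs) >= threshold

# Several weak inputs can add up to a firing decision,
# while fewer of the same inputs stay below threshold.
many_weak = [0.4, 0.4, 0.3]   # fires
too_few   = [0.2, 0.3]        # does not fire
```

This is, of course, a drastic simplification; its point is only that the decision depends on the combined intensity of inputs, not on any single one.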

Parts of a Neuron — Image Source

Working Mechanism of a Neuron

A neuron has three main parts.

  • The soma, or cell body, directs all activities of the neuron.
  • Dendrites are fibers that extend out from the cell body and receive messages from other nerve cells.
  • The axon is a long, single fiber that transmits messages from the cell body to the dendrites of other neurons or to other body tissues, such as muscles.

Messages travel along a single neuron as electrical impulses, but messages between neurons travel differently. The transfer of information from neuron to neuron takes place through the release of chemical substances into the gap between the axon of one neuron and the dendrites of others. These chemicals are called neurotransmitters, and the process is called neurotransmission. The gap between an axon and a dendrite is called a synapse; it is about 20 nanometers wide. Thus, the terminal button of one neuron’s axon drives the dendrite of a second neuron toward its action potential.

Neurons take in information via these synapses. Based on the frequency of information transfer, some neurons become ‘permanently’ connected to other neurons, creating a network. As the frequency of information reduces, the binding between neurons also becomes weak; with no information transfer at all, the neurons become ‘disconnected’ from each other. Hence the network is ‘formed’ or ‘detached’ by the frequency of information transfer and the firing of neurons. This is why we use the phrase ‘muscle memory’ for repeated activities.

Firing of Neurons — Image Source

The Perceptron

A perceptron is the simplest computational model of a biological neuron, and perhaps the first AI machine ever conceived. The perceptron algorithm was invented in 1958 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, though it was originally intended to be implemented in hardware rather than software. A perceptron implements a binary classifier: a threshold-function-based logical comparator.

Perceptron Algorithm — Image Source

where w·x, the dot product of the weight vector w and the input vector x, is the weighted sum of the individual feature values (feature channels are modelled as components of the input vector x), and b is the bias offset. The value of f(x) defines the decision boundary (the straight line in the following example) and is used to classify a test input x (the size of the animal in the example) as falling either ‘inside’ or ‘outside’ a decision region (domesticable or non-domesticable in the example). If b is negative, the weighted combination of inputs must exceed |b| (the magnitude of b) to push the classifier neuron over the decision boundary. Spatially, the bias shifts the position (though not the orientation) of the decision boundary.

Binary Classifier — Image Source

The binary classifier function (the ‘domesticability’ of an animal based on its size) is represented by the straight line in the graph. The single feature channel in this case is the animal’s size. The red points are non-domesticable and the blue ones domesticable, according to the classifier function definition.
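The single-feature case described above fits in a few lines of Python. This is a minimal sketch of the classic perceptron learning rule; the sizes, labels, learning rate, and the convention that small animals are domesticable are all invented for illustration and are not from the original figure.

```python
# A minimal perceptron for one feature channel (animal size), in the spirit
# of the example above. Data and hyperparameters are illustrative.

def perceptron_predict(x, w, b):
    """Threshold unit: output 1 if w*x + b > 0, else 0."""
    return 1 if w * x + b > 0 else 0

def perceptron_train(samples, labels, lr=0.1, epochs=20):
    """Classic perceptron learning rule for a single feature."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - perceptron_predict(x, w, b)
            w += lr * error * x   # nudge the weight toward the correct output
            b += lr * error       # nudge the bias as well
    return w, b

# Sizes in arbitrary units; label 1 = domesticable, 0 = non-domesticable.
sizes  = [0.5, 1.0, 1.5, 4.0, 5.0, 6.0]
labels = [1,   1,   1,   0,   0,   0]
w, b = perceptron_train(sizes, labels)
```

After training, the learned w and b place the decision boundary between the small and large animals, which is exactly the straight line in the graph: the bias b shifts where the boundary sits, as the text notes.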

Neural Networks — Biological vs Deep Learning

AI relies heavily on Neural Networks to perform learning behaviors. The idea is to pass new inputs through layers of neurons whose weights (coefficients) were set by training, matching previously learned information against new data (Informed AI). Each node (a perceptron) in a neural network applies a weight coefficient to each of its inputs. Even though these concepts are inspired by the human brain, Artificial Neural Networks do not behave like the neural networks in the brain. The ‘perceptrons’ in the layers of an artificial network are evaluated layer by layer, sequentially, to identify the information.
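The layer-by-layer evaluation just described can be sketched as follows. The network, its weights, and the XOR task are illustrative assumptions; the point is only that each layer must finish before the next can begin.

```python
# A hedged sketch of sequential, layer-by-layer evaluation in a tiny
# feedforward network of threshold units. Weights are hand-picked for
# illustration, not learned.

def step(x):
    """Hard-threshold activation, in the spirit of the perceptron."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """Each node weights its inputs, adds a bias, and fires (or not)."""
    return [step(sum(w * i for w, i in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

def forward(inputs, layers):
    """Evaluate layers strictly one after another."""
    signal = inputs
    for weights, biases in layers:
        signal = layer(signal, weights, biases)  # next layer waits on this one
    return signal

# A two-layer network computing XOR of two binary inputs — a classic task
# that no single perceptron can solve, but two sequential layers can.
xor_net = [
    ([[1, 1], [-1, -1]], [-0.5, 1.5]),  # hidden layer: OR and NAND
    ([[1, 1]], [-1.5]),                 # output layer: AND of the hidden outputs
]
```

Note the contrast with the biological picture: here every node in every layer is computed on every input, whereas the brain selectively fires only the relevant neurons along a path.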

Unlike sensory neurons (those that perceive the five senses), a neuron in the brain is connected to many other neurons in three dimensions, and information propagates through this huge network, firing all the related neurons along its path. The advantage of this approach is that only the necessary neurons, those whose behavior maps to the input data, fire. This mesh topology permits very high levels of connectivity. Our brain is said to have about a hundred billion neurons.

The emergent behavior of these billions of neuronal centers is a state of ‘Awareness’.

Firing of multiple neurons in the brain — Image Source

Each neuron is a semi-autonomous entity that can process information and take decisions within its capacity, without guidance or outside interference. This means there are billions of tiny, self-sufficient, interconnected parallel processors forming a huge mesh, taking decisions based on experience, firing, and coloring our imaginations.

Could the parallel execution of biological neurons, and the selective triggering of neurons along related paths, make all the difference between a brain and a computer?

To answer that question, we need to understand the working of the human brain itself. We will go over the details of the human brain and its workings next week.

Next week: Brain and Neuroscience

Previous week: The Adaptive-Habitual Intelligence

First week: Can Machines Think?


My friend Roshan Menon and I are researching the subject “Thinking Machines” and possibilities to make one. We would like to pen down our thoughts here.