AI — Taking inspiration from nature

Nishant Rastogi · Published in Firing Neurons · 6 min read · Jul 29, 2019

Throughout the evolution of AI, there have been numerous attempts to find inspiration in the biological basis of intelligence. Decades of research have gone into perfecting the mathematics behind powerful algorithms, yet time and again researchers have also looked to nature for ideas on how to build the next generation of intelligent systems. These explorations of the way intelligence has evolved in nature have produced phenomenal results. This post covers some of the most popular nature-inspired AI algorithms.

ARTIFICIAL NEURAL NETWORKS

Artificial Neural Networks (ANNs) take their inspiration from how the brain functions. Just like the human brain, neural networks make use of neurons, axons, synapses and dendrites to transfer signals from the input layer of the network to the output layer. Neurons can be considered simple computing cells and are fundamental to the operation of ANNs, which connect a large number of them together. Axons are the transmission lines, or nerves, through which signals travel between neurons. Synapses are the nerve endings, and dendrites are the receptive zones of the neurons.

The key function of a neuron is to apply weights to the input signals received from other neurons, sum them, add a bias, and apply an activation function to limit the amplitude of the output. Once the signal has travelled through the network and reached the output layer, it is compared with the desired output using a cost function, and an error signal is fed back through the network. The network then adjusts the weights applied to the signals and propagates an updated signal towards the output layer. This process continues until the error is reduced to acceptable limits. At that point, training is complete and the network is ready to make predictions on unseen data.
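To make this concrete, here is a minimal sketch of a single neuron and one error-driven weight update. The function names, the sigmoid activation, and the delta-rule update are illustrative choices, not part of the original post:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, squashed by a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid limits the output to (0, 1)

def update_weights(inputs, weights, bias, target, lr=0.1):
    """One step of the delta rule: nudge weights to reduce the output error."""
    y = neuron(inputs, weights, bias)
    err = target - y                      # the error signal fed back
    grad = err * y * (1 - y)              # error scaled by the sigmoid derivative
    new_w = [w + lr * grad * x for w, x in zip(weights, inputs)]
    new_b = bias + lr * grad
    return new_w, new_b

# A neuron with two inputs
out = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(out, 3))  # → 0.574
```

Repeating `update_weights` over many examples is, in spirit, the feedback loop described above; real networks do the equivalent across many layers via backpropagation.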

A large number of ANN variants have been developed for specific use cases. For example, Convolutional Neural Networks (CNNs) take inspiration from the visual cortex and are primarily used for image classification, Long Short-Term Memory (LSTM) networks are used for natural language processing, and auto-encoders are used for image reconstruction, noise removal and more. The list goes on. The key difference among these variants is how the network layers are arranged and how signals traverse the layers in the forward and backward directions.

GENETIC ALGORITHMS

Genetic algorithms are optimization algorithms that take inspiration from evolution in nature. Nature optimizes the fitness of a species over succeeding generations through the propagation of genes. Similarly, genetic algorithms work by evolving successive generations of genomes that become progressively fitter. Just as in natural evolution, the key processes in a genetic algorithm are Selection, Crossover and Mutation.

An initial population of genomes is randomly generated. Selection involves testing the fitness of the genomes using a fitness function. The weak genomes (with respect to the chosen fitness function) are discarded, and the strong genomes proceed to the next stage, called crossover.

Crossover is analogous to reproduction in nature: it generates two new genomes from two existing ones. Crossover starts by selecting two fit genomes and choosing a random position in their genome strings. This position is called the crossover point, and the parts of the genome strings are swapped there, so each resulting genome carries a piece of genetic code from each parent. However, as in nature, there is no guarantee that crossover will produce child genomes that are better than their parents.

Sometimes, when all genomes in a generation are very similar, little improvement is seen in the next generation. Mutation helps in these scenarios. In mutation, random parts of a genome are changed to produce a new genome that is drastically different from the rest of its generation. As in nature, mutation is applied rarely, and its results cannot be predicted.

Genetic algorithms find application in optimization problems, training neural networks, image processing and so on.

SWARM/COLLECTIVE INTELLIGENCE

Swarm intelligence is a group of optimization algorithms that take inspiration from how collective intelligence works in nature. Some of evolution's most effective results are social insects like bees, termites and ants. While any one of these insects may not appear very intelligent, they cooperate as a group to solve complex problems, such as finding an optimal route to a source of food.

Consider how ant colonies find the best route to a food source. Ants roam around until one of them finds food. That ant leaves a trace of a chemical called pheromone on its way back to the colony. The pheromone trace works as a guiding mechanism for other wandering ants, which follow it to the same food source and also leave pheromone on their return trips. As a result, the trail grows stronger as more ants use the same path, and eventually all the ants converge on it. Some applications of swarm intelligence include ant-based routing and crowd simulation.
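This positive-feedback loop can be sketched with two candidate paths and a pheromone level on each. The version below is a deterministic "mean-field" simplification (each step distributes the whole colony across the paths in proportion to pheromone, rather than simulating individual ants); the path lengths, evaporation rate, and deposit amount are illustrative assumptions:

```python
# Two candidate paths to the food source; a shorter path means more
# round trips per unit time, so it receives more pheromone per step.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION, DEPOSIT = 0.1, 1.0

for _ in range(200):
    total = sum(pheromone.values())
    # Fraction of the colony on each path, proportional to its pheromone.
    shares = {p: v / total for p, v in pheromone.items()}
    for p in pheromone:
        # Old pheromone evaporates; returning ants deposit fresh pheromone,
        # with shorter paths reinforced more often.
        pheromone[p] = (1 - EVAPORATION) * pheromone[p] + shares[p] * DEPOSIT / lengths[p]

print(pheromone)  # the short path's pheromone dominates
```

Starting from equal pheromone, the short path is reinforced slightly faster, which attracts a larger share of ants, which reinforces it further, until the colony has effectively converged on the short path.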

REINFORCEMENT LEARNING

Reinforcement learning takes inspiration from psychology, where learning occurs through interactions with the environment. It is a completely different paradigm from supervised and unsupervised learning.

Unlike supervised learning, where the goal is fixed and the algorithm learns from labeled data, or unsupervised learning, where labeled data is unavailable and the algorithm has to identify patterns in the data on its own, reinforcement learning is goal-oriented learning in which the algorithm is not told which actions to perform. Instead, it learns from the consequences of its actions. Each action carries a reward or a penalty: a reward is given for actions that help achieve the ultimate goal, and a penalty otherwise. Through trial and error, the algorithm learns to perform the actions that earn it rewards.

This also involves the concepts of exploration and exploitation. The algorithm can either explore different actions, which may (or may not) result in rewards, or continue to exploit an action that earned a reward in the past. Exploration carries a higher risk of penalties, but offers the chance of discovering better actions that lead to the desired goal. Exploitation keeps the rewards coming, but the best actions may never be found. Hence there is a trade-off between exploration and exploitation that needs to be balanced.
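The standard way to balance this trade-off is an epsilon-greedy strategy: with a small probability explore a random action, otherwise exploit the best-known one. Here is a minimal sketch on a two-action problem; the reward distributions and the value of epsilon are illustrative assumptions:

```python
import random

random.seed(42)
EPSILON, STEPS = 0.1, 5000
true_means = [0.2, 0.8]   # hidden mean reward of each action (unknown to the agent)
estimates = [0.0, 0.0]    # the agent's running estimate of each action's value
counts = [0, 0]

def reward(action):
    """Noisy reward drawn around the action's true mean."""
    return random.gauss(true_means[action], 0.1)

for _ in range(STEPS):
    if random.random() < EPSILON:
        a = random.randrange(2)                          # explore: random action
    else:
        a = max(range(2), key=lambda i: estimates[i])    # exploit: best known action
    r = reward(a)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]       # incremental mean update

print([round(e, 2) for e in estimates])  # estimates approach the true means
```

Because epsilon stays above zero, the agent keeps occasionally sampling the apparently worse action, which is exactly what lets it recover if its early estimates were wrong.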

Reinforcement learning has a variety of applications, including trading and portfolio optimization, computer gaming, robotics, e-commerce recommendation systems, medicine and healthcare, and supply chain optimization.

