AI: Neural Networks
Since the 1927 release of the classic silent film Metropolis, the concept of Artificial Intelligence (AI) has been heavily explored in various forms of media. Until recently, however, limited computational power has kept it squarely in the realm of science fiction. Now, with the massive leaps in computing of the past few decades, AI has finally become a viable tool — and a huge topic of discussion within the tech community. We may not yet be in the age of robots that can reason on their own, but the AI of today can still be used to solve some of the world’s toughest problems.
The definition of AI has shifted over the years as the technology has improved and the idea has grown more complex. In the 1960s, AI was generally defined as the ability to perform a series of operations based on a set of rules. By today’s standards, that better describes simple programming. Modern AI is associated with adaptive behavior rather than the execution of simple, preordained tasks. This adaptive behavior is driven by machine learning, commonly built on artificial neural networks (ANNs). An ANN is exactly what it sounds like — a collection of artificial “neurons” interconnected in a network that resembles an organic brain!
Properties of Biological Neurons
A neuron is a specialized brain cell that receives chemical messages through branch-like extensions called dendrites. These messages are processed in the body, or soma, of the neuron and propagated as electrical currents through a trunk-like extension called an axon. The ends of the axons form synapses with the dendrites of other neurons, which allow messages to jump across into the next cell. The next neurons in the chain receive the message, and start the cycle over again. Many, many neurons connected to one another in this way form a (biological) neural network.
Computational neurons are much less complex, and are often represented as connected nodes in a simple graph. The biological aspect is gone, but the behavior of each neuron is conceptually the same — they receive, process, and spread information.
For example, in the soma of a biological neuron, there is a phenomenon known as integration where the neuron decides if it will pass a message to a successor neuron. This decision depends on whether a given neuron reaches its firing threshold — i.e. the neuron builds up enough electrical charge to send out a signal. This can be modeled computationally by programming each artificial neuron with an activation function. An artificial neuron will only “fire” if the messages it receives satisfy this function. By simulating the biological integration process, we can start to build a network of artificial neurons which selectively pass messages just like the real thing. As artificial neurons are programmed with several other features which mimic biological neurons, their combined capabilities empower the network with the ability to learn!
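To make this concrete, here is a minimal sketch of an artificial neuron in Python. The function names (`activate`, `neuron`), the step-style activation, and the threshold value are purely illustrative assumptions, not part of any particular library:

```python
def activate(weighted_sum, threshold=1.0):
    """Step activation: 'fire' (1) only if the integrated input reaches the threshold."""
    return 1 if weighted_sum >= threshold else 0

def neuron(inputs, weights, threshold=1.0):
    """Integrate incoming messages as a weighted sum, then apply the activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return activate(weighted_sum, threshold)

# Three incoming "messages", each arriving over a connection with its own weight:
print(neuron([1, 1, 0], [0.4, 0.7, 0.9]))  # 1.1 >= 1.0, so the neuron fires: 1
print(neuron([1, 0, 0], [0.4, 0.7, 0.9]))  # 0.4 < 1.0, so it stays quiet: 0
```

Chaining neurons like this one, so that each output becomes an input to the next, is the basic move behind every artificial neural network.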
Organic Learning vs. ANN Learning
Whether you’re practicing free throws on the court or memorizing the bones in the human body, repetition is the key to learning new skills and information. This is because each repetition builds new connections between the dendrites and axon terminals of adjacent neurons. This connection, or synapse, can also be further strengthened or weakened depending on the activity between the two neurons.
Repetition when learning a new task leads to more communication between the neurons, which strengthens the synapse. This strengthening is known as Long-Term Potentiation (LTP). Conversely, a decrease in synaptic activity — known as Long-Term Depression (LTD) — can occur when a skill is not regularly repeated. Forgetful neural networks aren’t all that useful, so this is not often used in artificial systems, but it is a big part of how biological networks work.
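As a rough sketch, LTP and LTD can be caricatured with a Hebbian-style update: co-activity between two neurons strengthens their connection, while inactivity lets it slowly decay. The function below and its learning and decay rates are illustrative assumptions, not a standard algorithm:

```python
def update_weight(weight, pre_active, post_active, lr=0.1, decay=0.02):
    """Strengthen the connection when both neurons fire together (LTP-like);
    otherwise let it weaken slightly (LTD-like)."""
    if pre_active and post_active:
        return weight + lr * (1.0 - weight)  # move toward a ceiling of 1.0
    return weight * (1.0 - decay)            # gradual forgetting

w = 0.3
for _ in range(10):                # ten "repetitions" of the paired activity
    w = update_weight(w, True, True)
print(round(w, 3))                 # repetition has strengthened the synapse
```

Run the loop with inactive neurons instead and the weight drifts back down — the computational analogue of a skill fading without practice.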
To make a neural network learn, we have to model LTP and emulate the strengthening of synaptic interaction. The strength of the interaction between two artificial neurons is represented numerically as the synaptic weight. In place of axon terminals and dendrites, ANNs rely on minimization techniques, such as gradient descent, which adjust the synaptic weights between artificial neurons. This adjustment is how the model learns!
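Here is a hedged sketch of that idea: a single synaptic weight nudged downhill by plain gradient descent on the squared error between a linear neuron’s output and a target. The function name, learning rate, and target values are illustrative:

```python
def train(x, target, weight=0.0, lr=0.1, steps=50):
    """Repeatedly adjust one synaptic weight to minimize squared error."""
    for _ in range(steps):
        output = weight * x              # the neuron's current response
        error = output - target
        gradient = 2 * error * x         # d(error^2)/d(weight)
        weight -= lr * gradient          # step downhill: the "learning"
    return weight

w = train(x=1.0, target=0.8)
print(round(w, 3))  # the weight converges toward the target response, 0.8
```

Real networks do the same thing at scale — the same error signal is propagated backward to update millions of weights at once.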
Artificial neurons allow us to use the good aspects of biological neurons and leave the bad parts behind. Once a neuron can be simulated, our focus shifts to the two things that shape how the network learns. The first of these is the aforementioned activation function. The second — and arguably more important — factor is the network architecture: the way in which the neurons are organized and connected.
As humans, we have the ability to perform complex tasks such as dancing, speaking, and memorizing facts. All of these tasks differ in complexity and thus require different types of networks to make them possible. When dancing, we use a part of the brain known as the cerebellum, whose major role is to help us maintain proper timing and coordination. When trying to memorize or recall facts, on the other hand, we mainly use the hippocampus. Neuroscientists have found that the architecture of neural connections varies widely depending on the purpose of each region of the brain. The neurons in the cerebellar cortex differ in type and connectivity from those in the hippocampus. Each region is optimized for different tasks, and this is reflected in their different compositions.

In a similar way, artificial neurons can be arranged in a wide variety of architectures, in whichever manner is best suited to the particular problem the network is attempting to address. The architecture of an ANN for image recognition, for instance, will be very different from that of an ANN trying to predict stock market crashes. Some examples of common architectures used today are Deep Convolutional Networks (DCNs), Recurrent Neural Networks (RNNs), and Deep Belief Networks (DBNs). These networks differ in the organization of their layers and in how their neurons are connected. By manipulating existing architectures and techniques, researchers are constantly developing new approaches for problems that are too challenging for current methods.
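To show what “architecture” means at the smallest scale, here is a toy feedforward arrangement: two inputs feed a hidden layer of two neurons, which feeds one output neuron, with every neuron connected to all neurons in the next layer. The weights are fixed, made-up numbers; a real network would learn them:

```python
import math

def sigmoid(z):
    """A smooth activation function: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    """Each row of weights feeds one neuron in this layer."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row))) for row in weights]

hidden_weights = [[0.5, -0.6], [0.1, 0.8]]  # 2 inputs -> 2 hidden neurons
output_weights = [[1.2, -0.4]]              # 2 hidden -> 1 output neuron

hidden = layer([1.0, 0.0], hidden_weights)  # first layer of processing
output = layer(hidden, output_weights)      # second layer: one final value
print(output)
```

Swapping how the layers connect — adding loops for sequences, or sliding filters for images — is exactly the kind of architectural change that turns this toy into an RNN or a convolutional network.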
Applications of ANN to Biotechnology
The theory behind neural networks has been around for the better part of a century, but only recent advances in computer hardware, along with a wealth of data to learn from, have made them a practical reality. Thanks to massive, easily accessible data sources such as UniProt and the PDB, the biotech industry has been able to make incredible use of ANNs. Researchers today use them to model proteins, diagnose diseases, and create designer antibodies. And that’s just the tip of the iceberg!
The adaptive, tireless learning of neural networks and AI not only captures our imaginations, but also serves a real purpose in almost every industry. Neural networks may be constrained by the computational limits of today, but as technology improves and more data becomes available, the possibilities are endless!
Looking for more information about Macromoltek, Inc? Visit our website at www.Macromoltek.com
Interested in molecular simulations, biological art, or learning more about molecules? Subscribe to our Twitter and Instagram!