
Introduction

Computers and brains think in completely different ways. The transistors in a computer are wired in simple arrangements known as logic gates, whereas the neurons in our brains are densely interconnected in complex, deep layers (each neuron is connected to 10,000+ neighboring neurons). This structural difference is what makes our brains ‘think’ so very differently. Computers are designed to store and process vast amounts of information by following precise logical commands. Brains, on the other hand, learn slowly, often taking months or years to make sense of a complex idea. But unlike computers, they can put information together in astounding ways: recognizing patterns, forming connections, and seeing things from completely different perspectives. Neural networks are the computer scientist’s attempt at creating computers that are more like brains.

Overview of Biological Neural Networks

The human brain is incredibly complex, and perhaps the most powerful computing machine known. Its workings are often modelled around neurons and networks of neurons, known as biological neural networks. The human brain is estimated to contain 100 billion neurons, all interconnected in complex pathways. Neurons interact and communicate with each other through an interface consisting of axon terminals connected to dendrites across a synapse (a small gap). In simpler terms, a single neuron will pass a message to another neuron across this interface if the sum of the weighted input signals it receives from one or more neurons is great enough to cause the transmission of a message. This is called activation. The processing the brain carries out, and the instructions it sends to various organs, are the result of these networks in action. The brain’s neural networks are constantly changing in several ways, including adjusting the weighting applied between neurons; this happens as a direct result of learning and experience. Naturally, scientists and engineers have tried to replicate this functionality in computers with the help of neural networks and machine learning, as their applications are practically limitless.

A model of neurons firing and communicating with each other.

Artificial Neural Networks

An artificial neural network consists of anywhere from a few hundred to billions of artificial neurons, called units, arranged in a series of layers, each of which is connected to the layers on either side. The design is very much inspired by the biological neural network. Some units are known as input units; they receive information from the outside world that the network will attempt to learn about, recognize, or process. Other units sit on the opposite side of the network and signal how it responds to the information learnt; these are known as output units. Between the layer of input units and the layer of output units are one or several hidden layers, where the magic happens. These units are fully connected to each other, and together they form an artificial neural network: a computer brain of sorts.

The connection between one unit and another is represented by a number called a weight, which can be either positive or negative, depending on whether one unit excites or suppresses the other. The greater the weight, the more influence one unit has on another. This is similar to the way biological neurons trigger one another across synapses. What this means is that given a number, a neuron will perform some sort of calculation (for example, the sigmoid function), and the result of this calculation gets multiplied by a weight as it travels through the network. Below is a diagram of neurons and synapses in the brain compared to artificial neurons.
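To make that concrete, here is a minimal sketch in Python of a single artificial neuron: it multiplies its inputs by their weights, sums them (plus a bias term), and squashes the result with the sigmoid function mentioned above. The function and variable names here are purely illustrative, not taken from the diagram.

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals, plus a bias term
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # The activation function decides how strongly the neuron "fires"
    return sigmoid(total)

# Example: two input signals, with one positive (excitatory)
# and one negative (inhibitory) weight
output = artificial_neuron(inputs=[0.8, 0.2], weights=[0.9, -0.4], bias=0.1)
print(output)  # a value between 0 and 1
```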

(A) Human neuron; (B) artificial neuron or hidden unit; (C) biological synapse; (D) ANN synapses

How do they learn?

Information flows through a neural network in two ways: when it is learning (being trained) and when it is operating (after training). Patterns of information are fed into the network via the input neurons, which trigger one or more layers of hidden neurons, and these in turn trigger the output neurons. This common design is called a feedforward network.

Not all of the ‘neurons’ fire at the same time. Each unit receives information from the units to its left, and those inputs are multiplied by the weights of the connections they travel along. Every neuron (unit) then adds up all the inputs it receives. In the simplest network, if the sum is more than a certain threshold value, the unit fires and triggers the units connected to it on its right.
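As a rough sketch of this forward pass (Python with NumPy; the layer sizes, random weights, and threshold value below are arbitrary choices for illustration), a feedforward network simply repeats “multiply by the weights, sum, check the threshold” from one layer to the next:

```python
import numpy as np

def step(x, threshold=0.5):
    # The simplest activation: fire (1) if the summed input exceeds the threshold
    return (x > threshold).astype(float)

def feed_forward(x, layers):
    # Each layer is a weight matrix; signals flow from left (input) to right (output)
    for W in layers:
        x = step(W @ x)
    return x

# A toy network: 3 input units -> 4 hidden units -> 2 output units
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
print(feed_forward(np.array([1.0, 0.0, 1.0]), layers))
```

In practice a smooth activation such as the sigmoid shown earlier usually replaces the hard threshold, because a smooth function is what makes the training step described next possible.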

For a neural network to learn, there has to be an element of feedback involved. We essentially need to ask it a large number of questions and provide it with the answers; this approach is called supervised learning. With enough question-answer pairs, the values stored at each neuron (unit) and synapse (connection) are slowly adjusted. This is usually done by a process called backpropagation.
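In code, “asking questions and providing answers” just means pairing example inputs with the labels we expect back. A hypothetical training set might look like this (the classic XOR problem, chosen purely for illustration):

```python
import numpy as np

# "Questions": every combination of two binary inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# "Answers": the label the network should learn to reproduce
# (here, the XOR of the two inputs)
y = np.array([[0], [1], [1], [0]], dtype=float)
```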

We use feedback all the time. Imagine you’re walking down a sidewalk and see a lamppost. You have never seen one before, so you walk right into it and hurt yourself. The next time you see a lamppost you step aside a few inches and keep walking. This time your shoulder hits it, and you hurt yourself yet again. The third time you see a lamppost, you move well out of its way to ensure you don’t get injured. Except now you’ve stepped into a pothole, and you have never seen one before. You trip, and the whole process repeats.

This is an oversimplification, but it is effectively what backpropagation does. An artificial neural network is given a multitude of examples and tries to produce the same answer as each example given. When it is wrong, an error is calculated and the values at each neuron and synapse are propagated backwards through the ANN so it can attempt the question again. After giving it enough examples to learn from, you can feed it a question without its answer, and the network will attempt to solve the problem by recognizing patterns and drawing conclusions.
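Below is a rough sketch of that loop for a tiny network with one hidden layer, trained on the XOR pairs from earlier (NumPy; the layer sizes, learning rate, and iteration count are arbitrary assumptions for illustration, not settings from this article). The forward pass computes an answer, the error is measured, and the backward pass propagates that error to nudge every weight and bias a little:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The XOR question-answer pairs from earlier
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5                                        # learning rate

for _ in range(10_000):
    # Forward pass: compute the network's current answer
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # How wrong was it?
    error = y - output

    # Backward pass: push the error back through the network
    # and adjust every weight and bias slightly
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += lr * hidden.T @ d_output
    b2 += lr * d_output.sum(axis=0)
    W1 += lr * X.T @ d_hidden
    b1 += lr * d_hidden.sum(axis=0)

print(np.round(output, 2))  # should end up close to [0, 1, 1, 0]
```

Once trained, answering a new question uses only the forward pass: the adjusted weights encode whatever patterns the network managed to pick up from the examples.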

A simple backpropagation algorithm

In conclusion, neural networks help us cluster and classify. They are used extensively in machine learning and artificial intelligence because they are great at recognizing patterns and then predicting outcomes. Advances in the field have allowed us to use this technology in astonishing ways. In a field that attempts something as profound as modelling the human brain, it is inevitable that no single technique will solve all the challenges. For now, however, neural networks are leading the way in creating an artificially intelligent brain, and you now have a high-level understanding of how they work.

Simple artificial neural network with one hidden layer

If you enjoyed this read, or have any questions, feel free to reach out! Also hit that green heart to recommend it, or share it with your friends!
