Building artificial brains in computers. How can we study the brain, and why is it important?

Karol Chlasta explains computer-based brain simulations, why they are critical for learning more about our brains, and why the 21st century is shaping up to be the century of the brain. The article was prepared as part of interdisciplinary Ph.D. studies in the area of ICT & Psychology, run jointly by the Polish-Japanese Academy of Information Technology and SWPS University in Warsaw, Poland.

PJAIT
crossing domains
9 min read · May 31, 2022


Each of us has a brain, and it is often called the most complex object in the known universe. Physicists often express the complexity of objects as powers of 10. The brain contains roughly 100 billion nerve cells, called neurons; 100 billion is 10 to the 11th power. Each of these cells connects on average to 10,000 other cells, and 10,000 is 10 to the 4th power. Multiplying the two gives about 10 to the 15th power connections, which is one way to express the complexity of our brains. By this measure, we know of no single object in the universe more complex than the human brain.
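That arithmetic is easy to check. Here is a minimal Python snippet using the rough, round numbers quoted above (estimates, not precise measurements):

```
# Back-of-the-envelope estimate of the brain's wiring complexity.
neurons = 10**11                 # ~100 billion nerve cells
connections_per_neuron = 10**4   # ~10,000 connections each

total_connections = neurons * connections_per_neuron
print(f"{total_connections:.0e}")  # prints 1e+15, i.e. 10 to the power of 15
```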

Photo: pixnio.com

The human brain is also the most important organ in our body. It controls all our other organs and our perception of the world, it gives rise to our consciousness and emotions, and it makes us who we are. By some estimates, the brain processes data at a rate of around 4 terabytes per second.

As the brain is so complex and performs so many important functions, we can consider it the most capable computer we have ever known. Depending on the source, it is estimated that the brain uses only 20 to 40 watts, as little energy as a light bulb. This means that our brain allows us to process unimaginable amounts of information very efficiently. As an information-processing system it is so good that we are currently unable to build any computer that could cope with tasks of similar complexity in an analogous manner. And each one of us has such a powerful computing device at our service.

As human beings, we are curious about the world and want to understand how it works. We want to learn how our brain works out of pure curiosity (it is so complex), for medical reasons (to cure diseases), or to build better, perhaps even intelligent, machines. I will explain later why, according to some, the first decades of the 21st century will be the decades of the brain. Unfortunately, we are not always able to study a living brain, and there have been several attempts to overcome this challenge.

In April 2013 President Obama announced the BRAIN initiative — Brain Research through Advancing Innovative Neurotechnologies — committing 100 million dollars to brain research. The BRAIN project came after a similar, larger initiative in Europe — the European Union’s Human Brain Project (HBP), which was given 1.3 billion dollars in funding. The HBP came under some controversy this summer when neuroscientists complained about how the money was being awarded. — Steven Novella, Brain Research in the 21st Century

How can we study the brain?

Nerve cells can be studied using electrodes inserted into the brain. Unfortunately, the insertion of such an electrode destroys whatever it encounters on its way; it can even destroy the very cells we want to examine. As one can imagine, we could not introduce 100 billion electrodes into any brain without killing it. This simple technological constraint forces us to study the brain macroscopically, for example with neuroimaging methods such as magnetic resonance imaging (MRI), or by recording brain activity with electroencephalography (EEG).

However, if we want to study the activity of individual cells, our possibilities are limited technologically, economically, and ethically. Therefore, the only method at our disposal is computer simulation. This is the method I pursue throughout my Ph.D.

How does our brain work?

The brain contains 100 billion nerve cells, called neurons. Neurons are its basic computational units and are organised into columns. In computer science, a computational unit is a unit that can process information: it has inputs and outputs. Each neuron has a cell body; dendrites covered with synapses, which we can think of as its input device; and a long axon (resembling an electrical wire), which we can think of as its output device.
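To make the input-output view concrete, here is a deliberately simplified sketch of a neuron as a computational unit in Python. The weighted inputs and the firing threshold are my illustrative choices; this is a cartoon abstraction, not the biophysical model discussed later:

```
import numpy as np

def toy_neuron(inputs, weights, threshold=1.0):
    """A cartoon neuron: sum the weighted inputs, 'fire' if a threshold is crossed."""
    activation = np.dot(inputs, weights)            # signals arriving via the synapses
    return 1.0 if activation >= threshold else 0.0  # spike sent down the axon

# Example: three incoming signals with different synaptic strengths.
inputs = np.array([0.5, 1.0, 0.2])
weights = np.array([0.4, 0.9, 0.1])
print(toy_neuron(inputs, weights))  # 1.0 -> the neuron fires
```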

Unlike traditional computers, brain circuits do not rely on the conduction of electrons, but on a very clever mechanism for exchanging sodium and potassium ions. The activity of a neuron seems simple: a signal enters the neuron via its synapses, and if the right conditions are met, the neuron sends an amplified or attenuated signal along its axon to the neighbouring nerve cells. Every single neuron connects to around 10,000 other neurons, so one can imagine how extremely complicated the wiring within our brains is. A further complexity lies in the conduction mechanism itself, in which sodium, potassium, and chloride ions travel through cell membranes via a system of ion channels and ion pumps. Figuratively speaking, a signal travels through these networks like a 'Mexican wave' moving around a stadium during a football match.

Photo: Wiktionary

How can neuronal networks be replicated in a computer?

This seems simple: we could first build a model of a single neuron, then connect a few neurons together into a network, and then just increase the size of that network to the size of the entire brain. The challenge that remains is how to build a model of such a neuron. To understand how this was done, we have to go back to the 1950s, when the English scientists Alan Hodgkin and Andrew Huxley (HH) developed a model, later named after them, that described the principles of a biological neuron. They studied the squid's giant axon. This animal was chosen for the very simple reason that its giant axon is extremely large, approximately 1 millimeter in diameter. The technology of the mid-20th century allowed HH to use fine capillary electrodes, small enough to be driven into such a large axon, to measure the electrical changes within it during an action potential, and thus to study real neurons empirically.

Photo: Mind the Graph on https://mindthegraph.com/, used under the CC BY-SA license (https://creativecommons.org/licenses/by-sa/4.0/deed.en), derivative of the original

The HH model is one of the foundations of cybernetics: the science that studies the general principles of how systems work, including control and the associated transfer of information in man and machine. The behaviour of a 25 mm fragment of the squid's giant axon was reproduced by building an appropriate electrical circuit of capacitors, batteries, and several resistors, some of them variable. It was a great achievement, and in 1963 it led to the Nobel Prize in Physiology or Medicine.

It later turned out that the dynamics of squid neurons are similar to the dynamics of neurons in other animals, including humans. Brain research therefore owes a lot to the humble squid.

Why is this discovery so important in the 21st century?

To describe how the electric circuit proposed by HH works, we write down Kirchhoff's current law for it. This law describes the behaviour of currents in a circuit: the currents flowing into any node must balance the currents flowing out. So, to simulate experiments on the squid's giant axon, we can describe the system mathematically with equations based on Kirchhoff's law.
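Applied to the HH membrane circuit, Kirchhoff's current law yields the now-standard current-balance equation (this is the common textbook form and notation, not a quotation from the original 1952 paper):

```
C_m \frac{dV}{dt} = I_{ext}
    - \bar{g}_{Na}\, m^3 h\, (V - E_{Na})
    - \bar{g}_{K}\, n^4\, (V - E_K)
    - \bar{g}_{L}\, (V - E_L)
```

Here V is the membrane potential, C_m the membrane capacitance, the g-bar terms the maximal sodium, potassium, and leak conductances, and the E terms the corresponding reversal potentials. The gating variables m, h, and n each obey a first-order equation of the form dx/dt = α_x(V)(1 - x) - β_x(V)x; it is the α and β rate functions whose coefficients Hodgkin and Huxley fitted from their measurements.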

Hodgkin and Huxley found the coefficients in the equations dictated by Kirchhoff's law by studying the squid's giant axon empirically, and then wrote down differential equations that modelled the currents involved. As a result, they were able to build an electrical machine that behaved identically to a 25 mm fragment of the squid's giant axon.

The behaviour of the two circuits is the same if and only if the coefficients have the fitted values. The degree of complication of these equations was high by mid-1950s standards, as they are nonlinear differential equations; to this day there is no general analytical method for solving such a system exactly.

To solve such equations, researchers had either to use simplified methods, thereby giving up a faithful simulation of reality, or to wait for the next breakthrough in science: no adequate method existed until computers became powerful enough to solve such a set of equations numerically.
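Today this numerical approach fits on a single screen. Below is a minimal Python sketch that integrates the HH equations with the forward Euler method, using the standard textbook parameters and rate functions; the time step, stimulus, and simulation length are my illustrative choices:

```
import numpy as np

# Standard Hodgkin-Huxley parameters (V in mV, time in ms).
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent opening/closing rates of the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_max = 0.01, 50.0  # time step and duration, ms
V = -65.0               # resting membrane potential, mV
# Start each gate at its steady-state value for the resting potential.
m = alpha_m(V) / (alpha_m(V) + beta_m(V))
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
n = alpha_n(V) / (alpha_n(V) + beta_n(V))

trace = []
for step in range(int(t_max / dt)):
    t = step * dt
    I_ext = 10.0 if 5.0 <= t <= 30.0 else 0.0  # injected current, uA/cm^2
    # Ionic currents given by Kirchhoff's current law for the membrane circuit.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    # Forward Euler update of the four coupled differential equations.
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    trace.append(V)

print(f"Peak membrane potential: {max(trace):.1f} mV")  # spikes overshoot to ~+40 mV
```

Plotting the trace with any charting library reproduces the classic action-potential shape; the printed peak of roughly +40 mV is enough to confirm that the simulated neuron fired.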

By being able to compute what comes out of the HH equations, we can draw conclusions about the operation of a system of simulated neurons just as if we were observing them in a laboratory, without hurting or killing any laboratory animals.


Start building your own models, it’s easy

Even though more than half a century has passed since then, we still have nothing better than the Hodgkin-Huxley neuron model. What has changed is that we now have very fast and cheap computers that can simulate a very large number of different neurons in a relatively short time.

My current research focuses on building different models using artificial neural networks to help simulate various elements of the brain, as well as to draw conclusions on how we could build more intelligent machines in the future. This kind of research is usually carried out on large computational clusters, but before buying one, I decided to build and benchmark a small model on a cheap single-board computer, a Raspberry Pi running the GENESIS simulation software.
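If you would like to try a similar benchmark yourself without installing GENESIS, timing a simple NumPy update loop is a reasonable stand-in. The network size and update rule below are arbitrary illustrative choices, not the model from this article:

```
import time
import numpy as np

def run_network(n_neurons=1040, n_steps=5000, seed=0):
    """Crude stand-in workload: dense recurrent updates over n_neurons cells."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(scale=1.0 / np.sqrt(n_neurons), size=(n_neurons, n_neurons))
    state = rng.normal(size=n_neurons)
    for _ in range(n_steps):
        state = np.tanh(weights @ state)  # one synchronous network update
    return state

start = time.perf_counter()
run_network()
print(f"Elapsed: {time.perf_counter() - start:.2f} s")
```

Running the same script on a Raspberry Pi and on a larger machine gives a first feel for how much, or how little, the extra hardware buys you.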

I simulated a few cells. The figure below illustrates the changes in a neuron's membrane potential (in millivolts) obtained in the simulation (so without hurting any mice, dogs, or humans).

As a benchmark, I also built a simple model of the visual system, based only on a retina and one neuronal column, using 1,040 HH neurons. I presented patterns of the digits 1 to 9 and the letters P, J, A, T, and K to this model 'artificial eye'.
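The article does not describe the exact input encoding, but a simple way to present such patterns to a model retina is to map each pixel of a small binary bitmap onto the input current of one receptor cell. The bitmap, its size, and the current scaling below are my illustrative assumptions:

```
import numpy as np

# A 5x5 binary bitmap of the letter 'T'.
letter_T = np.array([
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

# Lit pixels inject a depolarising current into their receptor cell; dark pixels inject none.
I_max = 10.0  # uA/cm^2, the same order as the stimulus used earlier
input_currents = letter_T.flatten() * I_max
print(input_currents.reshape(5, 5))
```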

I ran this model in the background on my single-board computer and on a large 1,000-core cluster.

As we can see, the average execution time for my model was only about 25% shorter on the large cluster when run in the background, and 39% shorter when run in the foreground.

My conclusion is that we do not always need supercomputers to start learning about brain simulations; for smaller models they are not even much faster. The good news is that, today, anyone can afford the computing power to start simulating neural circuits from the brain and modelling how they connect. A small network of neurons can be simulated even on a Raspberry Pi, and models built of thousands, or even tens of thousands, of artificial neurons can be simulated in "a reasonable time".

If you are interested in building some new, maybe more complex models using neural networks please do not hesitate to contact me.

About the Author
Karol Chlasta
is a Ph.D. candidate in Computer Science at ICT & Psychology, an interdisciplinary Ph.D. programme run jointly by the Polish-Japanese Academy of Information Technology and SWPS University of Social Sciences and Humanities in Warsaw, Poland. He graduated from Cracow University of Economics with an M.Sc. in Economic Computer Science (2008) and completed a Postgraduate Diploma in Business Analytics (2015) at the Institute of Computer Science at the Faculty of Electronics and Information Technology, Warsaw University of Technology. Outside of academia, Karol is a certified IT specialist, consultant, and trainer (HCAI: Huawei Certified Academy Instructor), as well as an IT manager. His research focuses on Neuroinformatics, Artificial Intelligence in clinical applications, and Social Informatics. More information at http://karol.chlasta.pl/
