The future of perception: brain-computer interfaces — part 1

Philipp Markolin
Advances in biological science
15 min read · Sep 23, 2016

Image source and excellent neuro-tech blog: Convergent science network.

Part 1: The science behind the human brain

If you are ever interested in understanding our world, sooner or later you will arrive at the brain; it is the host and manufacturer of our thoughts and feelings, it shapes our actions and decisions, produces our consciousness, and creates what we call “us”: our personality and individuality. It is also the only structure in the known universe potentially able to understand itself.

To gain insights into the inner workings and concepts of the human brain, we will have to tackle the problem from several fronts:

  • Anatomy
  • Complexity
  • Computational power

So far, the brain has not been scientifically solved. However, scientists have been working hard to accumulate data, one puzzle piece at a time, in the hope that all of those pieces will one day come together. What they have learned so far has already improved countless lives, either directly through the development of treatment strategies and drugs for patients, or indirectly by creating smarter computers based on operating principles derived from our brain. Let’s take a look!

Anatomy of the human brain

As an old rule of thumb, a bigger brain was taken to mean a more intelligent animal. Contrary to public opinion, humans do not have the biggest brain in nature; whales, elephants and dolphins all have bigger brains (in both volume and mass) than humans. To account for these obvious size differences, people started comparing relative brain sizes, or brain-to-body-weight ratios, by which humans far outrank the animals mentioned above. However, while humans are almost on top when it comes to brain-to-body-weight ratio, we still lose out to smaller animals like the common shrew or many birds. So clearly, brain size or ratio alone is insufficient to explain intelligence; one also has to account for species differences.

Among primate species, we have by far the biggest brains. That has not always been the case: historically, we, Homo sapiens, still lose out against our ancient companions, Homo neanderthalensis. Their brains were around 10% bigger on average, yet it was us, the smaller-brained fellow humans, who ultimately outcompeted them and drove them to extinction. What this means is that the biggest brain did not always win, at least not the evolutionary battle.

Quite astonishingly and despite all the caveats mentioned, many scientific studies find that brain size does indeed correlate with intelligence.

Why Homo sapiens’ slightly smaller brains won the evolutionary battle against the Neanderthals is mostly speculative; some researchers claim that Neanderthals were not actually smarter abstract thinkers, but needed bigger brains because they were quite athletic and had a wider field of vision (their eyes could take in more than ours) to process. Furthermore, they needed more brain power to keep enormous mental maps of their hunting grounds and terrain knowledge stored in memory.

However, once the agricultural revolution hit, modern humans succeeded by specializing into distinct roles rather than being lone do-it-alls, giving our societies reproductive advantages. So apparently we won against the Neanderthals because we relied on the knowledge and brain power of others, not solely on our own. Another speculation builds on the fact that intelligence also correlates with nutrition; maybe the last Ice Age changed food availability and diet for the Neanderthals, prompting their slow demise. No matter the reasons, history was written by the winner, Homo sapiens. But do not feel too bad for the Neanderthals; there was significant mingling between the hominid species. In fact, an estimated 1–4% of our DNA is still of Neanderthal origin.

Back to our brain’s anatomy. Another way to measure intelligence is the number of neurons, an estimated 86 billion in humans, connecting to each other via an estimated total of 100 trillion (10¹⁴) synapses (neuron-to-neuron wires). Whether synaptic density, together with brain size and the sheer number of neurons, is ultimately the best anatomical indicator of intelligence remains to be investigated.
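To get a feel for these numbers, here is a quick back-of-the-envelope sketch in Python, using only the rough estimates quoted above:

```python
# Scale of the human brain, from the estimates quoted above.
neurons = 86e9    # ~86 billion neurons
synapses = 1e14   # ~100 trillion synapses

print(f"~{synapses / neurons:,.0f} synapses per neuron on average")
# -> ~1,163, i.e. each neuron wires to over a thousand others
```

In other words, the challenge lies less in the number of parts than in how densely they are interconnected.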

So far, we have not found any other species that combines high brain-to-body mass ratio, high absolute number of neurons and high synaptic density as masterfully as humans do. Maybe this combination was the “jackpot” for intelligence.

However, keep in mind that we still lack the power to process and understand the brain at this synaptic level. There might be even more to intelligence than just synaptic density in huge brains with lots of neurons.

In any case, what scientists started doing decades ago was mapping brain regions, first by dissecting the brains of deceased humans, and later with the support of ever-improving imaging techniques like CAT scans and fMRI. What they found is that our brain is highly specialized and that the cerebral cortex is disproportionately developed compared to the brains of other species. Furthermore, we were able to associate brain regions with functions, enabling unique insights into where processes like speech, vision, hearing, pain or memory are located.

Ever-improving brain mapping efforts show that the cerebral cortex is highly specialized in humans. Image source

As of today, we have mapped a great deal of the brain, but only at a superficial level. How superficial?

Imagine a bird’s-eye perspective: you sit in a plane and look down upon a big city. You might be able to differentiate between the harbour and downtown, maybe even tell which regions hold industrial complexes and where people go for entertainment. If you really focused, you might even make out one or another great building. But there is no chance that you could tell what any one human is doing, let alone keep track of all of them at the same time. This is roughly where we stand now in anatomical brain research.

We might develop better “telescopes” to peer deeper into our brain, but we are still far away from being able to simultaneously look everywhere in great detail.

Which brings us to the next chapter:

Complexity: The human brain is the most complex structure in the universe

The neuroscientist and Nobel Prize winner Gerald Edelman described the human brain as the most physically complex object in the known universe.

What did he mean by complex? Complexity describes the behaviour of a system or model whose components interact in multiple ways and follow local rules, meaning there is no higher-level instruction that defines the various possible interactions.

The internet is complex. So is an ant colony. Or a cell. Even the freaking weather.

The interesting features of any complex system are its so-called emergent properties: abilities or entities born out of the interactions of smaller components, amounting to some quality greater than the sum of its pieces. For example:

Imagine nuts and bolts coming together in a specific way to build a clock. This clock can do something that none of its parts alone will ever be able to do: measure time. Thus, in this example, the ability to measure time is an emergent property of coordinated and intricate interactions between small metal pieces and electrical current.

This is a quite theoretical description of something as easy to understand as a clock, but these properties of complexity can be observed everywhere in our universe. Life is an emergent property of chemistry. The laws of thermodynamics are emergent properties of particle physics. Weather is an emergent property of different temperatures and water molecules. Ant colonies and beehives show emergent complex behavior even though every single member is only an autonomous unit that reacts solely to its local environment and genetically encoded rules. The price movements of the stock market and the power-law distribution of links connecting the internet are all emergent properties that no single agent controls and no central planning caused.

The unintuitive part about emergent properties in nature is that they just seem to happen, many little chaotic pieces coming together by chance to build something bigger.

In that sense, life is equivalent to a magically self-assembling clock.

What does this have to do with our brain?

Given the propensity of single units in vast numbers to create emergent properties, as well as the fact that our brain consists of billions of neurons and glial cells (which themselves consist of trillions of chaotic molecules), what we define as thinking or cognition or consciousness is very likely an emergent property of this macro-system of cells we call our brain. If life could be created out of inanimate chemicals, is it really unimaginable that cells could create consciousness?

Gerald Edelman defined complexity as a highly improbable arrangement of enormous numbers of diverse molecules and compounds, interacting in intricate concatenations of systems of systems of systems of systems. Biological things are generally more complex than non-biological things. Brains are generally more complex than other biological things. Human brains are more complex than any other animal brain.

For us scientists, this complexity is a major hurdle to understanding how our brain does all the amazing things we know it can do. Learning. Memory. Calculation. Creativity. Feelings. Sensory interpretation. Consciousness.

Have you ever wondered how it is possible that your eyes can show you all the beauty of this world, when everything your retinal cells ever receive are some photons of different wavelengths?

These in turn trigger only electro-chemical signaling events that run through your “wires” until the brain reads them and produces the beautiful pictures we see in our minds.

No matter which sense (touch, smell, taste, sight or hearing) gets activated, what ends up reaching our brain via the central nervous system are only electro-chemical impulses for our brain to interpret. (Keep this in mind, as it is a crucial fact for brain-interface technology!)

Our brain has to do all the work! It needs to interpret electro-chemical signals from all these different sources and accurately produce sound, vision, smell and so on for our mind to process. Memory and speech, too, have distinct electro-chemical activation patterns that need to be created and recognized by our brain. Finally, the “wiring” is not fixed at all. We call this synaptic plasticity: a technical term for the way neurons build and dismantle synaptic connections (= wires) at a rapid pace and in response to activity. Neuroscientists have a saying:

“What fires together, wires together.”

This makes the brain’s complexity even harder to understand: its wiring changes constantly over time and cannot be separated from its environment.

The human brain is the object in our universe with the highest amount of complexity for its size and mass, truly biology’s pride of creation.

So will we ever be able to understand the brain’s complexity?

Here, opinions vary widely. The most optimistic and progressive thinkers believe that we will have functionally “solved” the brain in less than 20 years, considering huge advances in imaging technologies as well as in the computational power to simulate neurons.

Most scientists are more conservative, stating that we might understand most parts of the brain eventually, but that total understanding will remain elusive for much longer. Brain research has certainly seen huge financial commitments from governments in recent years. The EU’s colossal collaborative Human Brain Project, Obama’s BRAIN Initiative, and the announcement of the China Brain Project all speak clearly to the need and will to do more brain research.

The Human Brain Project tries to digitally map and simulate an entire human brain.

However, some people like Microsoft co-founder Paul Allen claim that the brain’s “complexity problem” is widely underestimated by many of today’s futurist thinkers, who predict complete understanding of our brain within the next 2–3 decades. He argues that every time we look deeper, the brain turns out to be more complex than expected, slowing understanding down more and more.

If history is an indicator, he might have a solid point. The more we have learned about the brain, the harder its intricate complexity has made prediction. We have no complete understanding of the “ground level” of regulation. Even a single neuron is beyond our full understanding; we can neither predict, follow nor even observe every single chemical unit in just one cell. And don’t get me started on the quantum mechanics describing movement at sub-molecular scales.

If understanding the brain required sub-molecular detail, it would be forever out of our reach.

However, we scientists are not known to give up easily just because something is complex or complicated. And there is still a different way to think quantitatively about the brain:

Computational power: The brain as a biological machine

A cell is a biological machine. The brain is a conglomerate of a few different types of biological machines (namely neurons and glia cells) in extremely large quantities. Neurons characteristically either fire or do not fire electro-chemical signals: a binary output like 0’s and 1’s. Not unlike a computer.

But what about complexity?

Large assemblies of similar or identical units can be described statistically, using mathematical models. Or physical laws.

If we had to understand every single molecule’s movement in a gas, or electrons in a current, then there would be no engineering, no rocket science, no chemistry and no technology in general.

Statistical concepts like the “ideal gas” allow scientists and engineers to approximate the behavior of huge collectives, thus being able to predict and use the behavior patterns to design those neat devices that improve our lives.
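As a concrete illustration of that idea, here is a minimal sketch of the ideal gas law in Python: one equation, PV = nRT, captures the bulk behavior of some 6 × 10²³ molecules without tracking a single one of them:

```python
# Ideal gas law: PV = nRT describes ~6e23 molecules statistically.
R = 8.314    # universal gas constant, J/(mol*K)
n = 1.0      # one mole of gas (~6.02e23 molecules)
T = 273.15   # temperature in kelvin (0 degrees Celsius)
V = 0.0224   # volume in cubic meters (22.4 liters, the molar volume)

P = n * R * T / V                       # pressure in pascals
print(f"Pressure: {P / 1000:.1f} kPa")  # ~101.4 kPa, about 1 atmosphere
```

The hope is that neurons in bulk might admit a similarly compact statistical description.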

Many scientists believe that one does not necessarily need to understand the chaotic subunits of the brain to be able to derive certain laws of how the brain conducts its tasks.

After all, we were able to derive the laws of aerodynamics and build airplanes in 1903, ten years before Niels Bohr described his model of the atom. Air consists mostly of nitrogen and oxygen molecules, and aerodynamics is all about how they compress and behave in bulk, not about individual molecules’ motions.

Another important observation comes from theoreticians:

The 2.9 billion base pairs of the haploid human genome correspond to a maximum of about 725 megabytes of data, since every base pair can be coded by 2 bits. Since individual genomes vary by less than 1% from each other, they can be losslessly compressed to roughly 4 megabytes. — Wikipedia

But our brain has over 86 billion (~10¹¹) neurons connected to each other via roughly 100 trillion (10¹⁴) synapses, each of which can give a binary output. So even under the most conservative estimates, we would need more than 10 terabytes of data in our genetic code to describe explicitly what we observe in our brain. We simply lack this amount of coding information for building individual neurons; thus neurons and synapses have to act according to repetitive rules that can be encoded in our genome with far less information.
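The arithmetic behind that claim fits in a few lines; a sketch using only the figures quoted above:

```python
# Information content of the genome vs. an explicit brain wiring diagram.
genome_bp = 2.9e9                      # base pairs, haploid human genome
genome_mb = genome_bp * 2 / 8 / 1e6    # 2 bits per base pair, 8 bits/byte
print(f"Genome: ~{genome_mb:.0f} MB")  # -> ~725 MB

synapses = 1e14                        # ~100 trillion synapses
wiring_tb = synapses / 8 / 1e12        # even at only 1 bit per synapse
print(f"Wiring diagram: ~{wiring_tb:.1f} TB")  # -> ~12.5 TB
# The genome is ~4 orders of magnitude too small to store the wiring
# explicitly, so it must encode compact growth rules instead.
```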

Furthermore, today we have plenty of experimental evidence that there are laws which govern how your brain works.

For example, just by mimicking certain anatomical features and principles of our neurons, computer scientists recently started the second artificial intelligence revolution. While the concepts of how neurons work have been around for 50 years, we long lacked the technology (computational processing power) to digitally recreate virtual neurons in networks.

Computer scientists were inspired by biological neurons to build artificial digital neurons that perform computations. Image Source
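To make the analogy concrete, here is a minimal sketch of a single artificial neuron of the kind such networks stack by the thousands (the inputs, weights and bias are invented for illustration):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs squashed through a sigmoid: a crude
    imitation of a biological neuron's fire/don't-fire decision."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # output between 0 and 1

# Three incoming signals with different "synaptic strengths" (made up).
output = artificial_neuron(inputs=[0.9, 0.1, 0.4],
                           weights=[1.5, -2.0, 0.3],
                           bias=-0.5)
print(f"Firing strength: {output:.2f}")  # -> 0.68
```

Real networks chain thousands of such units in layers and learn the weights from data, but the basic unit is no more complicated than this.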

The successes of this “imitation” approach have been stunning. Deep learning and convolutional neural networks are taking over tasks that we thought impossible for a machine to fulfill. While it was unthinkable a generation ago to ever mistake a computer-generated human face, poem or artwork for a real human’s work, the line has become very blurry. We now have computers understanding human speech, reading human handwriting, and beating us at our own games like Jeopardy and Go. Even things like virtual reality or augmented reality (Pokémon Go) are gradually losing their sci-fi touch as we better understand our neurobiology and constantly increase our computational capabilities.

The successes of artificial intelligence have been so swift and striking that they beg a question that boggles computer scientists and neuroscientists alike:

“Why the hell do brain inspired computer algorithms perform so well on human tasks?”

The only probable answers boil down to this: either our brain behaves like a general-purpose computer, or the brain actually uses algorithms that can be put into a machine.

Computer scientist, futurist and author Ray Kurzweil argues in his book “How to Create a Mind” that our brain is a general-purpose interpretation machine with unique pattern recognition capabilities. While Kurzweil’s statements about the future of technology are controversially received, scientists have repeatedly demonstrated that pattern recognition is a powerful tool used by our brain to fulfill tasks from vision and hearing to memory and cognition.

Remember how we mentioned that only electro-chemical pulses (scientifically called “action potentials”) reach our brain when our retinal cells receive photons of light?

It turns out that when it comes to our senses, our brain simply learned to interpret the patterns of electro-chemical signals (= action potentials, binary 0’s and 1’s) that reach it via the central nervous system, similar to how a computer stores digital images as 0’s and 1’s and recreates pictures for us by reading that binary code. This is how we recognize faces or places, and remember smells, colors or tastes.
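As a loose analogy (the bit pattern below is invented for illustration), the meaning lives entirely in the pattern, not in any single 0 or 1:

```python
# 25 bits on their own mean nothing...
bits = "0010001010100011111110001"

# ...but read as a 5x5 grid of "active"/"silent" signals, a shape emerges.
for row in range(5):
    line = bits[row * 5:(row + 1) * 5]
    print("".join("#" if b == "1" else "." for b in line))
# Prints the letter "A": the information is in the pattern itself.
```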

Also similar to computers, our brain can be subjected to “software bugs”.

For example, if the 0’s and 1’s (action potentials) reach the wrong neurons, either through faulty wiring (synesthesia) or through a neurosurgeon placing interfering electrodes, people suddenly start seeing a specific color when they hear a specific sound, or feel touched when confronted with a word.

Electrical brain stimulation is a funny and mysterious act in itself; from brain surgeons reporting patients laughing when stimulated in the lower frontal lobes, to the much-publicized and controversial “god helmet” neurostimulator, it seems unequivocally clear that electro-chemical signals (= action potentials) are the information currency of our brain.

If we can accept that either computers are somewhat brainlike, or brains have things in common with computers, one can ask some fascinating questions that belong more to the realm of physics than biology:

  • How many floating-point operations per second (FLOPS) can our brain perform?

Five credible estimates of brain performance in terms of FLOPS that we are aware of are spread across the range from 3 × 10¹³ to 10²⁵. The median estimate is 10¹⁸. — AI Impacts

  • How much energy does the brain need to perform calculations?

Compared to computers, very little. The comparison between analog brains and digital computer simulations of brains is not straightforward, since one consumes biological energy units (ATP) while the other draws watts. In general, though, computers need to burn enormous amounts of energy to run, for example, pattern recognition software, while humans do the same thing better while burning only a few calories. Researchers are still working out why.

  • Can we build computers that match the brain in power?

So far, researchers have not been able to simulate the brain’s activity in real time. It took 40 minutes with the combined muscle of 82,944 processors in the computer “K” to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory as each synapse was modeled individually. — ExtremeTech, in reference to Japan’s supercomputer named “K”
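A quick sanity check of what those quoted numbers imply; a sketch where the figures come from the quotes above, except the firing rate, which is a rough assumption of mine:

```python
# How far supercomputer "K" was from real-time brain simulation.
sim_wall_time_s = 40 * 60   # 40 minutes of compute time...
brain_time_s = 1            # ...for 1 second of brain activity
print(f"Slowdown: {sim_wall_time_s / brain_time_s:,.0f}x")  # -> 2,400x

# Naive throughput estimate: every synapse active at a typical rate.
synapses = 1e14             # ~100 trillion synapses
firing_rate_hz = 100        # assumed upper-end neuronal firing rate
print(f"~{synapses * firing_rate_hz:.0e} synaptic events/s")
# -> ~1e+16, comfortably inside the 3e13 to 1e25 range quoted above
```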

Physics alone cannot explain the brain. However, physics helps a lot when trying to understand the brain’s operating principles.

While some biophysical questions can be answered or approximated via back-of-the-envelope calculations, far from everything that happens in the brain can be reduced to physics or computation.

However, taking a strongly physical view of the brain has inspired engineers to mimic nervous systems in designing artificial cochleas, retinal implants, and brain–computer interfaces (BCIs) for communication, improving patients’ quality of life.

Today, we are at a point where we understand just enough about the brain to be able to repair some gruesome biological defects with our technology. In the near future, we might undertake the task of not just repairing, but improving biology itself. One single discovery at a time…

This ends the first part of this article series on the future of perception.

What exactly is happening on the frontline of brain–computer interface (BCI) research will be elaborated in the second part of this article series.

Summary part 1:

The human brain is special: it is the densest structure of complexity in the known universe. So far, we have gained great insights into the organization of the brain on a macro scale, which has allowed us to map brain regions to functions like motor control, visual sensing, memory, cognition and feelings. However, we are limited in how deep we can look while still understanding what processes drive higher cognitive function, or what contribution individual neurons or cortical columns make to these processes. Furthermore, the sheer number of neurons and synapses confronts us with a complexity we cannot hope to understand except in a statistical and rule-based manner. Untangling the operational rules of the brain has been hard and depends on technological progress ranging from imaging and neurosurgery to greater computational power for simulation and new software and algorithm developments. The first successes in understanding the brain’s inner workings have already revolutionized whole fields, from psychology to medicine to computer science. What near-future developments will soon become our reality will be covered in the next part of this article series.

This story is part of Advances in biological sciences, a science communication platform that aims to explain ground-breaking science in the fields of biology, medicine, biotechnology, neuroscience and genetics to literally everyone. Scientific understanding has too many barriers; let’s break them down!

You can also help us to improve by giving feedback. Your voice matters.
