The future of perception: brain-computer interfaces — part 2

Philipp Markolin
Advances in biological science
Mar 19, 2017

From neuroprosthetics to additional senses and the future of humanity

Image source and excellent neuro-tech blog: Convergent science network.

Part 2: From neuroprosthetics to additional senses and the future of humanity

Welcome to the final part of this article series. Here, we will take a deeper, more technical look at where science stands regarding brain-computer interfaces. If you want to know what this article series is all about, check out:

“The future of perception: brain-computer interfaces — introduction”

When science fiction becomes scientific reality

“The future of perception: brain-computer interfaces — part 1”

The science behind the human brain

Today, we are at a point where we understand just enough about the brain to be able to repair some gruesome biological defects with our technology. In the near future, we might undertake the task of not just repairing, but improving biology itself.

The last five years have seen incredible advances in brain-related technologies. Two main factors can be named as the driving forces behind this ever-increasing repertoire:

First, the doubling of computational power roughly every two years, also known as Moore’s law, has brought computer hardware and calculating power closer than ever in history to what our brains can do. Yet calculating power alone is necessary but not sufficient for intelligence.

Second, the A.I. revolution has been driven by the development and discovery of new paradigms in software engineering, namely various forms of machine learning and pattern-recognition algorithms.

Both forces combined have already been changing the world left and right, so it is fitting that they finally come home to revolutionize brain research, the biggest frontier of 21st-century biological research.

In a nutshell: A brain-computer interface is any technology that allows humans to communicate with, control, or otherwise interact with a computer or electronic device via thought.

In the first part of this series, we covered why our brain is so special and how it can be understood scientifically. Here, we will take a look at what has already been done and will inevitably hit the market, before we dive into what these technologies might mean for our future. I have grouped the technologies according to the principles of how our brain works that they exploit or can make use of.

Principle 1: Pattern Recognition

If the brain uses pattern recognition algorithms, computers can learn to read minds and act upon thoughts

Spinal cord injury (SCI) is a widespread, traumatic experience impacting the lives and well-being of millions of people around the world. SCI is the main cause of tetraplegia, which is defined as partial or total loss of function of all four limbs, meaning the arms and the legs. Beyond the harsh reality tetraplegic patients have to master on a daily basis, the emotional toll and enormous health-care costs place an additional burden on their relatives. For millennia, the answer to this great tragedy was either death or lifelong care for conscious minds imprisoned in mostly lifeless bodies.

Only the last few decades have seen some progress in that regard, a way for human ingenuity to defy the odds. Who better to exemplify this defiance than Stephen Hawking, the genius theoretical physicist?

Hawking has a rare early-onset, slow-progressing form of amyotrophic lateral sclerosis (ALS) that has gradually paralyzed him over the decades. He now communicates by twitching a single cheek muscle, detected by a sensor and fed into a speech-generating device. While technically not a brain-computer interface, his case shows that even a very limited ability to generate output commands (in his case, the twitching of a single muscle) coupled with a computer interface that uses pattern recognition allows him to communicate complex ideas. At any other time in the history of mankind, this would have been unthinkable; today, human ingenuity has reached a point where we can give a voice to one of the brightest minds in history, despite his disability.
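To make the principle a bit more concrete, here is a minimal Python sketch of single-switch text entry via row-column scanning, the general class of interface such systems belong to. This is purely illustrative and not the software Hawking actually uses; the letter layout and the simulated “twitch” sequence are invented.

```python
# Minimal sketch of single-switch text entry via row-column scanning.
# Not the actual software Hawking uses; the layout and the simulated
# "twitch" sequence below are invented for illustration.

ROWS = ["etaoin", "shrdlu", "cmfwyp", "bgkqjx", "vz.,?!"]  # frequency-ordered rows

def scan_once(presses):
    """Cycle over rows, then letters within the chosen row.
    `presses` yields True when the single binary input (e.g. a cheek
    twitch) fires during the current highlight step, else False."""
    for row in ROWS:                      # step 1: highlight rows one by one
        if next(presses):                 # twitch -> select this row
            for letter in row:            # step 2: highlight its letters one by one
                if next(presses):         # twitch -> select this letter
                    return letter
    return None                           # no selection during this pass

# Simulated twitch pattern: skip row 0, pick row 1 ("shrdlu"),
# skip "s", pick "h".
simulated_presses = iter([False, True, False, True])
print(scan_once(simulated_presses))       # -> 'h'
```

Real systems add word prediction on top of this, so a handful of selections can produce whole words or sentences.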

Why is this important?

Stephen Hawking’s story shows that elaborate computer algorithms can learn to execute a complex task like speech by analyzing nothing more than binary muscle twitches. Recently, a study published in the scientific journal eLife expanded on this idea. The researchers showed that brain-computer interfaces (BCIs) have the potential to restore communication for people with paralysis and anarthria by translating neural activity into control signals for assistive communication devices.

The average copy typing rates demonstrated in this study were 31.6 ccpm (6.3 words per minute; wpm), 39.2 ccpm (7.8 wpm), and 13.5 ccpm (2.7 wpm) for the three patients, respectively. — Pandarinath et al., eLife, 2017

While these results are encouraging, enabling severely paralyzed patients to communicate by letting computers learn to interpret their neural activity, they are still far from perfect.
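To give a feel for the decoding step behind such typing rates, here is a hedged, heavily simplified sketch: a linear decoder that maps a vector of neural firing rates to a 2D cursor velocity, fit by least squares on simulated data. The study itself used more sophisticated decoders, and nothing below comes from real recordings.

```python
# Toy sketch of intracortical cursor decoding: fit a linear map from
# simulated firing rates to intended 2D cursor velocity. Channel count,
# noise level and tuning are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 96, 2000                      # e.g. a 96-electrode array
true_tuning = rng.normal(size=(n_channels, 2))        # each channel "prefers" a direction

# Simulated training data: intended velocities and the noisy rates they evoke.
velocity = rng.normal(size=(n_samples, 2))
rates = velocity @ true_tuning.T + 0.5 * rng.normal(size=(n_samples, n_channels))

# Fit decoder weights W so that rates @ W approximates velocity (least squares).
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new window of activity into a cursor command.
intended = rng.normal(size=(1, 2))
new_rates = intended @ true_tuning.T + 0.5 * rng.normal(size=(1, n_channels))
print(intended, new_rates @ W)                        # decoded velocity tracks the intent
```

The decoded cursor command is then used to point at letters on an on-screen keyboard, which is where the characters-per-minute figures come from.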

As always, experimental research in labs is a step ahead of medical implementation, and BCIs in the lab have already advanced to a jaw-dropping maturity.

Take, for example, research from the Brain Mind Institute of Switzerland’s renowned EPFL. Specializing in spinal cord injury repair, a group of scientists, doctors and engineers from different fields developed a wireless brain-to-spine interface able to bypass a severed spinal cord (a spinal lesion).

Rhesus monkeys were implanted with an intracortical microelectrode array in the leg area of the motor cortex and with a spinal cord stimulation system composed of a spatially selective epidural implant and a pulse generator with real-time triggering capabilities. We designed and implemented wireless control systems that linked online neural decoding of extension and flexion motor states with stimulation protocols promoting these movements.

Wireless transmission of motor states, decoded from a brain implant, to a pulse generator implanted in the lower spine allows the rhesus monkey to control its limb after spinal cord injury.
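The control loop behind this result can be sketched in a few lines: decode a flexion or extension motor state from motor-cortex firing rates, then trigger the matching spinal stimulation protocol. The sketch below is a hedged toy version under invented assumptions (placeholder protocol names, a 32-channel recording, pre-trained decoder weights); the real system does this wirelessly and in real time.

```python
# Toy sketch of a brain-to-spine control loop: classify leg-area motor
# cortex activity into flexion vs. extension and trigger the matching
# stimulation protocol. Names, sizes and weights are placeholders.
import numpy as np

FLEXION_PROTOCOL = "stimulate_flexion_hotspot"        # hypothetical protocol names
EXTENSION_PROTOCOL = "stimulate_extension_hotspot"

def decode_motor_state(firing_rates, weights, bias=0.0):
    """Linear classifier on a vector of per-electrode firing rates."""
    score = float(np.dot(weights, firing_rates) + bias)
    return "flexion" if score > 0 else "extension"

def control_step(firing_rates, weights, pulse_generator):
    """One cycle of the loop: decode the motor state, command the stimulator."""
    state = decode_motor_state(firing_rates, weights)
    protocol = FLEXION_PROTOCOL if state == "flexion" else EXTENSION_PROTOCOL
    pulse_generator(protocol)             # would drive the implanted pulse generator
    return state

# Simulated example with random "trained" weights and one recording window.
rng = np.random.default_rng(1)
weights = rng.normal(size=32)
rates = rng.normal(size=32)
print(control_step(rates, weights, pulse_generator=print))
```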

In summary, without pattern recognition software reading brain activity, severely paralyzed patients would be unable to communicate; in fact, reading brain activity is the only invention capable of easing the incomprehensible fate of locked-in syndrome patients.

Without pattern recognition algorithms, millions of people suffering from spinal cord injury today would never feel or be able to use their limbs again.

Without pattern-recognizing speech generators, the voices of geniuses like Stephen Hawking would be destined to be silenced forever.

Yet with computer systems able to recognize patterns paired with some human ingenuity, there is hope for all of them.

Which raises one question: If elaborate computer algorithms can learn to execute a complex task just by analyzing the brain’s electrochemical activity, can we reverse the roles?

Principle 2: Generality

If the brain behaves like a general-purpose computer, it should be able to learn from any signal input, natural or artificial

A cell is a biological machine. The brain is a conglomerate of a few different types of biological machines (called neurons and glial cells) in extremely large quantities. Neurons characteristically either fire or do not fire electrochemical signals; a binary output, like 0s and 1s. Not unlike a computer.

What is the essence of the sensory inputs reaching our brain? When we look at the world around us, how does our brain understand what we see? How does vision work?

Functionally speaking, our eye’s shape and makeup serve as a lens to focus photons onto the cell layers of the retina. Within the retina’s photoreceptor cells, there is a specialized light-sensitive protein called rhodopsin, a chromophore-containing receptor protein that triggers a signaling cascade in response to being hit by light of a certain wavelength. This signaling cascade leads to the closure of cyclic GMP-gated cation channels and subsequent hyperpolarization of the photoreceptor cell. What this means is that a light signal is transformed into a biological signal, and this biological signal produces an electrical signal by modifying ion fluxes in retinal cells. From here on, all the information transported to our brain is an electrochemical signal.

The same is true for all our other senses:

  • Audition, a form of mechanotransduction, uses specialized auditory cells in the ear (the cochlea) to translate a mechanical signal (the pressure waves we call sound) into electrochemical signals for the brain to interpret.
  • Olfaction and gustation are chemical signals (recognized by specialized ligand-receptor interactions) that are translated by their respective cells into electrochemical signals for the brain.
  • Somatosensation, commonly known as touch, relies on nerve cells that translate many types of signals (mechanical, temperature/heat, chemical, stress/injury) into electrochemical signals for the brain.

Do you see a common pattern here? No matter the source of the external sensory information, for our brain it is always electrochemical signals.

If our brain only needs electrical signals for sensing, then it must be possible to induce sensation with artificial electrical stimulation.

Furthermore, it follows that our brain should be able to learn from artificial electrical signals the same way it does from biological electrical signals, since they are qualitatively similar.
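As a toy illustration of this generality (and not any particular device’s encoding scheme), the sketch below turns an arbitrary scalar reading, natural or artificial, into a train of stimulation pulses whose rate tracks the signal, a simple rate code. All parameters are made up.

```python
# Minimal sketch of a rate code: encode any scalar reading (light,
# pressure, or an entirely artificial sensor value) as pulse times
# whose frequency tracks the signal. Parameters are illustrative only.

def to_pulse_times(reading, max_reading, max_rate_hz=200, duration_s=1.0):
    """Return pulse times (seconds) whose rate is proportional to the reading."""
    fraction = min(max(reading / max_reading, 0.0), 1.0)   # clamp to [0, 1]
    rate = max_rate_hz * fraction
    if rate == 0:
        return []
    interval = 1.0 / rate
    return [i * interval for i in range(int(rate * duration_s))]

# A weak vs. a strong signal produce sparse vs. dense pulse trains.
print(len(to_pulse_times(0.1, 1.0)))   # -> 20 pulses in one second
print(len(to_pulse_times(0.9, 1.0)))   # -> 180 pulses in one second
```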

This principle has been shown to work extensively in neuroprosthetics research, from robot arms that feel touch to artificial retinas that let the blind see again.

If you are interested in the details, we have covered some of these innovations already:

I recommend checking out the video below to see the blindfolded tetraplegic patient correctly identify which finger is being touched. We covered this whole story here.

Ignore the DARPA propaganda in this video, just focus on the research

The research mentioned above is only a small sample of the breakthroughs in this area; implanting electrodes to send messages to the brain is also currently one of the most promising treatment options (deep brain stimulation therapy) for epilepsy and other neurological conditions.

However, to cover every recent breakthrough in this area would exceed the scope of the article.

Yet, there is one more stunning implication of this principle we need to address:

The prospect of new, additional senses

If all we need for sensing is electrical signals to reach our brain, and if we can already compensate for organ functions as complicated as vision, can we create additional senses?

Bats and whales can navigate by sonar, birds can sense the earth’s magnetic field, and snakes have infrared vision and can thus see heat. Why can’t we?

The short answer: We never evolved these senses because we did not need them for our survival. But does that imply we can never have them?

For some time now, the idea of upgrading our senses (seeing UV/IR light, hearing higher or lower frequencies) or implanting electronic devices that would give us new senses (sensing direction like a compass, or feeling the presence of magnetic fields or WiFi) has been floating around. Sounds crazy?

Well, it works. At least rudimentarily.

Liviu Babitz with his implanted North Sense. Image courtesy of Cyborg Nest

Mostly, these devices hijack one of our inborn senses, e.g. the somatosensory system, and stimulate it in order to generate an electrochemical activation pattern that our brain learns to recognize as reliable information. (You can check out this TED talk for more information.)
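As a hedged sketch of what such a mapping might look like in code (assuming a hypothetical read_heading compass function and a vibrate actuator, not Cyborg Nest’s actual firmware), the device could simply buzz whenever the wearer faces roughly north, so the skin’s nerves deliver a reliable, learnable electrochemical pattern:

```python
# Hypothetical sensory-substitution loop: vibrate when facing north.
# `read_heading` and `vibrate` stand in for real device functions.

NORTH_WINDOW_DEG = 15          # how close to north counts as "facing north"

def facing_north(heading_deg):
    """True if the heading is within the window around 0/360 degrees."""
    diff = min(heading_deg % 360, 360 - (heading_deg % 360))
    return diff <= NORTH_WINDOW_DEG

def substitution_step(read_heading, vibrate):
    """One polling step: stimulate the skin only while facing north."""
    if facing_north(read_heading()):
        vibrate()              # short buzz -> electrochemical signal in skin nerves

# Simulated example with stand-in functions.
substitution_step(read_heading=lambda: 8.0, vibrate=lambda: print("buzz"))    # buzzes
substitution_step(read_heading=lambda: 182.0, vibrate=lambda: print("buzz"))  # silent
```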

So clearly there are certain artificial inputs our brain can somehow learn to sense. However, while this speaks volumes about how well our brain’s pattern-recognition and generality “software” works (being able to take any kind of electrochemical signal and learn from it), there are serious doubts about the possibilities and limits of this “sensory substitution” approach.

Our brain’s architecture has been shaped to do computation on the electrical inputs from our senses, which is why we have specialized brain regions for hearing and seeing, motor function and speech, among others. The reason we cannot see infrared light has to do with the chemical properties of rhodopsin; yet even if we were able to stimulate the same retinal cells with tiny electrodes whenever infrared light hits our eyes, it is unclear whether we would ever see infrared the way snakes do, since our brain’s architecture might not be adequate to support this extra computation or the post-processing. Maybe we would just see all infrared light as different shades of red, because that is all our consciousness can understand?

After all, when we see, we do not see photons of different wavelengths (objective reality); what we see in our consciousness is colored pictures (virtual reality), decoded and stitched back together by our brain after the original photonic information was translated into electrical signals.

Now imagine something more abstract, like “feeling magnetism”: how would your higher cognitive brain functions develop a conscious experience of it, ad hoc and without millions of years of evolution?

We tend to comprehend in terms familiar to us; it is unknown how our minds would “decode and stitch together” an additional sense for our consciousness to understand. Imagine, for example, the difficulty of explaining vision to a conscious alien species that has never seen photons because it lacks eyes. Even if they shared similar brains, just giving them electrical shocks in response to photon wavelengths would not easily let them see, because their brain’s architecture was never built to support or project vision to their consciousness.

What this all boils down to is our lack of understanding for consciousness or mind, a problem that will likely not be solved scientifically for at least another few decades.

Does that mean that engineering true additional senses is impossible until then?

Not necessarily. Arguments from personal incredulity or lack of imagination are not logically valid to discredit the notion of additional senses. So far, we do not know either way. Also, there is one more principle of our brain we need to cover:

Principle 3: Redundancy

All stable biological systems are characterized by redundancy. This means that once any individual part of the system inevitably fails, something else can functionally compensate

In that sense, functional redundancy is an indicator and a feature of biological robustness.

The brain is a biological system that evolved over hundreds of millions of years. As part of this evolutionary struggle for survival, only brain architectures which were somewhat robust in performing core functions necessary for organism survival could withstand the tests of time.

Note: We have to use the unspecific term “brain architecture” to describe the very distinct, compartmentalized three-dimensional structure and arrangement of neurons and glial cells, which has known and unknown functional implications for how your brain works.

To phrase it differently, if certain brain functions were easily lost or prone to becoming dysfunctional, it is highly unlikely that those brain architectures would have persisted over millions of years. Since robustness is an important evolutionary parameter for survival, and biological robustness is often increased by redundancy, our brains are very likely redundant in certain aspects.

So do we have some experimental evidence that can be explained by redundancy?

Not surprisingly, the earliest and most conclusive evidence for redundancy comes from the study of brain damage and recovery, specifically stroke patients or accidents involving severe head injury.

It is well documented that having a stroke (a temporary loss of oxygen supply caused by a clogged artery, leading to cell death) can cause death or permanent functional impairment. Yet remarkably, 10–25% of stroke survivors are able to make a full recovery. Since strokes cause localized damage to a brain area serving a specific function, we would expect this function to be lost forever. Yet what we observe in recovering stroke patients is that the respective function (speech, motor control, facial recognition) can be regained, usually by some other brain region compensating. The reverse is observed with damage to our sensory inputs, for example the amputation of an arm or loss of vision. In this case, the intact brain regions responsible for motor control or vision have nothing to process, so they are taken over by nearby brain regions that do. This process is called cortical remapping. It is part of a field of research called neuroplasticity, an umbrella term for observations of how a brain can change its function over time.

If you ever wondered how blind people seem to be able to hear, sense and smell so much better than we do, part of it has to do with having more processing power available because their brain regions for vision processing have been repurposed to support other sensory information processing.

This repurposing capability falls under our broader term of redundancy (some scientists use the more specialized term degeneracy): the ability of structurally different elements to functionally compensate for a failing part.

How can technology make use of redundancy?

Imagine that we could build or grow brain tissue from scratch. Imagine further that we could connect it to our biological brain. Suddenly our brains would have more processing power without any predetermined functional role. We have observed in blind patients that once their visual processing regions lose functional utility, they can be repurposed to support other senses; it is conceivable that something similar would happen to our freshly connected additional brain regions. They might get taken over by proximal regions to perform processing.

While we are clearly in the realm of science fiction with this idea, scientists have already started working on something called a “neural lace”, an artificial microelectronic mesh that would allow us to connect our brain with an external computing device.

The concept behind a neural lace is a wireless mesh that can serve as an interface between an ultra-fine computing device implanted into the brain and the brain’s biological circuitry.

Schematic of a “neural lace” microelectronic device, which can be injected with a syringe directly into mouse brains. Liu J. et al., Nature Nanotechnology, 2015, and Fu T.M. et al., Nature Methods, 2016.

[…] I think the actual interface to the brain is so crude today, and it relies a lot on the power of the computing or signal analysis outside of the brain. What we’re trying to do is make an electronic circuit that can communicate neurally — […] a neural lace — and, even though it’s a man-made structure, looks to the biological system the same as the natural network. — Charles Lieber

So far, Lieber’s research group has shown that a microelectronic mesh can be made fine enough to be injected directly into the brain, and that mice can live with a neural lace without any adverse effects on survival. Furthermore, the neural lace was engulfed by biological tissue over time, becoming part of the mouse brain while still being able to record the electrochemical signals produced by nearby neurons. Yet we are still far from understanding the complexity of the recorded patterns, nor are we able to stimulate specific neurons with the neural lace.
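To give a sense of what “recording electrochemical signals” means downstream, here is a hedged sketch of a standard first analysis step for such recordings: detecting spikes as threshold crossings in a voltage trace. The trace below is simulated noise with injected spikes, not neural-lace data.

```python
# Toy spike detection by threshold crossing on a simulated voltage trace.
# The sampling rate, noise level and spike amplitudes are invented.
import numpy as np

rng = np.random.default_rng(2)
fs = 30_000                                  # samples per second (typical for spikes)
trace = rng.normal(0, 10, size=fs)           # 1 s of baseline noise, in microvolts
for t in (3_000, 12_000, 25_000):
    trace[t] -= 80                           # inject three negative-going "spikes"

sigma = np.median(np.abs(trace)) / 0.6745    # robust estimate of the noise level
threshold = -5 * sigma
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold)) + 1
print(crossings)                             # -> roughly [3000, 12000, 25000]
```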

Going back to theory, a neural lace would allow us to expand our brain, gaining additional brain regions or processing power that can be repurposed by our biological brain to do more computation for our mind.

Even if our consciousness is not able to make use of more “processing power”, implanting a neural lace coupled to a computing device could still be enormously beneficial for patients who lost brain regions to stroke or injury. It would be equivalent to a dialysis machine compensating for kidney function, a pacemaker compensating for a failing heart, or a prosthetic limb replacing a lost one.

So far, we are not there yet. There will be many problems for scientists to solve in the decades to come. But one thing has become clear recently: it is not impossible.

To sum up:

Pattern recognition is a prominent mode by which our brain processes information. Furthermore, pattern recognition can be taught to machines to help them understand us.

Generality describes our brain’s capacity to process varied information and transform it into a representation our consciousness can understand. The source of the information is arbitrary; it can be natural or artificial, which makes it machine-communicable.

Redundancy is a powerful way to increase robustness in biological systems. If we can increase the robustness of our brains by adding redundancy, we can expect a desirable increase in survival for the very thing that makes us who we are.

Finally, BCIs using these principles are already improving the quality of life for the disabled and the dreamers. Soon, BCIs will be able to make us smarter, deepen our senses or boost our memory as well. And who knows what else?

In any case, brain-computer interfaces are here to stay.

This finishes our exploration into the science behind brain-computer interfaces. I hope this series has been informative, fascinating and thought-provoking; it surely has been for me. Predicting the future of new technologies is hard, because we tend to overestimate the short-term capabilities while underestimating the long-term potential. We here at AdBioS will definitely keep an eye on BCIs.

This story is part of Advances in Biological Sciences, a science communication platform that aims to explain ground-breaking science in the fields of biology, medicine, biotechnology, neuroscience and genetics to literally everyone. Scientific understanding has too many barriers, let’s break them down!

Do you have any questions or inquiries? Why not ask a scientist?

You can also help us to improve by giving feedback. Your voice matters.
