BCI — Beyond Neural Signals

NeuroTechX Content Lab
7 min read · Feb 14, 2024


Picture yourself at a table, enjoying a meal with some friends.

You casually reach for a glass, your hand effortlessly grabs it, and you take a sip.

This seemingly simple movement relies on your brain’s ability to coordinate the action of up to 50 muscles in your arm and hand with exquisite precision.

Brain-computer interface (BCI) technology promises to restore the ability to take such simple yet important actions to those who have lost the ability to move due to injury or disease. By measuring and analyzing the brain activity patterns that would normally orchestrate the arm’s movement, BCIs can instruct a computer or robot to enact the user’s intent. Yet despite recent advances, BCIs cannot reproduce the effortless elegance with which able-bodied individuals move. This might soon change once BCI technology is integrated with other biophysical signals.

“Some of the bottlenecks faced by BCI technology can be bypassed to deliver swift benefits to those in need”

Challenges to overcome

Effective BCI applications must overcome three hurdles. First, they must measure brain activity that reflects the user’s intentions (access). Next, this signal must be analyzed to determine what the user’s intention was (interpretation). Finally, the appropriate command must be sent to the computer enacting the BCI user’s will (execution).

Bottlenecks, limitations, and inaccuracies at any of these stages greatly hamper a BCI’s ability to support complex applications, like mimicking the natural movements of the human arm. Pioneers in BCI technology are grappling with these challenges and investing considerable effort to enhance access to neural signals, refine interpretation algorithms, and develop precise execution systems. While advancements are underway, the path to a flawless BCI experience is still unfolding. There is, however, an exciting possibility for expediting this process: integrating BCIs with other existing data-rich technologies such as computer vision, eye tracking, or electrocardiography (ECG). In this way, some of the bottlenecks faced by BCI technology can be bypassed to deliver swift benefits to those in need.

Let’s clarify these hurdles with an analogy: the challenges of access, interpretation, and execution for BCIs parallel those encountered in GPS navigation. GPS relies on connecting with a sufficient number of satellites to determine your location on Earth (access). A digital map then provides limited route information, a drawing and some text, akin to the neural signals measured via BCI. To progress towards a target destination, a traveller must work out from these signals where they should go (interpretation) and then take the appropriate action at the correct time (execution). Failure at any of these stages would prevent them from reaching their destination.

Opportunities for improvement

Even when following a map, navigating solely on this limited information can be challenging. Most of us instinctively supplement the map by observing our environment: street names, landmarks, shops. This additional, readily accessible, and easy-to-understand information helps us accurately interpret the map’s signals and make the right navigation decisions.

Alternatively, a traveller could employ a self-driving car and eliminate the need for manual navigation altogether, relying on an intelligent system to handle the task instead. Either strategy, whether introducing complementary data or letting an autonomous, intelligent system do the heavy lifting, makes navigating to a destination much simpler than relying on a map alone and improves the chance of success.

“While advancements are underway, the path to a flawless BCI experience is still unfolding”

Let’s return to the scenario of an individual using a BCI-enabled prosthetic arm. Relying solely on brain activity measurements is akin to navigating using only a map — challenging, inefficient, and risky. Happily, there are two possible strategies for improving BCI applications:

  • First, we can look for non-neural physiological signals that carry intention information. This is similar to scanning our local environment for additional streams of information while driving.
  • Alternatively, we can endow the BCI system with enough intelligence to plan actions autonomously, similar to the self-driving car scenario.

Eye tracking

Eye tracking offers an effective means to augment BCI applications with non-neural data. Research shows that humans often look at objects they are planning to reach for and that this visual information improves the accuracy of reaching movements.

Eye tracking offers several advantages. As a technology it is well established and relies on inexpensive hardware. The eyes are always accessible, unlike the brain, which is ensconced in the skull. By following the user’s gaze, eye tracking can identify the object a person is targeting and thereby simplify the task of guiding, for example, a robotic arm to the target.

Additionally, such an arm could itself be endowed with cameras or other sensors to ‘perceive’ its environment and apply computer intelligence to autonomously plan movement towards the object that the user is looking at. While this is a challenging robotics problem, it is one on which researchers have made great progress.

Importantly, endowing a robotic arm with sensors and intelligence could lessen the burden of control on the BCI — and thereby increase its tolerance for inaccuracy. Instead of relying on brain activity to provide all the necessary information to execute the action, leveraging robotic sensing in this scenario means the BCI needs only to specify when to reach for the target; eye tracking would specify what to reach for, and the arm’s own intelligence would handle how to execute the reaching motion. The technology to make this vision a reality is available today, waiting only to be integrated into a BCI application.
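
To make this division of labor concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stub rather than a real BCI or robotics API; the point is only the control flow, with the BCI contributing a single low-bandwidth signal.

```python
# A minimal sketch of this division of labor, not a real BCI or robotics
# API: every function below is a hypothetical stub. The BCI contributes
# only WHEN to act, the eye tracker contributes WHAT to reach for, and
# the arm's own planner decides HOW to get there.
import random

def bci_intends_reach() -> bool:
    """Stub BCI decoder: a single bit of intent decoded from neural activity."""
    return random.random() > 0.95  # placeholder for a real intent classifier

def gaze_target() -> tuple[float, float, float]:
    """Stub eye tracker: 3D position of the object the user is fixating."""
    return (0.42, 0.10, 0.05)  # placeholder for a real gaze-to-object estimate

def plan_and_reach(target: tuple[float, float, float]) -> None:
    """Stub arm controller: autonomous, sensor-guided motion planning."""
    print(f"Reaching for object at {target}")

def control_loop(steps: int = 100) -> None:
    for _ in range(steps):
        if bci_intends_reach():            # WHEN: one bit from the BCI
            plan_and_reach(gaze_target())  # WHAT from gaze, HOW from the planner

if __name__ == "__main__":
    control_loop()
```

The appeal of this split is how little it asks of the BCI: decoding a single ‘go’ intention is a far easier problem than decoding a full movement trajectory from neural activity.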

Emotion signals

Similar approaches can be used to make BCIs more effective in other scenarios too. Recent work has shown great progress in making BCIs capable of reconstructing intended speech from brain activity in people who have lost the ability to communicate verbally. Communication relies on more than the semantic meaning of words: subtle hints of emotion convey the deeper intent behind them. Certain innovations aim to make reconstructed speech more expressive; however, emotion identification remains a challenging task when relying on neural activity alone.

Fortunately, a number of physiological biomarkers of emotional state can be leveraged to infer the emotional content of speech while relying on the brain for the semantic content in parallel. Pupil size, skin perspiration, and heart rate have all been shown to carry information about a person’s emotional state. Importantly, they can be easily measured today with inexpensive, readily available devices. For example, a physiological measurement system may determine from perspiration and heart rate that a person is angry, allowing a speech-reconstruction BCI to raise the volume of its generated audio and even stress particular words or expressions.
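
As a toy illustration of this idea, the sketch below maps heart rate and skin conductance to a crude arousal score and uses it to scale the volume of reconstructed speech. The thresholds and scale factors are invented placeholders, not clinically validated values.

```python
# A toy illustration with made-up thresholds; a real system would use
# calibrated, learned models. A crude arousal score is estimated from
# heart rate and skin conductance and used to scale the volume of
# speech whose semantic content comes from the neural decoder.

def arousal_level(heart_rate_bpm: float, skin_conductance_us: float) -> float:
    """Map two physiological readings to a 0..1 arousal score.

    The baselines and scale factors are illustrative placeholders,
    not clinically validated values.
    """
    hr = max(0.0, (heart_rate_bpm - 70.0) / 50.0)     # rises above resting heart rate
    sc = max(0.0, (skin_conductance_us - 2.0) / 8.0)  # rises with perspiration
    return min(1.0, 0.5 * hr + 0.5 * sc)

def render_speech(decoded_text: str, heart_rate_bpm: float,
                  skin_conductance_us: float) -> tuple[str, float]:
    """Pair the decoded text with a volume gain reflecting emotional state."""
    gain = 1.0 + arousal_level(heart_rate_bpm, skin_conductance_us)  # 1.0 to 2.0
    return decoded_text, gain

text, gain = render_speech("leave me alone", heart_rate_bpm=110, skin_conductance_us=9.0)
print(f"Render '{text}' at {gain:.2f}x volume")  # louder when the user is agitated
```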

“BCIs cannot yet deliver on the promise to fully restore crucial functions to those who suffer from neurological injury and disease”

Looking forward

In recent years, the neurotechnology sector has made great advances in a number of areas directly relevant to BCIs. We have safer and more reliable access to neural data, and we have improved algorithms to interpret neural activity. Despite all of this, BCIs cannot yet deliver on the promise to fully restore crucial functions to those who suffer from neurological injury and disease. Perhaps, though, they can be integrated with measures of biophysical signals that help overcome some of the challenges to effective application of BCIs. Doing so is not trivial: fusing information from different modalities, such as neural activity and heart rate, is itself a challenging machine learning problem.
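
As a rough sketch of what such fusion can look like, so-called late fusion combines per-window features from each modality into one input for a single classifier. The data below is purely synthetic, and the feature counts are illustrative assumptions, not a description of any deployed system.

```python
# A minimal late-fusion sketch on synthetic data: each modality is
# summarized as features over a common time window, concatenated, and
# fed to a single classifier. Feature counts and labels are invented
# for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_windows = 200

neural = rng.normal(size=(n_windows, 32))    # e.g., 32 band-power features per window
cardiac = rng.normal(size=(n_windows, 2))    # e.g., mean heart rate and its variability
labels = rng.integers(0, 2, size=n_windows)  # synthetic binary intent labels

# Late fusion: one combined feature vector per time window.
features = np.concatenate([neural, cardiac], axis=1)

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

A real system would further need to align modalities recorded at different sampling rates and weigh how much to trust each one, which is where much of the difficulty lies.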

Engineers are also becoming increasingly adept at crafting intelligent systems capable of executing complex actions to achieve goals set by their human users. Endowing the systems designed to execute BCI commands with similar intelligence would reduce the requirements placed on BCI platforms. But many obstacles stand in the way of making the integration of multiple sensors and intelligent systems a reality. Each additional sensor adds a complication for the user, who needs to ensure full functionality at all times. Furthermore, endowing BCI systems with sensors and the intelligence needed to plan actions autonomously requires a significant amount of computational power, likely exceeding what is currently available in portable, battery-powered devices.

These challenges notwithstanding, BCIs stand to benefit from integrating multi-modal and intelligent systems, leveraging the strengths of each to compensate for the limitations of current neurotechnology. We can make the BCIs of the future happen, today.

Written by Federico Claudi, edited by Muhammad Ali Haidar and Simon Geukes with AI-generated artwork prompt-engineered by Sophie Valentine.

Federico Claudi is a Postdoc Associate at MIT. His research focuses on engineering novel machine learning architectures for BCI decoding.

Muhammad Ali Haidar is a PhD student working on the origin of individuality at the Freie Universität Berlin. His focus is deciphering the differences in the neuronal circuitry involved in sleep cycle and memory.

Simon Geukes works at the UMC Utrecht Brain Center. His work revolves around BCI participants, fMRI, and ECoG.

Sophie Valentine has a background in experimental psychology and cognitive neuropsychology research, with degrees from Bristol University. Her work is focused on the intersection of tech-for-good, product, digital health, and neurotechnology.


NeuroTechX Content Lab

NeuroTechX is a non-profit whose mission is to build a strong global neurotechnology community by providing key resources and learning opportunities.