How can BCIs transform the lives of disabled people?

Youssef Kusibati
12 min read · Oct 25, 2023


Let me tell you Adam’s story…

He is a man in his 60s who had been fascinated by technology from a very young age, back in the 70s. Suddenly, he and his family started noticing strange changes in his behavior: clumsiness when gripping objects, frequent tripping, and broken speech. They didn’t know what was happening, but they grew worried and went to their family doctor, who immediately referred them to a neurologist.

Unfortunately, after multiple tests, the neurologist delivered the news. Adam was diagnosed with amyotrophic lateral sclerosis, or ALS: a progressive neurodegenerative disease, meaning the symptoms worsen over time. It affects nerve cells in the brain and spinal cord, causing muscle weakness and atrophy that lead to communication, mobility, and cognitive impairments.

A hypothetical image of Adam at the neurologist

Sad 😔 … but Adam had hope with technology!

Adam’s childhood was filled with brilliant technological inventions, many of which caught his attention at the time; he remembered listening to his favorite rock band on his cassette player and the great excitement surrounding the cell phone and the floppy disk.

However, there was another invention that might change his life and help him through his new disability. A researcher named Jacques Vidal coined the term BCI, or “brain-computer interface”: a technology with the premise of allowing us to control computers with simply… a thought.

This premise might seem like gimmicky sci-fi technology, but this invention would change the lives of Adam and millions of other people with different disabilities.

Legendary physicist Stephen Hawking, who had ALS, famously communicated with the world through an assistive speech device (controlled by a cheek muscle rather than a true BCI).

But… How will that help Adam?

Due to his disability, Adam’s communication, mobility, and cognitive abilities will worsen over time as the damaged nerve cells, or neurons, in his brain and spinal cord cut off or weaken the connection between the brain and the body’s muscles.

Imagine neurons as postmen who deliver messages and letters to different people in a city. Similarly, neurons deliver messages from the brain to the different muscles of the human body, which are essential for bodily functions.

Comparing neurons with postmen

Thus, BCIs were the family’s go-to device to help Adam live a full, productive life.

Brain-computer interfaces will act as the alternative route to neurons for the brain signals required to perform different vital functions of Adam’s daily life.

They come in many different types and forms suiting different purposes. For Adam, communicating with his family and loved ones was the priority.

What actually happened to Adam?

During Adam’s service in the army, he was put in circumstances where he was in direct contact with pesticides and herbicides, most prominently dichlorodiphenyltrichloroethane (DDT), which has long been banned due to its harmful effects on human health and the environment. DDT exposure was a major contributing cause of Adam’s ALS.

What… How would they cause ALS!?

As I mentioned earlier, amyotrophic lateral sclerosis, or ALS, is caused by damaged nerve cells in the brain or spinal cord, but how does that happen?

In Adam’s scenario, DDT could have contributed to his ALS in two ways: by causing genetic mutations that lead to the factors behind ALS, or by directly triggering those factors.

Imagine neurons as cops at an event. When everyone passes through without trouble, the cops do their job well. However, when many people start behaving abnormally, or when there are more people than they can handle, the cops get overwhelmed and stop doing their job.

Similarly, imagine proteins as those people: when proteins don’t fold properly, they can’t function properly, and they accumulate in different cells, including neurons, disrupting their function.

Misfolded proteins can lead to various diseases, including the one Adam is experiencing: ALS.

DDT can cause protein misfolding in two ways. First, it can mutate the genes responsible for folding proteins, like HSPA5, because DDT is a genotoxin, meaning it can damage DNA. For instance, DDT can cause the addition of a methyl group to DNA, a form of alkylation, leading to mutations that alter the DNA’s structure. Second, DDT can bind to proteins while they are folding, disrupting their normal structure.

Now, imagine a neuron as a mobile phone: if the battery, the phone’s powerhouse, starts breaking down, the whole phone won’t function properly.

Mitochondrial dysfunction, damage to the cell’s powerhouse, the mitochondria, can also lead to ALS and might have been caused by Adam’s direct contact with DDT.

For example, DDT can bind to enzymes that are vital to the electron transport chain, a series of chemical reactions in the mitochondria that produce energy.

When the mitochondria in neurons aren’t functioning properly, they can produce byproducts that damage the neuron.

Different effects of mitochondrial dysfunction on human cells.

Other factors, like excitotoxicity and neuroinflammation, might also have led to Adam’s ALS; DDT can cause them either genetically or directly, by damaging neurons structurally, functionally, or epigenetically.

Okay… There was damage, but to which neurons exactly?

First, let’s compare the brain to a big company: each department is responsible for something specific. The “department” primarily responsible for speech is the primary motor cortex, located in the precentral gyrus, which handles voluntary muscle movement, like the movements that produce speech.

The speech signals are then transmitted from the primary motor cortex to different parts of the brain involved in speech production:

  • Broca’s area: the production of language
  • Wernicke’s area: the comprehension of language
  • The supplementary motor area: the coordination of complex movements
  • The basal ganglia: the initiation and control of movement
Figure showing: the primary motor cortex, Broca’s area, and Wernicke’s area

In ALS speech impairment, the neuron damage mainly occurs to either:

  • Upper motor neurons, which send the signals they receive from the precentral gyrus to the lower motor neurons.
  • Lower motor neurons, which transmit the signals they receive from the upper motor neurons to the muscles.

Adam’s speech impairment was caused by damage to a specific group of lower motor neurons in the brainstem called bulbar neurons. Those neurons transmit the signals they receive to the muscles involved in speech, like the jaw muscles, lips, and tongue.

Figure showing how lower motor neurons deliver signals to different muscles.

BCIs overcoming speech disabilities…

When exploring their options, Adam and his family found out that they’d use a non-invasive BCI, where the device is worn rather than implanted in Adam’s head through surgery. Non-invasive BCIs are currently the most widely used, safest, and most researched type; however, they aren’t the best in terms of the resolution of brain signals because they aren’t as close to the brain.

Wait… what does resolution mean in this context? Is it like videos?

Resolution in BCIs is split into two types: spatial resolution, the ability to detect in which parts of the brain changes in activity occur, and temporal resolution, which indicates how fast the BCI can measure changes in brain activity.

(📈Closeness to the brain = 📈Resolution = 📈Invasiveness = 📈Risk)

There are many types of non-invasive BCIs, like the “old” P300. However, they chose one of the most promising types: the motor imagery-based BCI (MI BCI). Simply put, Adam will wear an EEG cap that records the bioelectrical signals of his brain while he imagines moving a part of his body, like his mouth; those signals will then be translated into the words he wants to say.

The patient wearing an EEG cap.

“Okay… how do they work though?”

Let’s picture Adam’s body as a symphony orchestra and the MI BCI as its conductor. The bioelectrical signals, electrical signals generated by all of our cells, mainly by the brain’s complex self-regulatory system (a network of regions and circuits that work together to regulate our thoughts, emotions, behaviors, adaptation to change, etc.), are the conductor’s hand movements and facial expressions used to communicate with the musicians: just as the conductor directs the orchestra, bioelectrical signals regulate and enable communication between the different cells and organs in our bodies.

An image of the brain showing the self-regulatory system mentioned above, consisting of regions like the hippocampus, prefrontal cortex, amygdala, and striatum.

The conductor uses his ears to ensure that the orchestra is playing well. Similarly, an MI BCI uses an EEG device with electrodes (metal or plastic contacts with a conductive gel) to identify those signals by measuring the change in electrical potential: the difference in electrical charge inside and outside a cell, caused by the movement of ions (charged particles) across cell membranes, which is essential for cell communication, muscle contraction, and more. Afterwards, the EEG device amplifies those signals, which are represented as waves, and displays them on a computer screen.
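The idea of a channel as an amplified potential difference can be sketched in a few lines. This is a toy simulation, not real hardware code: the 10 Hz wave, the 20 µV amplitude, and the amplifier gain are all illustrative numbers.

```python
import numpy as np

FS = 250                                 # samples per second (illustrative)
t = np.arange(0, 1, 1 / FS)              # one second of recording

# Raw scalp potentials are tiny, on the order of tens of microvolts
electrode = 20e-6 * np.sin(2 * np.pi * 10 * t)  # a 10 Hz wave, ~20 uV peak
reference = np.zeros_like(t)                     # reference electrode

raw_channel = electrode - reference      # one EEG channel = a potential difference
amplified = raw_channel * 1e5            # the amplifier boosts it to a usable scale
```

The amplified waveform is what gets digitized and drawn on the screen.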

A sample image showing brain signals presented on a computer screen.

This is only the first step in any BCI system: signal acquisition. What is next?

A conductor might hear noise from the crowd, for example, so he needs to focus only on the music coming from the orchestra to be able to guide it. Similarly, a BCI should remove noise from the signals it measures in a process called signal preprocessing, where unrelated electrical signals like cardiac activity are removed.
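A common preprocessing step is band-pass filtering: keeping only the frequency range that matters for the task and discarding slow drift and fast noise. The sketch below is a minimal example, assuming a 250 Hz sampling rate and the 8–30 Hz (alpha/beta) range relevant to motor imagery; the cutoffs are illustrative choices, not a prescription.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (illustrative)

def preprocess(raw: np.ndarray, low: float = 8.0, high: float = 30.0) -> np.ndarray:
    """Band-pass filter a 1-D EEG signal to suppress drift and high-frequency noise."""
    b, a = butter(N=4, Wn=[low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, raw)  # zero-phase filtering avoids shifting the waves in time

# Example: a 10 Hz "brain" wave buried in slow drift and fast interference
t = np.arange(0, 2, 1 / FS)
raw = (np.sin(2 * np.pi * 10 * t)          # the signal we want
       + 2 * np.sin(2 * np.pi * 0.5 * t)   # slow drift
       + 0.5 * np.sin(2 * np.pi * 60 * t)) # mains-like interference
clean = preprocess(raw)                     # mostly the 10 Hz component survives
```

Real pipelines add further steps (artifact rejection, re-referencing), but the principle is the same: keep the orchestra, drop the crowd.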

Till now, the device Adam is wearing only has information in the form of waves, what should a BCI do next?

After focusing on what is played, the conductor identifies and then “extracts” information about the tempo, dynamics, and articulation of the music. In Adam’s MI BCI, the next step is to extract certain features from the bioelectrical signals captured to classify those signals into different words.

There are a lot of features that could be extracted, so let’s be specific and see which features are essential to help Adam “speak again!”

Features extracted will be related to different domains like time, frequency, time-frequency, spatial, and connectivity.

Wait… what do those domains mean?

  • Time domain: measures of the change in amplitude and variability of the brain signals over time.
  • Frequency domain: how much of the brain signal falls into each brain-wave frequency range (frequency: the number of cycles per unit of time, measured in hertz, Hz): delta (<4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (13–30 Hz), and gamma (>30 Hz).
Graph showing different brain signal frequencies.
  • Time-frequency domain: how much energy a brain signal carries in relation to both time and frequency.
  • Spatial domain: which parts of the brain the energy of each signal comes from.
  • Connectivity: representing signals as connections between different parts of the brain.
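The frequency-domain idea can be made concrete: take the signal’s spectrum and measure how much power falls into each of the canonical bands listed above. This is a minimal sketch; the band edges follow the ranges in the list, and the pure 10 Hz test signal is illustrative.

```python
import numpy as np

FS = 250  # sampling rate in Hz (illustrative)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(signal: np.ndarray) -> dict:
    """Return the mean spectral power of `signal` in each EEG frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz sine should put almost all of its power in the alpha band
t = np.arange(0, 4, 1 / FS)
powers = band_powers(np.sin(2 * np.pi * 10 * t))
```

Feature vectors like `powers` are exactly the kind of input the later classification stage consumes.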

The features we will be looking for are those associated with motor imagery, so let’s list two of them and associate them with their respective domains:

  • Sensorimotor rhythms (SMRs) are based on signals read from the sensorimotor region of the brain, which is responsible for processing and integrating sensory information and motor commands. As the name suggests, it is a rhythm, so the extracted feature will be the changes in the sensorimotor region’s signals. The frequency of those signals lies in the alpha and beta ranges. Thus, this feature, which falls under the frequency domain, helps identify which words Adam imagined saying.
  • Common spatial patterns (CSP) associate different brain signals with different parts of the brain, producing features that help differentiate between tasks. When Adam imagines saying “hello,” for example, the CSP uses a spatial filter to emphasize the features of the measured signals that differentiate saying “hello” from saying “bye,” and to de-emphasize what makes them similar. The features it emphasizes or de-emphasizes could be the amplitude and frequency of the measured brain signals. Additionally, it can identify which regions of the brain are involved by detecting the synchronization of neurons between two different brain regions when Adam imagines saying a specific word.
Different common spatial patterns might be extracted as features of the brain signals.
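Under the hood, classic CSP solves a generalized eigenvalue problem on the covariance matrices of the two classes of trials: the resulting spatial filters maximize variance for one imagined word while minimizing it for the other. Here is a compact sketch; the random trial data is purely illustrative, with class A given an artificially strong channel 0 so the first filter has something to find.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a: np.ndarray, trials_b: np.ndarray) -> np.ndarray:
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters (one per row), sorted by discriminability for class A."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    cov_a, cov_b = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: cov_a @ w = lambda * (cov_a + cov_b) @ w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)[::-1]  # eigenvalues near 1 favor class A
    return eigvecs[:, order].T

rng = np.random.default_rng(0)
# 10 trials per class, 4 channels, 200 samples; class A has a loud channel 0
a = rng.normal(size=(10, 4, 200)); a[:, 0] *= 3
b = rng.normal(size=(10, 4, 200))
W = csp_filters(a, b)  # W[0] should weight channel 0 most heavily
```

In practice the filters at both extremes of the eigenvalue spectrum are kept, and the log-variance of the filtered trials becomes the feature vector.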

All of those features are essential in the next step of the BCI system: classification.

Feature extraction is an important step in improving the efficiency and accuracy of classification. Picture classification as the conductor’s hand movements and facial expressions, which he uses to guide the orchestra.

Classification results in knowing what the brain signals obtained mean; in Adam’s case, what word he imagined saying.

In the classification stage of a BCI system, many different types of machine learning (ML) algorithms could be used; however, supervised ML algorithms are the most common.

Don’t be surprised… there are different types of supervised ML algorithms, so let’s discuss an example!

Adam’s MI BCI will use a supervised learning neural network (NN) to classify the extracted features.

Picture our supervised learning neural network as a child. We could show the child an apple and an orange as “inputs” multiple times and tell him “this is an apple” or “this is an orange,” thereby “labeling” those objects. After showing them to him many times, “training” him, he will be able to differentiate between the two fruits without us telling him anything, thus “outputting” the names of the fruits after being trained.

An AI-generated picture showing a child differentiating between apples and oranges.

So… a supervised learning neural network is:

A machine learning algorithm that performs specific tasks by being trained on a labeled data set.

In a neural network, there are three main kinds of layers: input, hidden, and output. Before using the neural network to interpret which words Adam intended to say, we need to train it.

For example, to train it, we record somebody’s brain activity while they imagine saying specific words like “hello.” The recording then goes through the steps we discussed earlier: signal acquisition, preprocessing, and feature extraction. We collect the extracted features in a training dataset, label them as features associated with saying “hello,” and use those labels as the NN’s target output during training.

Here, the “magic” happens…

The neural network’s hidden layers will find complex mathematical relationships between the input features, like the SMRs and CSPs, and how they correspond to the word “hello” or other words. We repeat this many times for multiple words to train the NN.

Before implementing it in Adam’s MI BCI, it needs to be tested by giving it unlabeled extracted features and checking whether it correctly identifies the words being said.
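The train-then-test loop above can be sketched with scikit-learn’s `MLPClassifier` standing in for the neural network. The “features” here are random stand-ins for real SMR/CSP features (well-separated clusters, so the toy problem is learnable), and the two imagined words are just example labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# Synthetic training set: 6-dimensional feature vectors per imagined word
X_hello = rng.normal(loc=1.0, size=(50, 6))   # features when imagining "hello"
X_bye = rng.normal(loc=-1.0, size=(50, 6))    # features when imagining "bye"
X = np.vstack([X_hello, X_bye])
y = ["hello"] * 50 + ["bye"] * 50             # labels: the word each trial encodes

# A small neural network with one hidden layer learns feature-to-word mappings
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)  # training on the labeled dataset

# Testing: feed it unlabeled features and see which word it predicts
X_test = rng.normal(loc=1.0, size=(5, 6))     # unseen "hello"-like trials
predicted = net.predict(X_test)
```

With real EEG features the classes overlap far more, which is why feature extraction quality matters so much for classification accuracy.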

Finally, the identified words would be transferred to a speech synthesizer producing the sounds of the intended words, as the output.

“Adam would take forever before speaking a word!!” Is this true though?

First, I need to clarify that MI BCIs for speech recognition are still under development and research, and their use faces many challenges, like the spontaneous nature of the EEG signals used in motor imagery. However, it is still a technique with great potential to help people with speech impairments: based on some research, an MI BCI would take from milliseconds to a few seconds to process the information and output speech.

That was a lot!! Here is a tl;dr:

Adam plans to use a motor-imagery BCI to help him overcome his speech impairment. He simply imagines saying the word and the BCI goes through five steps within a couple of milliseconds to a few seconds to recognize and say the word he imagined saying out loud.

The five steps are:

Signal acquisition: obtaining the brain activity signals.

Preprocessing: cleaning the brain signals and preparing them for the next step.

Feature extraction: obtaining important features from the signals to help identify what each signal means.

Classification: using ML to identify the intended action or task.

Output: sending it to the speech synthesizer that produces the sound of the word he wanted to say.

An image summarising the BCI system (Feature translation corresponds to feature classification).

Adam was able to obtain this device; however, his life is still drastically different!!

BCIs can also help Adam overcome his other impairments caused by ALS! Believe it or not, using the same type of BCI I described earlier, motor imagery-based BCIs, he could control his wheelchair, a wearable exoskeleton, or any other type of assistive technology to move around independently.

Video demonstrating a BCI-controlled exoskeleton.

In the future, he could use more invasive BCIs: either a semi-invasive one (implanted inside the skull but resting on the surface of the brain rather than inside it), like Corticom, or the more futuristic fully invasive option from Neuralink, Elon Musk’s company, which has finally been approved for clinical testing.

ALS patients like Adam aren’t the only ones who could benefit from BCIs. People with different impairments from different causes could benefit too: autistic people looking to improve their communication, paralyzed people, visually impaired individuals, and more.

Brain-computer interfaces are gaining popularity and use in multiple fields beyond health, and they will have a huge impact on humanity going forward.


Youssef Kusibati

A highly-ambitious tech enthusiast with projects in the domains of artificial intelligence, brain-computer interfaces, and virtual reality.