Brain Computer Interfaces: How Data Science Meets Neurotechnology

Abstract

Picture a world where mind reading is possible. Telepathy isn’t just a magical ability written in fantasy books, but an actual ability people can achieve. Imagine as well a place where mind control is not another piece of fantasy, but a real power people can exercise to move things with their minds. Although telepathy and mind control are not real, the closest thing to these abilities lies in the field of neurotechnology. Neurotechnology is the conjunction of fields like engineering and neuroscience, showcasing how technology can be used to solve neurological problems. One particular subsection of neurotechnology is Brain Computer Interfaces, also known as BCIs. BCIs are technologies that communicate with the brain and produce an action based on neurological data. The data can be collected using technologies such as EEG (electroencephalography), which reads neuronal activity from a participant’s scalp through electrodes. The earliest BCI was first tested in the 1970s, and the field has been developing ever since. BCIs hold high hopes for treating neurological disabilities and for other areas of medicine (Shih et al., 2012). They also suggest interesting possible developments in how we interact with computers.

A BCI takes an individual’s neurological data and produces an appropriate response. Successfully developing a BCI, however, requires many machine learning steps: cleaning the data, extracting certain features from it, encoding neuronal data into readable inputs for the interface, and developing an iterative algorithm that produces an appropriate output. A BCI works in these simplified steps: first, the signal is picked up from the brain. Next, pre-processing methods such as artifact detection and removal transform the neurological data into useful information for further analysis. Afterwards, the necessary features are extracted from the cleaned data and classified as inputs for the connected device, which uses those inputs to perform an action. The device then gives feedback to the user, either through the individual experiencing the action or through the interface itself, and the cycle starts over. Common examples of BCIs range from moving robotic limbs to BCI spellers, interfaces that allow individuals to communicate by looking at certain letters on a screen to type a message.
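The acquire, preprocess, extract, classify cycle described above can be sketched in code. The following is a minimal, hypothetical Python sketch; the threshold values, the power feature, and the command names are illustrative assumptions, not part of any real BCI system.

```python
import numpy as np

def preprocess(raw, threshold=100.0):
    """Artifact removal: drop epochs whose peak amplitude exceeds a rejection threshold."""
    return [ep for ep in raw if np.max(np.abs(ep)) < threshold]

def extract_features(epochs):
    """Toy feature extraction: mean signal power per epoch."""
    return [float(np.mean(ep ** 2)) for ep in epochs]

def classify(features, cutoff=1.0):
    """Toy classifier: map each feature to a hypothetical device command."""
    return ["close_fist" if f > cutoff else "open_hand" for f in features]

# Hypothetical recording: three one-second epochs of simulated EEG
rng = np.random.default_rng(0)
raw = [rng.normal(0, 1, 256) for _ in range(3)]
commands = classify(extract_features(preprocess(raw)))
```

In a real system the classifier's output would drive the device (here, a robotic hand), and the user's observation of the movement closes the feedback loop.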

Figure 1: An example of how a robotic arm could look. When this arm is used in a BCI system, it functions as the device that uses information from neuron recordings to perform the correct action, such as making a fist or extending the arm.

In the example of a robotic arm, the arm takes inputs from the individual via an EEG and electrodes implanted inside the brain, resulting in a specific movement of the robotic arm (Vilela & Hochberg, 2020). The data picked up by the EEG and the electrodes is then decoded into a command for the robotic limb to carry out. Translating brain waves into a command relies on various principles of data cleaning and processing to make the data readable to the device. The robotic arm takes in the decoded command and performs the action, and the carried-out action serves as feedback for the individual: whether the action was performed correctly or not.

y = f(x, θ)

The equation that summarizes the process from brain signals to the output of a BCI.

BCIs process brain signals through a simple equation, y = f(x, θ), where “x” refers to the features of brain activity that are picked up (Iturrate et al., 2020). Through a set of parameters “θ”, such as the frequency of brain activity, the result “y” is the physical output of the BCI.

The result can be a continuous value or a command, corresponding to either a regression or a classification problem. A continuous value is numerical, such as a certain angle to rotate the robotic arm; a command is a discrete action, such as moving the arm. A very important step for obtaining good neurological data is processing it so that only the important information from the brain remains.
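As a toy illustration of y = f(x, θ), the sketch below shows how the same linear score over a feature vector can yield either a continuous value (regression) or a discrete command (classification). The parameter values, features, and command names are invented for the example.

```python
import numpy as np

theta = np.array([0.5, -0.2, 0.1])   # hypothetical parameters θ, one weight per feature

def f_regression(x, theta):
    """Continuous output: e.g. a joint angle for the robotic arm."""
    return float(x @ theta)

def f_classification(x, theta):
    """Discrete command: threshold the same linear score."""
    return "move_arm" if x @ theta > 0 else "rest"

x = np.array([1.0, 2.0, 3.0])        # hypothetical feature vector x
angle = f_regression(x, theta)       # continuous value (here ≈ 0.4)
command = f_classification(x, theta) # discrete command
```

Real decoders replace the hand-picked weights with parameters learned from training data, but the input-to-output shape of the problem is the same.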

Recordings from the brain pick up all available information, irrespective of whether the data is helpful for the device to process. Because of this, the data needs to be filtered and cleaned to include only the information necessary for the task at hand. Picking up data on the individual’s breathing, for instance, would not be important when gathering data to control a robotic arm. For this reason, feature extraction and selection play a big role in gathering clean data that works well for the interface. Feature extraction is a method by which signals are processed to pick out certain features of the brain signals and decode them into useful commands that the interface can use to perform an action (Aggarwal & Chugh, 2019). The data usually contains a time component (brain waves examined over a period of time), which requires signal preprocessing and feature extraction before it can be analyzed.

Signal Preprocessing

Signal preprocessing serves the purpose of improving the quality of the data received. When brain readings are taken with tools like an EEG, the machine picks up both the information needed for processing (the signal) and background information from the machine and everything else (noise) (Aggarwal & Chugh, 2019). Preprocessing increases the signal-to-noise ratio, allowing for cleaner, better quality data. It is usually performed before extracting features in order to get a clear picture of the data first; in some cases, this step can be skipped if the data is relatively clean (Iturrate et al., 2020). There are three general steps for preprocessing: artifact detection and removal, frequency filtering, and spatial filtering.

A. Artifact Detection and Removal

An artifact, in terms of brain signals, is any recorded signal that does not come from the brain. When using devices like an EEG, the signals are collected from a number of channels, each picking up brain signals from a particular region on the head. Some channels also tend to pick up irrelevant information, such as activity from the eyes. Because of this, an important step in signal preprocessing is removing these artifacts to make the data more usable. A common way to do this is dropping the signals that include artifacts, typically via threshold rejection or z-score artifact rejection, among many other methods. Threshold rejection sets a specific threshold for the brain data, and whenever the data passes that threshold, the data is excluded from future use (Iturrate et al., 2020). The z-score artifact rejection method follows the same format: instead of removing data that exceeds a fixed threshold, data is removed if its z-score is greater than a certain value. In the equation below, “x” is a brain recording taken from a specific channel “c”, “μ” is the mean of that channel, and “σ” is the standard deviation of channel “c”.

z = (x − μ) / σ

Equation for the z-score artifact rejection method, which follows the z-score format: the brain recording “x” is normalized by the mean and standard deviation of the channel it is taken from.
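A minimal sketch of z-score artifact rejection, assuming a single channel stored as a NumPy array; the cutoff of 3 standard deviations is a common but arbitrary choice.

```python
import numpy as np

def zscore_reject(channel_data, z_max=3.0):
    """Drop samples whose z-score magnitude exceeds z_max."""
    mu = channel_data.mean()        # channel mean μ
    sigma = channel_data.std()      # channel standard deviation σ
    z = (channel_data - mu) / sigma
    return channel_data[np.abs(z) <= z_max]

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 1000)
x[500] = 50.0                       # inject an eye-blink-like artifact
clean = zscore_reject(x)            # the spike is removed
```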

B. Frequency Filtering

The neuronal information used for BCIs usually takes the form of brain waves, where each type of brain wave is defined by its frequency range. There are five characteristic brain waves: alpha, beta, gamma, delta, and theta, where each is present at specific times and falls into its own frequency range (Koudelkova & Strmiska, 2018). Filtering by frequency involves creating a bandpass filter that amplifies the prominence of particular brain waves while diminishing the effect of those that don’t pass the filter (Iturrate et al., 2020). Depending on the method chosen for collecting data, the filters would use different frequencies depending on the characteristic being focused on, such as detecting an action potential.
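A bandpass filter of this kind can be sketched with SciPy. The sampling rate and band edges below are illustrative; 8–12 Hz roughly corresponds to the alpha band.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low, high, order=4):
    """Butterworth bandpass: keep only components between low and high (Hz)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)   # zero-phase filtering

fs = 256                            # hypothetical sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
# A 10 Hz alpha-band oscillation mixed with a 60 Hz line-noise component
signal = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
alpha = bandpass(signal, fs, 8, 12) # the 60 Hz component is strongly attenuated
```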

C. Spatial Filtering

Spatial filtering is used to remove artifacts in the brain data, thereby improving the signal-to-noise ratio for the channels of importance. This filtering is done through a linear transformation of the data across the channels of the device. These techniques fall into three categories: re-referencing, data projection filters, and discriminative filters. Since most brain recordings of electrical activity measure the difference between a reference node and each node where brain activity is recorded, the choice of reference affects the quality of the data received. To fix this issue, re-referencing changes the point of reference for each piece of data (Iturrate et al., 2020). Data projection filters change the dimensions of the space where the data is represented; a common example is Principal Component Analysis (PCA), where the data is projected onto a space of greater or smaller dimension. Discriminative filters make use of labels, which helps later in classification when the data must be classified properly to determine the output of the BCI (the “y” term in y = f(x, θ)).
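Re-referencing really is just a linear transformation across channels. The sketch below uses a common average reference, one standard choice of new reference point; the multi-channel data is simulated.

```python
import numpy as np

def common_average_reference(data):
    """Re-reference: subtract the instantaneous mean across channels
    from every channel (a linear spatial filter)."""
    return data - data.mean(axis=0, keepdims=True)

# Hypothetical recording: 4 channels x 512 samples, all sharing a
# common offset (e.g. drift at the original reference electrode)
rng = np.random.default_rng(2)
data = rng.normal(0, 1, (4, 512)) + 5.0
rereferenced = common_average_reference(data)  # shared offset removed
```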

Feature Extraction

The next step after signal preprocessing is feature extraction, where particular features are chosen and decoded from the cleaned data so that the technological interface can interpret the input. First, the channels to examine are selected, depending on the type of data used. Continuous data (such as the activity an EEG detects) is split into two categories, one of which is temporal features. Temporal features are the time windows in which an event occurs. A good example is the time it takes for an action potential to occur, since the event happens within specific time periods. A common practice for extracting temporal features is applying a bandpass filter centered on the frequency of the signal (Iturrate et al., 2020). Bandpass filtering only allows signals between certain frequencies, or a band, to pass through (Cheveigné & Nelken, 2019).
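Extracting temporal features starts with cutting a fixed time window around each event. A minimal sketch, assuming event onsets are already known in sample units:

```python
import numpy as np

def extract_epochs(signal, event_samples, fs, window_s=0.5):
    """Cut a fixed-length time window after each event marker."""
    n = int(window_s * fs)          # window length in samples
    return np.stack([signal[s:s + n] for s in event_samples])

fs = 256
signal = np.sin(2 * np.pi * 10 * np.arange(0, 4, 1 / fs))  # 4 s of simulated EEG
events = [0, 256, 512]              # hypothetical event onsets (in samples)
epochs = extract_epochs(signal, events, fs)  # one 0.5 s window per event
```

Each row of `epochs` is then the raw material for temporal features such as latencies or peak amplitudes within the window.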

After understanding the different processes of a BCI, let’s take a closer look at two case studies where BCIs are used to differentiate between errors made by a person versus a machine, and to control a robot solely using eye movements.

Case Study: Brain Computer Interface Distinguishes Between Errors in Human-Agent Collaboration

With the rapid development of technology, it’s common to work with machines on a task. Through this collaboration, mistakes can happen, and knowing whom the individual identifies as the cause of an error will further improve the efficacy of human-machine teamwork. Researcher Dimova-Edeleva and colleagues devised a BCI to investigate situations where a human characterizes an event as a mistake and shows a change in brain waves called spontaneous error-related potentials (Dimova-Edeleva et al., 2022). The study examined whether mistakes made by the human or by the machine bring about different error-related potentials in the brain, investigating this through an EEG experiment in which eleven subjects each performed a collaborative task with an agent. The level of collaboration between the subject and the machine was categorized into two levels. This study includes examples of both spatial filtering, using the previously mentioned PCA method, and frequency filtering.

Frequency and Spatial Filtering

The study filtered out low-frequency drifts, noise, and muscular activity through a bandpass filter that allows only frequencies between 1 Hz and 40 Hz to pass through. For preprocessing, data recorded during breaks was removed as artifacts, as were channels with more noise. After cleaning, the data had 20 features for every electrode, placing it in a high-dimensional space. PCA, a form of spatial filtering, was then used to reduce the dimensions of the data by selecting components that capture most of its variance.
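The variance-based reduction described here can be sketched with NumPy's SVD. The feature matrix below is simulated, and the 95% variance cutoff is an illustrative choice, not the study's actual parameter.

```python
import numpy as np

def pca_reduce(features, var_keep=0.95):
    """Project features onto the smallest set of principal components
    that together explain var_keep of the total variance."""
    centered = features - features.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)          # variance ratio per component
    k = int(np.searchsorted(np.cumsum(explained), var_keep)) + 1
    return centered @ Vt[:k].T                     # reduced representation

# Hypothetical feature matrix: 50 trials x 20 features, but with only
# 3 true underlying dimensions of variation
rng = np.random.default_rng(3)
features = rng.normal(0, 1, (50, 3)) @ rng.normal(0, 1, (3, 20))
reduced = pca_reduce(features)   # at most 3 components survive
```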

Case Study: Brain-Computer Interface for Robot Control with Eye Artifacts

Most BCIs discard any data associated with eye movement. But in some cases, these eye movements become the signal, for participants who retain voluntary movement only in their eyes. With this lack of voluntary function in other extremities in mind, researcher Kaan Karas and his colleagues devised a BCI that lends power to the eyes. This case study follows individuals with neurodegenerative conditions who have full function of their eyes and eyelids but cannot produce any other movement with other body parts (Karas et al., 2023). This condition significantly affects these individuals’ ability to communicate and interact with the world around them. Karas introduces a BCI that allows these individuals to control a helper robot with their eyes, thereby regaining some level of function in their lives. While some research treats eye movements as artifacts because of the high amount of noise they contribute, the researchers found these eye movements to be a useful method of communication for BCIs. Eye movement data was collected using EEG. In their signal preprocessing, they performed feature extraction through Fourier transformation and used a threshold to filter the data.

For feature extraction, the researchers used a short-time Fourier transform (STFT) to determine the frequency band of the eye movements. This method determines the phase and frequency content of sections of the brain signal over the course of time. A filter was also applied to discard signals greater than 100 Hz or less than 0.5 Hz, a form of frequency filtering. After computing the Fourier transforms of different eye movements, they calculated a threshold to reduce the variability in the data for each channel, using the mean and standard deviation of each trial of their chosen channels:

Threshold = M + a · SD

Threshold equation that Karas and team use to calculate which signals are of importance for each channel.

“M” refers to the mean and “SD” to the standard deviation, with “a” being a value chosen based on the eye movement being investigated for each channel. The study conducted several experiments with their subject and the system, evaluating both the robot’s effectiveness at completing predetermined actions and the algorithm’s accuracy at classifying eye movements. Some of the actions the robot needed to complete were: moving to a predetermined location on a table, performing a certain dance, and sending an email. The robot proved successful at carrying out specific actions determined by the participants’ eye movements, although there was some difficulty in correctly classifying looking to the left. Nevertheless, this study showed that motor-impaired individuals can in fact control a robot using their eyes, allowing them to regain function in their lives.
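The STFT-plus-threshold idea can be sketched as follows. The simulated signal, the band edges, and the value of “a” are all invented for illustration and are not the study’s actual settings.

```python
import numpy as np
from scipy.signal import stft

fs = 256
t = np.arange(0, 4, 1 / fs)
# Simulated channel: a low-frequency "eye movement" burst on background noise
rng = np.random.default_rng(4)
signal = rng.normal(0, 0.1, t.size)
signal[512:640] += 5 * np.sin(2 * np.pi * 2 * t[512:640])  # 2 Hz burst

# Short-time Fourier transform: frequency content over sliding windows
freqs, times, Z = stft(signal, fs=fs, nperseg=128)
band = np.abs(Z[(freqs >= 0.5) & (freqs <= 5)]).sum(axis=0)  # low-band energy

# Threshold in the paper's form, M + a * SD, computed per channel;
# a = 1.0 is an illustrative value, not one of Karas et al.'s choices
a = 1.0
threshold = band.mean() + a * band.std()
detected = times[band > threshold]   # window times flagged as eye movement
```

Windows whose low-frequency energy clears the threshold are the candidate eye-movement events that the classifier then maps to robot commands.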

Conclusion

BCIs present a unique solution for many individuals who lose function to neurological conditions. With inventions such as the eye-movement-controlled robot from the case study above, these interfaces create a new place for neurotechnology in the economy. These novel solutions give individuals a way to continue to communicate, interact, and function with the world and others. They also provide a new way to study the brain: by learning how to use neuronal data in conjunction with a device, more of the brain’s inner workings are revealed. BCIs could also be implemented for public use, forever changing the way we interact with devices. In the near future, BCIs can move further into the medical sector by helping patients with neurological conditions, but they could also be used recreationally, as in a mind-controlled video game. The possibilities for this technological advancement are vast, and in the upcoming years, more magical abilities could become possible in the real world.
