How to Compare BCI Models?

The best way to benchmark your machine learning models for EEG decoding

NeuroTechX Content Lab
6 min read · Jun 22, 2023

Machine Learning for BCI

Building good models for translating brain signals is a hard task! Models are essentially abstractions of reality, directly linked to scientific hypotheses. In the context of brain signals, models are designed to extract features from temporal signals and establish a connection between the brain and machines. To achieve progress in artificial intelligence, it’s crucial to continuously test new models, especially in the Brain-Computer Interface (BCI) context.

Unlike other application areas, such as text and images, brain data has many peculiarities. Brain signals differ significantly from one person to another; each brain is unique. They also vary considerably with the specific task being performed: for instance, signals recorded during rest differ entirely from those generated during a motor imagery task. As a result, it is tough to develop models that perform consistently across subjects and paradigms.

Researchers must be clear about their methodological choices to ensure that a BCI model is evaluated fairly. For example, they must decide whether to train an individual model for each subject and session, a model that transfers across the sessions of one subject, or a general model that works across subjects.

To address these challenges, you can use the MOABB library, a NeuroTechX open-source project specifically designed to evaluate brain decoding models under each of these schemes.
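As a minimal sketch of how these methodological choices map onto the library, each evaluation scheme corresponds to a class in MOABB’s `moabb.evaluations` module:

```python
# A minimal sketch of MOABB's evaluation schemes; each class corresponds to
# one of the methodological choices above.
from moabb.evaluations import (
    WithinSessionEvaluation,  # one model per subject and session
    CrossSessionEvaluation,   # train on some sessions of a subject, test on others
    CrossSubjectEvaluation,   # train on some subjects, test on held-out subjects
)
```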

MOABB

Like many other open-source projects, MOABB started with the journey of a doctoral student. Alexandre Barachant was finishing his PhD in the French Alps and had developed a novel machine-learning approach based on Riemannian geometry. Applied to BCIs, this geometrical approach outperformed all known algorithms. However, demonstrating this within a publication proved to be an almost impossible task: existing results in the scientific literature were difficult or impossible to reproduce, a major challenge known as the reproducibility crisis. Alexandre instead opted to demonstrate the effectiveness of Riemannian BCI by participating in several data competitions, and he ended up winning every single one of them.

The lack of reproducible results in the literature established the need for a complete and sound benchmark for BCI models: a fair evaluation of the algorithms that have been published. With the support of the NeuroTechX community, Vinay Jayaram and Alexandre Barachant started the MOABB code repository and published a paper to document their evaluation.

The objective is that any paper claiming to improve the state of the art in BCI can use MOABB to support its results. After Alexandre and Vinay joined CTRL-Labs (which later became Meta’s Reality Labs), the MOABB code base continued to be improved by Sylvain Chevallier and Pedro L. C. Rodrigues to support more BCI datasets. Professor Chevallier took the lead on the project, and with continuous support from NeuroTechX, many global contributors shared a common enthusiasm for this benchmarking tool. The library is currently maintained by two PhD students, Bruno Aristimunha and Igor Carrara, together with Sara Sedlar and Sylvain Chevallier.

The Latest Release

The latest release of the MOABB library is packed with new features and improvements that make benchmarking machine learning models easier and more effective. These include:

Documentation enhancement

A significant improvement in the latest release is the enhanced documentation of the MOABB library. The new documentation includes tutorials, examples, and a re-organized content structure that caters to beginners with user-friendly pathways. With these additions, users can get started quickly and make full use of the library’s capabilities.

Benchmarking *everything*

To simplify the benchmarking process, we have improved the MOABB benchmarking script. Users can now run evaluations on all models and datasets with a single call, whether training an individual model for each subject or a general model that accommodates multiple subjects, across the Motor Imagery, P300, and SSVEP paradigms. We’d like to thank our open-source contributor, Divyesh Narayanan, for his invaluable contributions.
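For illustration, a minimal call might look like the sketch below; the exact arguments (evaluation and paradigm names, the folder layout for pipeline definitions) should be checked against the current MOABB documentation.

```python
# A hedged sketch of the moabb.benchmark utility: it loads pipeline
# definitions from a folder and evaluates them across datasets.
from moabb import benchmark

results = benchmark(
    pipelines="./pipelines/",        # folder of pipeline definition files
    evaluations=["WithinSession"],   # which evaluation scheme(s) to run
    paradigms=["LeftRightImagery"],  # restrict the run to one paradigm
    results="./results/",            # cache folder for raw results
    overwrite=False,                 # reuse cached results when possible
)
print(results.head())                # results come back as a pandas DataFrame
```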

Grid Search capability

We are pleased to announce enhanced hyperparameter tuning with grid search for EEG decoding, implemented by Igor Carrara. This improvement allows users to explore a range of parameters and identify the optimal combination for their machine-learning models, and the best model can be automatically saved for future use. An example pipeline with grid search is provided in the documentation to illustrate this feature.
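Because MOABB pipelines are scikit-learn estimators, one way to use this feature is to wrap a pipeline in scikit-learn’s GridSearchCV. The pipeline and parameter grid below are illustrative choices for this sketch, not the library’s shipped example.

```python
# A hedged sketch: a Riemannian tangent-space pipeline wrapped in a grid
# search, then evaluated with MOABB like any other pipeline.
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

from moabb.datasets import BNCI2014001
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import LeftRightImagery

pipe = make_pipeline(Covariances("oas"), TangentSpace(), LogisticRegression())
search = GridSearchCV(pipe, {"logisticregression__C": [0.1, 1.0, 10.0]}, cv=3)

evaluation = WithinSessionEvaluation(
    paradigm=LeftRightImagery(), datasets=[BNCI2014001()], overwrite=False
)
results = evaluation.process({"TS+LR (grid)": search})  # pandas DataFrame
```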

Deep learning state-of-the-art

In this release, we have incorporated state-of-the-art deep learning models with two different back-ends: PyTorch and TensorFlow. The TensorFlow implementation leverages the latest advancements in deep learning models, while the PyTorch integration, through braindecode, offers compatibility with scikit-learn. This new feature is the result of a joint effort by Igor Carrara and Bruno Aristimunha.
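As a rough sketch of the braindecode side (parameter names vary across braindecode versions, and the shapes below are hypothetical), a PyTorch model can be wrapped so that it behaves like a scikit-learn classifier:

```python
# A hedged sketch: braindecode's skorch-based EEGClassifier wraps a PyTorch
# model into a scikit-learn-compatible estimator that MOABB can evaluate.
import torch
from braindecode import EEGClassifier
from braindecode.models import EEGNetv4

# Hypothetical shapes: 22 channels, 2 classes, 2-second windows at 250 Hz.
model = EEGNetv4(in_chans=22, n_classes=2, input_window_samples=500)

clf = EEGClassifier(
    model,
    criterion=torch.nn.CrossEntropyLoss,
    optimizer=torch.optim.Adam,
    train_split=None,  # let the surrounding evaluation handle data splitting
    batch_size=64,
    max_epochs=100,
)
# `clf` can now be dropped into a MOABB evaluation like any other pipeline.
```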

Machine learning state-of-the-art

To challenge and engage PhD students, and to make life easier for neuro enthusiasts, we have implemented and revised our machine learning models for all paradigms. These models represent the state of the art and offer a benchmark for comparison. Notable additions include the Augmented Covariance Matrix, CCA, TRCA, and MsetCCA. We express our gratitude to Emmanuel K. Kalunga and Sylvain Chevallier for their contributions in this area.

Code Carbon integration

In addition to performance metrics, we have introduced an evaluation of the carbon footprint produced by each model. This consideration aligns with our commitment to sustainable practices. We thank Sylvain Chevallier and Igor Carrara for their efforts in integrating Code Carbon into MOABB.
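Under the hood this relies on the codecarbon package. The snippet below is a minimal sketch of the kind of measurement it provides, with the training step left as a placeholder; MOABB performs this tracking automatically during its evaluations.

```python
# A minimal sketch of a Code Carbon measurement; MOABB runs this kind of
# tracking internally and reports the footprint alongside accuracy metrics.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... fit and evaluate a model here ...
emissions = tracker.stop()  # estimated emissions in kg of CO2-equivalent
print(f"Estimated footprint: {emissions:.6f} kg CO2eq")
```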

Expanded Dataset Collection

Furthermore, we have significantly expanded the library’s dataset collection. This release introduces new P300 datasets and improves existing ones, enhancing the versatility and richness of available data. We extend our appreciation to Grégoire Cattan and Pedro L. C. Rodrigues for their contributions in this regard.
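As a hedged sketch of how such a dataset is accessed (BNCI2014009 is used here as one illustrative P300 dataset from the collection):

```python
# A hedged sketch: loading trials from a P300 dataset through MOABB's
# paradigm API, which returns data ready for scikit-learn pipelines.
from moabb.datasets import BNCI2014009
from moabb.paradigms import P300

paradigm = P300()
dataset = BNCI2014009()

# X: trials as (n_trials, n_channels, n_times); y: target/non-target labels;
# metadata: a DataFrame with subject and session information.
X, y, metadata = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, metadata["session"].unique())
```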

For more technical details, check our recent Twitter thread or GitHub. We extend our gratitude to the open-source community, especially Divyesh Narayanan, Robin Schirrmeister, Jan Sosulski, Pierre Guetschel, danidask, Yosider, Grégoire Cattan and Pedro L. C. Rodrigues, Emmanuel K. Kalunga, and Quentin Barthélemy for their valuable contributions.

If you are developing brain decoding models with EEG data, we invite you to use MOABB and contribute via GitHub, or reach out to one of the core team members to propose something new for the library!

Written by Bruno Aristimunha, edited by Emily Dinh and Muhammad Ali, with artwork by Lars Olsen.

Bruno Aristimunha is a PhD student in machine learning, deep learning, and electrophysiological signal processing, and has experience with open-source libraries for brain signals, including MOABB and braindecode.

Emily Dinh is a data specialist who works in the medical device industry and is part of a computational cognitive neuroscience lab. She is currently obtaining her MS in Artificial Intelligence.

Muhammad Ali Haidar is a PhD student working on the origin of individuality at the Freie Universität Berlin. His neuroethology focus is deciphering the differences in the neuronal circuitry involved in the sleep cycle and memory.

Lars Olsen is a regulatory medical writer. He works in the pharmaceutical industry writing submission-level documents, and has additional experience with medical devices and pharmacovigilance.

NeuroTechX Content Lab

NeuroTechX is a non-profit whose mission is to build a strong global neurotechnology community by providing key resources and learning opportunities.