BENDR for BCI: UToronto’s BERT-Inspired DNN Training Approach Learns From Unlabelled EEG Data

Synced · Published in SyncedReview · Feb 22, 2021

From predictive text to smart voice control, human-machine interfaces have improved significantly in recent years. Many scientists envision the next frontier as brain-computer interfaces (BCIs): direct neural connections that leverage the brain's electrical activity captured via electroencephalography (EEG) signals.

In a bid to develop deep neural networks (DNNs) that can better leverage the massive EEG datasets that have recently become publicly available for downstream BCI applications, a trio of researchers from the University of Toronto has proposed a BERT-inspired training approach as a self-supervised pretraining step for BCI/EEG DNNs.

In the paper BENDR: Using Transformers and a Contrastive Self-Supervised Learning Task to Learn From Massive Amounts of EEG Data, the researchers characterize the use of DNNs on raw EEG data for BCI applications as a challenging task that requires both extracting useful features from raw sequences and classifying those features. DNNs for BCI can struggle to learn good features because EEG data typically exhibits a large degree of variability both within and between users, and classification performance can vary greatly across model types.

Large language models such as BERT can learn to reconstruct language tokens when given specific surrounding contexts, and have inspired impressive advancements in natural language processing. The UToronto researchers asked: “Could an EM [encephalography modelling] be developed in this vein, using individual samples rather than tokens (i.e., direct application of BERT to raw EEG)?” Although EEG data’s highly correlated nature would seem to hinder such adaptation, the team believed that a method for interpolation could be learned via a similar design idea.

They adapted wav2vec 2.0, a self-supervised speech recognition approach similar in spirit to masked language models like BERT, using its self-supervised training objective to learn a compressed representation of raw EEG signals. As Synced previously reported, wav2vec 2.0 is a powerful framework for self-supervised learning of speech representations: speech audio is encoded via a multi-layer convolutional neural network, spans of the resulting latent speech representations are masked, and the masked sequence is fed to a transformer network that builds contextualized representations capturing information from the entire sequence.
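To make that pipeline concrete, below is a minimal PyTorch sketch of this wav2vec 2.0-style scheme: a stack of 1-D convolutions downsamples a raw signal into latent vectors, random contiguous spans of those latents are masked, and a transformer encoder produces contextualized outputs. The module names, layer sizes, and the zero-vector stand-in for a learned mask token are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvFeatureEncoder(nn.Module):
    """Stack of 1-D convolutions that downsamples a raw signal into latent vectors."""
    def __init__(self, in_channels=1, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, dim, kernel_size=10, stride=5), nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=8, stride=4), nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=4, stride=2), nn.GELU(),
        )

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.net(x).transpose(1, 2)     # -> (batch, steps, dim)

def mask_spans(z, mask_prob=0.1, span=5):
    """Zero out random contiguous spans of latent vectors (stand-in for a learned mask embedding)."""
    b, t, _ = z.shape
    mask = torch.zeros(b, t, dtype=torch.bool)
    starts = torch.rand(b, t) < mask_prob
    for i in range(b):
        for s in starts[i].nonzero(as_tuple=True)[0]:
            mask[i, s:s + span] = True
    z = z.clone()
    z[mask] = 0.0
    return z, mask

encoder = ConvFeatureEncoder()
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
)

raw = torch.randn(2, 1, 4000)                  # toy batch of raw 1-D signals
latents = encoder(raw)                         # (2, steps, 256)
masked, mask = mask_spans(latents)
context = transformer(masked)                  # contextualized representations
print(latents.shape, context.shape)
```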

The newly proposed framework closely follows that of wav2vec 2.0, where arbitrary EEG segments are encoded as a sequence of learned feature vectors dubbed BENDR (BErt-inspired Neural Data Representations). A transformer encoder maps the BENDR to new sequences that contain valuable features for targeted downstream tasks.
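The "contrastive" task in the paper's title refers to how such a model is trained: at each masked position, the transformer output should identify the original (unmasked) latent vector among distractors drawn from other positions. The sketch below is a schematic InfoNCE-style loss under that reading; the function name, negative-sampling scheme, and temperature are illustrative assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, targets, mask, num_negatives=10, temperature=0.1):
    """InfoNCE-style objective: at each masked step, the contextual output should
    be most similar to its original (unmasked) latent, with negatives drawn from
    other time steps of the same sequence.
    context, targets: (batch, steps, dim); mask: (batch, steps) bool."""
    b, t, _ = targets.shape
    losses = []
    for i in range(b):
        for p in mask[i].nonzero(as_tuple=True)[0]:
            neg_idx = torch.randint(0, t, (num_negatives,))
            candidates = torch.cat([targets[i, p:p + 1], targets[i, neg_idx]], dim=0)
            sims = F.cosine_similarity(context[i, p:p + 1], candidates) / temperature
            # the true latent sits at index 0 of the candidate list
            losses.append(F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean() if losses else torch.tensor(0.0)

# toy usage with random tensors standing in for encoder/transformer outputs
ctx, tgt = torch.randn(2, 98, 256), torch.randn(2, 98, 256)
msk = torch.rand(2, 98) < 0.15
print(contrastive_loss(ctx, tgt, msk))
```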

The researchers propose that self-supervised sequence learning could be an effective approach for developing and deploying more complex DNNs in BCI, as the ability to learn from more people, sessions, and tasks using unlabelled data enables better modelling of the EEG input distribution while also learning features with reduced variability. Within the framework, the team developed a single pretrained model that can handle raw EEG sequences recorded with different hardware, across different subjects, and for different downstream tasks, and they propose that such an approach can produce representations suited to massive amounts of unlabelled EEG data and to downstream BCI applications.

The paper BENDR: Using Transformers and a Contrastive Self-Supervised Learning Task to Learn From Massive Amounts of EEG Data is on arXiv. The source code and pretrained BENDR models can be found on the project GitHub.

Reporter: Fangyu Cai | Editor: Michael Sarazen

We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global