Extracting Useful Information from Brain Signals using Deep Learning

Akshay Sadanandan
Published in wpihci · Apr 30, 2020

With the massive amount of research being done in the field of deep learning, it is no surprise to see its roots spread across application domains such as computer vision, automatic speech recognition, natural language processing, and bioinformatics, where it produces state-of-the-art results. The field of cognitive neuroscience, however, is only just beginning to see progress in this regard. A likely reason is that deep learning methods require an abundance of data to produce good results, something that is hard to come by in cognitive neuroscience. The data in question usually takes the form of electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), or functional magnetic resonance imaging (fMRI) recordings, and acquiring large amounts of such data is expensive and time-consuming.

Bearing this in mind, Stober et al. propose several methods for extracting important features from EEG recordings, making maximal use of the available data, in their paper “Deep Feature Learning for EEG Recordings”. For their experiments, they use the publicly available OpenMIIR dataset, a collection of EEG recordings taken during music perception and imagination. Their experiments aim to classify, from these recordings, which song was perceived or imagined.
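For readers who want to poke at the data themselves, EEG recordings like these can be loaded with the MNE-Python library. Here is a minimal sketch; the file path and name below are hypothetical placeholders, not the dataset's actual layout.

```python
# Minimal sketch: loading one EEG recording with MNE-Python.
# The path and filename are placeholders for wherever you store the data.
import mne

raw = mne.io.read_raw_fif("openmiir/P01-raw.fif", preload=True)  # hypothetical file
print(raw.info)          # channel names, sampling rate, etc.
raw.plot(duration=5.0)   # eyeball a 5-second window of the signal
```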

Problems with Brain Data

Working with EEG data poses several challenges. Brain waves recorded in an EEG have a very low signal-to-noise ratio, which is a fancy way of saying the data is of poor “quality”, and this noise comes from a variety of sources. For instance, the sensitive recording equipment easily picks up electrical interference from the surroundings, and further unwanted signals come from muscle activity, eye movements, or blinks. Only certain brain activity is of interest, and this signal needs to be separated from the background. Identifying the relevant portion of the signal therefore requires sophisticated analysis techniques that also take temporal information into account.
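To make “separating signal from noise” concrete, here is a minimal sketch of two standard first steps, a band-pass filter to keep the frequency range of interest and a notch filter to suppress power-line hum, using SciPy. This is generic EEG cleanup under assumed parameters, not the specific pipeline from the paper; the sampling rate and cutoff frequencies are placeholders to adjust for your data.

```python
# Minimal sketch of generic EEG cleanup (not the paper's exact pipeline):
# band-pass to the frequency range of interest, notch out power-line hum.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 512.0  # assumed sampling rate in Hz; adjust to your recording

def clean_eeg(signal, low=0.5, high=30.0, line_freq=60.0):
    # Band-pass filter: keep roughly the delta-to-beta range.
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Notch filter: remove mains interference around line_freq.
    b, a = iirnotch(line_freq, Q=30.0, fs=fs)
    return filtfilt(b, a, filtered)

noisy = np.random.randn(int(fs * 10))  # stand-in for 10 s of one EEG channel
clean = clean_eeg(noisy)
```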

The Bright Idea

One such technique proposed in the paper is called Similarity-Constraint Encoding (SCE). Say there are two songs you want to tell apart from EEG recordings. Each song's recordings contain features (i.e., patterns in the EEG signal) that distinguish it from the other, and SCE aims to learn these features.

The workflow of a Similarity-Constraint Encoder

Pictured above is the workflow of the proposed Similarity-Constraint Encoder. To differentiate between the EEG recordings of two songs, SCE samples three recordings, of which two are of the same song and one is of the other. Call the samples A, B, and C, where B comes from the same song as A. B and C are each compared with A, and the system outputs whichever of the two it judges most similar to A. In learning to make this judgment correctly, the system picks up features that characterize each song.
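As a rough illustration of the idea, here is a minimal PyTorch sketch of a similarity-constraint objective: encode each trial, score the two candidate pairs by dot-product similarity, and train so that the same-song pair wins. The tiny convolutional encoder and all tensor shapes here are assumptions for illustration, not the network from the paper.

```python
# Minimal PyTorch sketch of the similarity-constraint idea: given reference
# trial A and candidates B (same song) and C (different song), train an
# encoder so that A is more similar to B than to C.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_channels=64, n_features=16):
        super().__init__()
        # A single 1-D convolution across channels, as a stand-in encoder
        # (the paper's actual architecture differs).
        self.conv = nn.Conv1d(n_channels, n_features, kernel_size=25)

    def forward(self, x):            # x: (batch, channels, time)
        h = self.conv(x)
        return h.mean(dim=2)         # pool over time -> (batch, n_features)

def sce_loss(encoder, a, b, c):
    """Cross-entropy over similarities: the same-song pair (A, B) should win."""
    ea, eb, ec = encoder(a), encoder(b), encoder(c)
    sim_ab = (ea * eb).sum(dim=1)    # dot-product similarity A-B
    sim_ac = (ea * ec).sum(dim=1)    # dot-product similarity A-C
    logits = torch.stack([sim_ab, sim_ac], dim=1)
    targets = torch.zeros(a.size(0), dtype=torch.long)  # index 0 = same-song pair
    return nn.functional.cross_entropy(logits, targets)

# Toy usage with random tensors shaped (batch, channels, time):
enc = Encoder()
a, b, c = (torch.randn(8, 64, 256) for _ in range(3))
loss = sce_loss(enc, a, b, c)
loss.backward()
```

One natural use of an encoder trained this way is to compare a new recording's encoding against reference encodings for each song, turning the similarity judgment into a classifier.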

What’s in store for the future?

Further research along this route can lead to some amazing results. For instance, it could lead to applications that can instantly guess the song you are thinking of. Ever had a tune stuck in your head, but you just can’t remember the name? Well, this could very well be a solution to that problem. Think Shazam, but for the mind!

Conclusion

In conclusion, Stober et al. make valuable contributions at the intersection of cognitive neuroscience and deep learning, the most exciting of which is the Similarity-Constraint Encoder discussed above. They devise an effective way to use scarce EEG recordings to tackle the real-world problem of classifying songs from brain signals.

Citation:
S. Stober, A. Sternin, A. M. Owen, and J. A. Grahn, “Deep Feature Learning for EEG Recordings,” arXiv:1511.04306v4, pp. 1–24, 2016.
