Reading the Brain with Machine Learning

Tony Tonev
Published in Analytics Vidhya
5 min read · Mar 28, 2020


In my previous post I talked about using a portable EEG device to detect event-related potentials (ERPs) in the brain. Specifically, I was able to detect a Reward Positivity (RewP) signal after a puzzle was solved correctly. I did this by graphing the signal immediately after the event and comparing it with the average RewP signal from this paper. Using my human brain’s visual pattern recognition, I confirmed that I was getting the same pattern. Wouldn’t it be interesting to train a machine learning model to recognize the same pattern so we can monitor these events automatically? That way these ERPs could trigger an event-based API, and we could write code that is driven by various brain states. To this end, I trained an RNN to detect the RewP.

This problem reminded me of trigger word detection, where you train an AI to listen to audio input and wake up when it hears “Hey Siri” or “OK Google”, for example. There’s a project which does just that in the deeplearning.ai class Sequence Models, so I decided to use their approach as a starting point and tweak it from there. The main difference is that instead of audio data we’re looking at EEG data, and instead of a specific word we’re looking for a pattern of activity; otherwise the problems are similar in that both deal with one-dimensional time series data.

The Data

The data I’m starting with is quite limited, so I don’t expect outstanding results, but I figure it’s better to start and make a first iteration of a model; I can always add more or better data later. I collected 12 EEG recordings from 8 people (including myself) while they did some kind of quiz with instant right-or-wrong feedback, such as finding countries on an unlabeled map of Europe or solving chess puzzles. I also recorded timestamps of mouse clicks to know exactly when feedback was given, and I recorded the screen so I could later tell whether each answer was correct.
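The post doesn’t specify what tool logged the clicks, but a minimal sketch of one way to do it, assuming the pynput library, would be to write the Unix timestamp of every mouse release so it can later be aligned with the EEG stream:

```python
import time
from pynput import mouse

# Append the Unix timestamp of every mouse release to a log file, so
# click times can be aligned with the EEG recording afterwards.
def on_click(x, y, button, pressed):
    if not pressed:  # fire on release, when the quiz shows feedback
        with open("clicks.log", "a") as f:
            f.write(f"{time.time()}\n")

listener = mouse.Listener(on_click=on_click)
listener.start()  # runs in a background thread while the quiz is taken
```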

Preprocessing

Once I aggregated the data, I clipped out 1 second of EEG data (the average of electrodes TP9 and TP10) immediately after the mouse was released for each answer given correctly. This is where I expect to find the RewP ERP. In some cases the RewP signal was there, and in others it wasn’t. My research tells me that the signal only appears if the subject really cares about the outcome (whether or not they got it right), so it’s possible some of my subjects simply didn’t care enough on some of the questions and the RewP didn’t show up. To increase my chances of success, I manually filtered the cases where it looked to me like a RewP was present and set them aside. In some cases, Muse would temporarily lose signal, probably because of a bad connection on the electrodes, and would record NaN; I didn’t use any recordings with more than a few percent NaNs total. To deal with the rest of the missing data, I replaced the NaNs with the average value for the recording. Finally, to cut out noise, I passed the signals through a dual-pass Butterworth filter with a passband of 0.1 Hz to 30 Hz, just as they did in the paper I originally replicated.
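Here’s a rough sketch of that preprocessing with NumPy and SciPy. The filter order and the 256 Hz sampling rate (the Muse headband’s rate) are assumptions beyond what’s stated above:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess(eeg, fs=256.0, low=0.1, high=30.0, order=4):
    """Replace NaNs with the recording mean, then band-pass filter."""
    eeg = np.asarray(eeg, dtype=float)
    eeg[np.isnan(eeg)] = np.nanmean(eeg)  # fill signal dropouts
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    # sosfiltfilt runs the filter forward and backward (dual pass),
    # which cancels the phase shift a single pass would introduce
    return sosfiltfilt(sos, eeg)
```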

Synthetic Data Generation

I followed the example of trigger word detection. In that project, synthetic training data was created by recording the target word with no background noise, separately recording noise in various locations such as noisy cafes and traffic, and then inserting the trigger word at random locations in short clips of noise, so that the exact timestamp of the signal is known for ground truth. I did the same thing. For “background noise” I used EEG recordings of meditation sessions. My reasoning is that a RewP is unlikely to appear during meditation, since the subject is not doing any task where feedback could be provided. This brain activity is probably not representative of “normal” brain activity, but it’s what I have, and I have to start somewhere. I inserted the RewP between zero and four times at random locations in 10 second clips of meditation EEG. One step I left out is turning the signal into a spectrogram of the different frequencies (essentially a Fourier transform); unlike audio data, I think the EEG pattern is more easily recognized in its raw form. The ground truth activation (Y) is a vector with either a 0 or a 1 for each time step: it starts as all 0’s, and immediately after the end of each RewP signal it is set to 1 for a short period.

Top: EEG signal, Bottom: Ground truth (Y) for training
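A minimal sketch of that example generation, assuming a 256 Hz sampling rate, splicing (rather than superimposing) the snippets, and a made-up label window length. The label vector here is at input resolution; it would be downsampled to the model’s output rate (one step per four input samples) before training:

```python
import numpy as np

def make_example(background, rewp_clips, label_len=64, rng=np.random):
    """Build one synthetic training example.

    background: 1D array, 10 s of meditation EEG (2560 samples at 256 Hz).
    rewp_clips: list of 1 s RewP segments clipped after correct answers.
    Returns the signal x and label vector y; y is 1 for label_len samples
    immediately after each inserted RewP ends, and 0 everywhere else.
    """
    x = background.copy()
    y = np.zeros(len(x), dtype=np.float32)
    placed = []  # (start, end) spans already used, to avoid overlap
    for _ in range(rng.randint(0, 5)):  # 0 to 4 insertions
        clip = rewp_clips[rng.randint(len(rewp_clips))]
        start = rng.randint(0, len(x) - len(clip) - label_len)
        end = start + len(clip)
        if any(s < end and start < e for s, e in placed):
            continue  # skip this placement rather than overlap
        x[start:end] = clip  # splice the RewP into the background
        placed.append((start, end))
        y[end:end + label_len] = 1.0  # label the window after the RewP
    return x, y
```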

The Model

As a starting point, I used the same model architecture as the trigger word detection project whose example I’m following.

It uses a 1D convolutional layer, followed by two layers of GRUs, and finally a Dense (fully connected) layer. It’s implemented in Keras with TensorFlow. Since the stride of the convolution is 4, it outputs a quarter as many values as it takes in; in other words, it makes one prediction per 4 input time steps. I trained this overnight on my laptop.
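For concreteness, here’s a sketch of that architecture in Keras, with layer sizes borrowed from the deeplearning.ai trigger word model; the exact hyperparameters used here may differ:

```python
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization,
                                     Activation, Dropout, GRU,
                                     TimeDistributed, Dense)
from tensorflow.keras.models import Model

def build_model(input_shape):
    """Conv1D front end, two GRU layers, per-step sigmoid output."""
    x_in = Input(shape=input_shape)  # (time_steps, 1): single-channel EEG
    # Stride 4 means one output step per 4 input time steps
    x = Conv1D(196, kernel_size=15, strides=4)(x_in)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Dropout(0.5)(x)
    x = GRU(128, return_sequences=True)(x)
    x = Dropout(0.5)(x)
    x = BatchNormalization()(x)
    x = GRU(128, return_sequences=True)(x)
    x = Dropout(0.5)(x)
    x = BatchNormalization()(x)
    # Per-time-step probability that a RewP just ended
    y = TimeDistributed(Dense(1, activation="sigmoid"))(x)
    return Model(inputs=x_in, outputs=y)

model = build_model((2560, 1))  # 10 s of EEG at 256 Hz
model.compile(optimizer="adam", loss="binary_crossentropy")
```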

Results

I made new EEG recordings, with the subject doing the same type of task, to use as a validation set. Given the limited training data, it’s no surprise that there is room for improvement, but here is one example. In the graph below, the ground truth is on the top and the predictions are on the bottom. If the value rises above 0.5, we consider it activated; as you can see, the model gets the first ERP right, is a bit early on the second, and a bit late on the third.

Top: Ground truth, Bottom: Predicted
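Turning those per-step probabilities into discrete detections can be as simple as finding rising edges above the 0.5 threshold. detect_events below is a hypothetical helper, not part of the original code:

```python
import numpy as np

def detect_events(probs, threshold=0.5):
    """Return output time steps where the activation crosses the threshold.

    probs: 1D array of per-time-step model outputs. Only rising edges are
    reported, so a sustained activation counts as a single detection.
    """
    active = probs > threshold
    rising = np.flatnonzero(active[1:] & ~active[:-1]) + 1
    if active[0]:  # handle an activation that starts at step 0
        rising = np.insert(rising, 0, 0)
    return rising
```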

There’s still a lot of work to be done before this is usable, but so far the results are encouraging. If this interests you or you’re working on something similar, I would love to chat and possibly collaborate. Please comment or reach out to me directly: tony@tonythinks.com
