Real-Time Motor Imagery Predictions with a Bedroom BCI System

A first journey into DIY Brain Computer Interfaces, part 5

Tim de Boer
Building a bedroom BCI
7 min read · Sep 10, 2021


Fig 1. Designed by rawpixel.com / Freepik

Disclaimer: This series of blog posts is written to make myself aware of the details and concepts in the field of BCIs and neuroscience, as I go through my very first own BCI project. Right now, I really have no idea how this project will turn out.

Therefore, note that this series of blog posts should not be used as a definitive guide for your own journey. Rather, I hope you will take inspiration from my first journey, and maybe not make the same mistakes as I might do ;)

Update: In the summer of 2022, I started a new series of blog posts with updated, more advanced information, and way better results than achieved in this series of blog posts. Click here to go to part 1 of that series.

Welcome to the last part of this series where I document my process of building my very first BCI bedroom project! Here, we will put our project to the test, by trying to predict the motor imagery of the subject in real-time!

In order to get an overview of the full process of my bedroom BCI project, please check out the previous parts, where I first give some background information (part 1), and then talk about collecting EEG data (part 2), pre-processing (part 3) and machine learning (part 4)!

Now we have arrived at the testing phase: doing real-time predictions. What I find interesting about BCIs, is their potential to help disabled, or paralyzed people in performing daily tasks using some sort of an exoskeleton which they control by thought, using a BCI. In order to build such a system, first the data of the EEG device should be passed to a computer, where the data can be decoded into inputs which in turn can be sent to the exoskeleton.

In this DIY BCI project, I was curious if I would be able to make a simplified version of such a system. Therefore, I will try to decode EEG signals into commands which will be used to move a simple dot on the computer screen.

Let’s see if that is possible!

The code corresponding to the project in this blog post can be found at this GitHub repository. Specifically, in this part we discuss step 5, which can be found in the Python file 5_realtime_predictions.py.

Receiving a real-time stream of EEG data

As I explained in part 2 of this series, I used the Mind Monitor app to record EEG data in a CSV file, which I could then load onto my computer. However, collecting data in real time requires live streaming to my computer. Luckily, the Mind Monitor app provides such a feature as well! If you’re interested in more details, please read part 2, where I describe how the Mind Monitor app works.

Even better, the creator of the app has also provided some example code that uses the live-streaming option! This is perfect, as I can use his code as a starting point for this section of my project.

Preparing the data

Now that we have a stream of data coming in, we have to decide how to store it and use it as input for my machine learning model. As I decided in the pre-processing section to use temporal features with a window size of 3 seconds, I first need to collect 3 seconds of data. Therefore, I initialize an empty defaultdict from the collections library in my Python program. As the stream of data comes in at a frequency of 10 Hz, I collect 30 instances for each of the 20 brain wave features.
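This buffering step can be sketched as follows. The feature names here are illustrative (5 bands × 4 sensors gives the 20 features); the real values come from the OSC stream:

```python
# Sketch of the 3-second collection buffer: at a 10 Hz stream rate,
# a 3-second window means 30 samples for each of the 20 features.
from collections import defaultdict

SAMPLE_RATE_HZ = 10
WINDOW_SECONDS = 3
SAMPLES_NEEDED = SAMPLE_RATE_HZ * WINDOW_SECONDS  # 30

buffer = defaultdict(list)

def add_sample(feature, value):
    """Store one incoming value; report whether the window is full."""
    buffer[feature].append(value)
    return (len(buffer) == 20 and
            all(len(vals) >= SAMPLES_NEEDED for vals in buffer.values()))

# Simulate 3 seconds of incoming stream data
window_full = False
for _ in range(SAMPLES_NEEDED):
    for band in ("delta", "theta", "alpha", "beta", "gamma"):
        for sensor in ("TP9", "AF7", "AF8", "TP10"):
            window_full = add_sample(f"{band}_{sensor}", 0.5)

print(window_full)  # True: 30 samples collected for all 20 features
```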

Then, with this dictionary of 30 instances per feature, I move on to the next steps, which basically consist of going through the pipeline I created in the earlier steps of my project:

  1. Create a Pandas DataFrame: data from a defaultdict can be converted directly to a DataFrame object. I also add timestamps as the index of this DataFrame, starting at an arbitrary timepoint and increasing by 100 milliseconds (as the frequency of the data is 10 Hz), to make the code I have written before compatible with this data.
  2. Outlier detection (as described in part 3): I now apply the mixture model outlier detection. I am not sure how useful this is for only 30 instances, but I decide to do it anyway. I also interpolate any values which were detected as outliers by the mixture model and were therefore deleted.
  3. Feature engineering (still part 3): I now perform PCA and ICA on the 20 features I have, and just add the resulting features to my dataset. Then, I divide my data into windows of 1, 2 and 3 seconds and calculate the temporal features, and also calculate frequency features (but only for the 1 second window size). I then add these features as well.
  4. Preparing the data and making a prediction (part 4): As I used a window size of 3 seconds, and my dataset contains exactly 3 seconds of data, there is exactly 1 row in my dataset with no NaN values. This is the row I am looking for. I drop all rows with 1 or more NaN values to get that row, convert it to a Numpy array, and feed it to the ML model I trained in part 4.
  5. Now I have a prediction of my real-time motor imagery state over the previous 3 seconds of data! I pass this prediction to a simple MatplotLib Animation plot I have made, where a dot will move left when my ML model gave the prediction of motor imagery of my left hand, and vice versa for my right hand. If the model decided that no motor imagery is performed, nothing happens.
  6. Now, I reinitialize my defaultdict, collect again 3 seconds of data and start over!
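As a rough sketch of steps 1 and 4 above (with placeholder data, a single stand-in rolling feature, and without the outlier detection and full feature engineering in between), the DataFrame-and-dropna trick looks like this:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Placeholder for the filled buffer: 30 samples (3 s at 10 Hz) per feature
buffer = {"alpha_TP9": rng.random(30), "beta_TP9": rng.random(30)}

# Step 1: build a DataFrame and attach timestamps 100 ms apart (10 Hz),
# starting at an arbitrary timepoint
df = pd.DataFrame(buffer)
df.index = pd.date_range("2021-09-10", periods=len(df), freq="100ms")

# Steps 2-3 (outlier handling, PCA/ICA, temporal and frequency features)
# would run here; any 3-second rolling feature leaves NaNs everywhere
# except the very last row:
df["alpha_mean_3s"] = df["alpha_TP9"].rolling(30).mean()

# Step 4: dropping rows with NaNs leaves exactly the one complete row,
# which is what gets fed to the trained model from part 4
row = df.dropna().to_numpy()
print(row.shape)  # (1, 3)
# prediction = model.predict(row)  # 'left', 'right' or 'none'
```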

Well, there you have it: the above 6 steps describe my simple version of a BCI system! Now, I think you are curious how it performs…
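For the curious, the moving-dot display from step 5 can be sketched like this. The `apply_prediction` function is a hypothetical stand-in for wiring the model output into the animation; the real program feeds in one prediction per 3-second window:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch also runs without a display
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

x_pos = [0.0]  # current dot position, mutated by each prediction

def apply_prediction(pred):
    """Move the dot left or right according to the model's output."""
    if pred == "left":
        x_pos[0] -= 0.1
    elif pred == "right":
        x_pos[0] += 0.1
    # 'none' -> the dot stays where it is
    return x_pos[0]

fig, ax = plt.subplots()
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
dot, = ax.plot([0], [0], "o", markersize=20)

def update(frame):
    # In the real program the prediction comes from the ML model each window;
    # here we just alternate to show the mechanics
    apply_prediction("right" if frame % 2 else "left")
    dot.set_data([x_pos[0]], [0])
    return dot,

# One animation step per 3-second prediction window
anim = FuncAnimation(fig, update, frames=20, interval=3000)
```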

Performance

In part 4, I showed that my best model got an F1 score of 0.58 for predicting the 3 classes I have: left, right or none. Now, this is not a very good score, but it is not bad either.

Unfortunately, the performance at the time of writing this blog post is, well, not that good. The predictions of the model, with myself as the subject, are heavily skewed towards left. No matter how hard I try to imagine moving my right hand, out of every 20 or so predictions, more than half are left…
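A quick way to quantify this skew is to tally the predictions over a session with a Counter. The numbers below are made up purely to illustrate the kind of imbalance I saw:

```python
from collections import Counter

# Hypothetical log of 20 real-time predictions from one session
predictions = ["left", "left", "none", "left", "right", "left", "left",
               "none", "left", "left", "right", "left", "none", "left",
               "left", "left", "none", "left", "right", "left"]

counts = Counter(predictions)
print(counts.most_common())  # [('left', 13), ('none', 4), ('right', 3)]
```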

Conclusion

From the above, we might have to conclude this project is a failure. Nevertheless, I am very proud of getting as far as I have in a couple of weeks’ time! Initially, I did not think I would be able to program a BCI system that could do real-time predictions. And to try to cover up my “failure”: as is written in this cool blog post about advice for building a bedroom BCI, EEG BCIs in general just don’t work that well, yet.

So, to conclude, I think my project was a success and I am motivated to learn more about the field of BCIs and maybe in a later period, be able to program a system with accurate predictions!

For now, I already have some ideas which might be useful for when I want to improve the system.

How to improve next time?

In the list below, I have tried to write down everything I would like to try, improve upon, or learn more about when I return to my BCI project.

  • Collect more data: right now, I only have 10 minutes of data from 3 subjects. If I really want to make an effort to build an accurate, general BCI system, I need to collect more data.
  • Raw EEG data: For now, I have worked with brain wave data which was already filtered by the Muse device. For a next project, I would like to collect raw EEG signals and start from there. I think the outlier detection and ICA make more sense when performed on raw EEG data.
  • Feature engineering: For this project, I have engineered features which I already knew from other domains. For a next project, I would try to learn more about EEG specific feature engineering.
  • Transfer learning or deep learning: In some of the papers I have come across during this period, more complex deep learning models were applied to EEG data. Sometimes, these models would be pretrained on a large collection of EEG data, and it would therefore be interesting to use transfer learning to fine tune those models for the specific use case in my project and see how the performance is.
  • Adding some more details to the project: in general, if I had some more time, I would like to learn how the real-time streaming works exactly and see if improvements could be made there. I would also try to build a more attractive visualization for the real-time predictions (maybe make a game out of it?), and work on robustness details, such as how to keep the program running when the connection of 1 or more sensors is missing (right now, the program simply pauses completely when 1 sensor loses connection).

If you read this series all the way to here, thank you very much for sticking around! I hope you’ve learned something, and maybe you’ll start your own BCI project :)

And again, any feedback for this project and blog post series is welcome!
