Visualizing your Sleeping Brain

This past week at the Ars Electronica Festival in Linz, I participated in the first Br4N1.io brain-computer interface hackathon. The goal of the hackathon was to use g.Tech products to build a project with a team of engineers and artists. Our team chose the Dream Painting project: painting dreams from brain signal data captured during sleep with a Brain Computer Interface (BCI). The team consisted primarily of students from Germany and Austria: Elisabeth, Wolfgang, Simon, Kha, and Martin, and we were mentored by a Berlin-based artist, Alex Guevara.

The Dreamers: Br4N1.io Hackathon Linz

Left to Right: Martin, Simon, Wolfgang, Kha, Iain, Elizabeth

The team expected the first part of the hackathon to be the easier part: simply sleep and capture brain data. Unfortunately, it turned out not to be so easy. I arrived late to the hackathon, in the middle of swapping out Matlab licenses and sensors and debugging electrical conductivity issues in the sensor setup and the data pipelines in Matlab. With the current version of the software, the best-supported platform for capturing data from the Brain Computer Interface was Matlab, a scientific computing environment, so at the beginning of the hackathon the entire team was working on setting up Matlab and connecting the brain sensor to it. I also investigated writing custom software to connect directly to the BCI, but I was unable to find specifications for decoding the data coming from the interface.

g.Tech BCI Capture Interface Hardware

Wolfgang, Elizabeth, and Martin also spent time working on saving the complex data sets from Matlab and fixing issues with getting a consistent dataset from the sensor. This continued until around 5pm, when we were able to run Matlab to capture the BCI data and save the raw data reliably. While we had planned to stream this data live, building a reliable way to do that proved too difficult given the time restrictions of the hackathon. Wolfgang and Simon worked on analyzing the raw capture data, which consisted of 8 or 16 channels of voltages captured directly from the BCI sensors on the user's head.

We then had to figure out what to do with this raw data. g.Tech provides tools within Matlab to analyze it in two main ways: by separating the signal into alpha, beta, and gamma brain waves, or by using their Intendix toolkit to extract intentions from P300 concentration signals, correlating them with a grid of icons flashing on a screen so that the user's focus selects an item. While the Intendix approach is valuable for detecting a user's intentions, it only works when the user is awake and focusing on the screen. Since a major part of this project was working with users who are sleeping, we chose to work more closely with the raw data and separate out the alpha, beta, and gamma brain waves.

Matlab Simulink Model with Bandpass analysis from BCI interface
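For readers without Matlab, here is a minimal sketch of the band-separation idea in Python rather than the Simulink pipeline we actually used. The sampling rate, band edges, and filter order are assumptions for illustration, not the exact values from the hackathon setup.

```python
# Minimal band-separation sketch (assumed parameters, not the g.Tech pipeline).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz

BANDS = {
    "alpha": (8.0, 12.0),
    "beta": (12.0, 30.0),
    "gamma": (30.0, 45.0),
}

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter for one EEG channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_power(signal, band, fs=FS):
    """Mean power of the signal within a named frequency band."""
    low, high = BANDS[band]
    filtered = bandpass(signal, low, high, fs)
    return float(np.mean(filtered ** 2))

# Example: stand-in data for one channel of raw voltages, 10 seconds long.
raw_channel = np.random.randn(FS * 10)
powers = {name: band_power(raw_channel, name) for name in BANDS}
print(powers)
```

On real captures the same filters would run per channel, producing the per-band signals that later drove the visualization.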

One approach we took to analyzing the raw data was to train a motor imagery classifier. This classifier picks up signals from the motor cortex (signals of intention to move muscles) and differentiates movements on the left and right sides of the body. It was built using Linear Discriminant Analysis (LDA), a method that finds a linear combination of features separating classes of objects or events, in this case motor cortex activity for left versus right movement. Since the motor cortex remains somewhat active during sleep, this data could be used to visualize which movement intentions occur during a dream and plot them into a space. However, we were not able to complete this analysis in enough detail by the end of the hackathon, so the visualization relied on the band-filtered brain frequency signals instead.
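As a hedged sketch of what such a classifier looks like, here is LDA on left/right motor imagery trials using scikit-learn rather than the g.Tech Matlab tooling we actually used. The features (per-channel band power per trial) and the synthetic data are assumptions for illustration only.

```python
# LDA motor imagery sketch with stand-in data (not our actual Matlab workflow).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: one row per trial, columns are features (e.g. band power per channel).
# y: 0 = imagined left-side movement, 1 = imagined right-side movement.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))     # stand-in features for 16 channels
y = rng.integers(0, 2, size=200)   # stand-in labels

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())

# On real data the classifier would be fit on labeled awake trials and then
# applied to sleep recordings to estimate left/right movement intention.
clf.fit(X, y)
predictions = clf.predict(X[:5])
```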

Kha Collecting Sleep Data

Once we figured out how to use Matlab effectively to capture data, we decided to focus on working with saved captures rather than manipulating the data in real time. It was 1 in the morning at this point, and we needed to start collecting data. Kha volunteered to sleep at the hackathon around 3am and collect data for a few hours.

Raw Data

Now that we had data to work with, we talked about different visualization methods. Our mentor, Alex Guevara, had built a visualization called Corteza in a creative coding environment called TouchDesigner. Corteza uses a particle system paired with streaming data from g.Tech BCI sensors. Alex's help with interfacing the boards, writing the graphics generation code, and making it all run at a half-reasonable speed was invaluable during the course of the hackathon.

Inspiration: Alex Guevara ~ Corteza

Alex used TouchDesigner for the Corteza visualization, and the program lends itself well to prototyping visual applications with Python. TouchDesigner lets developers build live programs as block diagrams from built-in transformations as well as custom code. We decided to use TouchDesigner for our visualization given its flexibility and visual design. However, working with the Matlab data coming from the g.Tech libraries proved difficult: we had issues feeding multiple processed variables through a live stream into TouchDesigner, so we decided to work with saved Matlab files that could be easily imported and replayed to build the visualization on demand.

I worked with Martin to export the data from Matlab into TouchDesigner using a custom Python bridge that replayed the processed data capture from Matlab over the Open Sound Control (OSC) protocol. We chose OSC because it made it easy to stream complex sets of data into TouchDesigner in real time, letting us use those variables to influence the progression of the visualization. After some iteration and brainstorming, we decided to build a simpler star field animation that would add particles and increase a visual glow intensity in response to alpha wave brain activity.
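A rough sketch of that bridge, assuming the python-osc library, is below. The .mat file name, variable names, OSC addresses, port, and replay rate are assumptions for illustration; the actual bridge replayed our processed capture with its own field names.

```python
# Matlab-to-TouchDesigner replay bridge (sketch; names and ports are assumed).
import time
import numpy as np
from scipy.io import loadmat
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # TouchDesigner "OSC In" port (assumed)

capture = loadmat("sleep_capture_processed.mat")  # hypothetical export file
alpha = np.ravel(capture["alpha_power"])           # hypothetical variable names
beta = np.ravel(capture["beta_power"])

FRAME_RATE = 30.0  # replay rate in messages per second (assumed)

for a, b in zip(alpha, beta):
    # One OSC message per band per frame; TouchDesigner maps these channels
    # onto visualization parameters such as hue and glow intensity.
    client.send_message("/brain/alpha", float(a))
    client.send_message("/brain/beta", float(b))
    time.sleep(1.0 / FRAME_RATE)
```

Replaying from a file this way meant the visualization could be demonstrated on demand without the sensor attached.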

TouchDesigner: OSC input to GLSL Shaders and Video Compositing demo

An OpenGL shader is a small program that allows for fast manipulation of graphics in a relatively simple way. A developer writes parallel programs that take in a few variables and compute each pixel of the screen independently, allowing the Graphics Processing Unit (GPU) to update the screen with complex visualizations in real time, much faster than processing the visualization pixel by pixel on the CPU. We decided to use the standard OpenGL Shading Language (GLSL) to build the visualization because it is relatively easy to write, supported across many platforms, and allows for smooth visualizations. The team searched Shadertoy and various other websites for interesting existing GLSL shaders as inspiration for the visualization programs to run within TouchDesigner. Alex then helped us port a few of these shaders to TouchDesigner, and we rewrote four or five different visualizations into the correct format, taking in variables from the brain sensor data stream. At this point it was 5am, and writing shaders from scratch didn't make much sense given our time and experience limitations with GLSL. TouchDesigner's visual data feed allowed us to composite video feeds from multiple shaders: one shader rendered a pixelated light tunnel with its hue set by the normalized alpha brain wave data, and the other rendered a star particle field with a blue glow whose brightness was influenced by the beta brain waves.
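The mapping from band power to shader parameters happened inside TouchDesigner; as a hedged sketch of the kind of normalization involved, assuming a simple running min/max with smoothing (not our exact node setup), it might look like this in Python:

```python
# Normalizing band power into the 0..1 range that shader parameters expect
# (hue, glow intensity). The running min/max and smoothing constant are
# assumptions for illustration, not the exact logic built in TouchDesigner.
class BandNormalizer:
    """Tracks a running range for one band and maps new values to 0..1."""

    def __init__(self, smoothing=0.9):
        self.low = None
        self.high = None
        self.smoothing = smoothing  # higher = slower, steadier response
        self.value = 0.0

    def update(self, power):
        if self.low is None:
            self.low = self.high = power
        self.low = min(self.low, power)
        self.high = max(self.high, power)
        span = max(self.high - self.low, 1e-9)
        target = (power - self.low) / span
        # Exponential smoothing keeps the visuals from flickering frame to frame.
        self.value = self.smoothing * self.value + (1 - self.smoothing) * target
        return self.value


alpha_hue = BandNormalizer()
beta_glow = BandNormalizer()

# Each frame: feed in the latest band powers, get shader-ready parameters.
hue = alpha_hue.update(42.0)   # stand-in alpha band power
glow = beta_glow.update(17.0)  # stand-in beta band power
```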

TouchDesigner: Visualization in Development

Around 8am, the OSC data feed was running off analyzed sample data, and while half the team slept, the visualization in TouchDesigner was being finished. Around 10am, we were finally able to process the raw data captured from Kha during his 3am sleep and turn it into a real processed data capture to feed the live visualization over OSC. The visualization rendering, driven by the live data feed from the Matlab captures, was then recorded in a screen capture of representative data for the presentation.

Our Final Visualization Demo (view video)

These sleepless hours were worth it in the end: we presented our final results to the judges while half-asleep and ended up winning the second-place prize in the coding competition. Many thanks to the entire team and to Alex for staying up so late and persisting to make this project come together. Thanks also to g.Tech for their sponsorship and mentorship with their hardware during the event, to Ars Electronica for the space, and to the volunteers who worked incredibly hard to make the 24 hours run smoothly in the middle of a larger event.