The Process Behind DreamFREQ: A Synthesizer Powered by Your Brain

Amanda Galemmo
Published in Noctvrnal
Apr 6, 2020

Have you ever wondered what your brain sounds like as you’re falling asleep? For our residency with On Air Fest, we were invited to stay at the Wythe Hotel in Brooklyn, NY for a week while we created a soundscape based on the prompt “last sounds of the day.” What resulted was DreamFREQ, a program that reads a live feed of a user’s brainwaves and translates that electrical activity into audio. Using this technology, we synthesized participants’ delta brainwaves into a musical composition. We intended the piece to represent our personal spheres of influence, incorporating the sounds of our environments, conversations, and other snippets of daily life. The result is a 15-minute composition of three movements woven together to mimic our daily cycle of being awake, drifting off to sleep, resting, and waking up again.

During On Air Fest, we ran a live demonstration of the process behind recording and synthesizing our subjects’ brainwaves. Volunteers were outfitted with a Muse EEG headband, a piece of wearable tech designed to interpret the electrical activity given off by one’s brain. The headband collects the user’s brainwave data and streams it live over Bluetooth to the Mind Monitor app. The brain outputs multiple types of brainwaves, which Mind Monitor interprets as separate datastreams. For our purposes, we were interested solely in the delta waves, the brainwaves associated with creativity and the first stage of falling asleep. We singled out the Absolute Delta wave for our synthesizer, while the other four brainwave types were represented visually for display purposes.
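
For readers who want to experiment with a similar setup, here is a minimal sketch of listening to those band-power datastreams over OSC. It assumes the python-osc package and Mind Monitor’s /muse/elements/<band>_absolute address scheme; the listening port and exact addresses are assumptions to check against the app’s streaming settings.

```python
# A minimal sketch of receiving Mind Monitor's band-power streams over OSC.
# Assumes the python-osc package (pip install python-osc) and Mind Monitor's
# /muse/elements/<band>_absolute address scheme; verify both in your setup.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

BANDS = ["delta", "theta", "alpha", "beta", "gamma"]
latest = {band: 0.0 for band in BANDS}  # most recent reading per band

def on_band(address, *args):
    # Address looks like /muse/elements/delta_absolute; args are floats,
    # one per sensor. Average them into a single reading for that band.
    band = address.split("/")[-1].replace("_absolute", "")
    if args:
        latest[band] = sum(args) / len(args)

dispatcher = Dispatcher()
for band in BANDS:
    dispatcher.map(f"/muse/elements/{band}_absolute", on_band)

# Port 5000 is a placeholder; point Mind Monitor's OSC stream at this
# machine's IP and port in the app's settings.
BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher).serve_forever()
```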

Since all areas of the brain host activity at any given moment, we isolated the delta waves using Power Spectral Density (PSD). PSD measures the strength and origin of the delta wave, allowing us to isolate that brainwave type and watch the reading fluctuate in units of decibels (dB). This delta wave data was sent over Open Sound Control (OSC) to a computer running a synthesizer we built in TouchDesigner. Using the Audio Oscillator, we generated a basic sine wave audio signal, then applied simple filters to modulate the sound. The pitch of the audio was determined by mapping the delta wave reading to the Pitch parameter on the synthesizer’s filter, so that as the delta reading changed, the pitch changed along with it.
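
As a rough illustration of the mapping itself, outside TouchDesigner, here is a hedged sketch that remaps a delta reading in dB onto a frequency range and plays a short sine burst. The dB and frequency ranges, and the numpy/sounddevice playback, are illustrative assumptions rather than the values and tools from our actual patch.

```python
# An illustrative sketch of the delta-to-pitch mapping, independent of
# TouchDesigner. The dB and frequency ranges here are assumptions, not
# the values from our patch. Requires numpy and sounddevice.
import numpy as np
import sounddevice as sd

DELTA_MIN_DB, DELTA_MAX_DB = -40.0, 20.0  # assumed range of delta readings
FREQ_MIN, FREQ_MAX = 110.0, 880.0         # assumed audible output range (Hz)

def delta_to_freq(delta_db):
    """Linearly remap a delta reading in dB to an oscillator frequency."""
    t = (delta_db - DELTA_MIN_DB) / (DELTA_MAX_DB - DELTA_MIN_DB)
    return FREQ_MIN + min(1.0, max(0.0, t)) * (FREQ_MAX - FREQ_MIN)

def sine_burst(freq, seconds=0.25, rate=44100):
    """Generate a short sine wave at the mapped frequency."""
    t = np.arange(int(rate * seconds)) / rate
    return 0.2 * np.sin(2 * np.pi * freq * t)

sd.play(sine_burst(delta_to_freq(-5.0)), 44100)  # audition one reading
sd.wait()
```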

The live readings from Mind Monitor were displayed opposite the user, so they could watch the brainwave moving in the app and hear the corresponding change in the sound’s pitch, creating a direct connection: “my brain is moving this line and changing this sound.” We kept the live synthesizer simple to encourage an open dialogue around the technology and its possibilities, one that would be easy to connect to and understand. We got a few dozen people into the headset over the afternoon of our demonstration. Seeing people’s reactions to a representation of their own brain activity was powerful and invigorating, and we had many different conversations about the future of the project and ways it could be reinvented and improved.

Moving forward, we’d love to expand the project by mapping all five brainwave readings to different filters. For example, delta waves would continue to drive pitch, theta waves would dictate volume, and alpha waves could be run through a low-frequency oscillator to produce a rhythmic bassline. Each person’s unique combination of brainwaves and patterns would create a sound signature that exists just for them. Working toward that, we are experimenting with connecting the TouchDesigner synth to Ableton Live so we can use a physical modelling synthesizer, giving the project a less 8-bit, more musical sound. Visually, we’re looking into streaming OSC data to MadMapper, a projection mapping software, to create a more interesting live display of the brainwaves.
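
To make that idea concrete, here is a speculative sketch of the multi-band mapping: delta drives pitch, theta drives volume, and alpha drives a tremolo-style LFO. All of the scaling ranges are placeholders, and a real version would feed render_block with live band values from the OSC listener sketched earlier.

```python
# A speculative sketch of the multi-band mapping described above. The
# scaling ranges are placeholders, not tuned values; band inputs are
# assumed to be normalized readings between 0.0 and 1.0.
import numpy as np

RATE = 44100  # sample rate in Hz

def render_block(bands, seconds=0.1):
    """Render one block of audio from a dict of normalized band powers."""
    t = np.arange(int(RATE * seconds)) / RATE
    freq = 110.0 + bands["delta"] * 770.0   # delta -> pitch, 110-880 Hz
    amp = 0.1 + bands["theta"] * 0.4        # theta -> overall volume
    lfo_hz = 0.5 + bands["alpha"] * 7.5     # alpha -> LFO (tremolo) rate
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * lfo_hz * t))
    return amp * lfo * np.sin(2 * np.pi * freq * t)

block = render_block({"delta": 0.6, "theta": 0.3, "alpha": 0.8})
```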

But before we create DreamFREQ 2.0, you can view demonstrations of its current iteration embedded above or through this link. Our composition that was streamed throughout the Wythe Hotel will be released through On Air Fest’s Last Sounds of the Day podcast and linked here when available.

And finally, a heartfelt thank you to On Air Fest for having us as residents. Through this festival we were able to connect with an eclectic mix of podcasters, audio enthusiasts, and everyone in between, opening our work to a diverse audience and to each individual’s interpretation of the project. Hearing everyone’s take on the project and their ideas for how it could be used differently was invaluable, and the feedback we received that weekend will help us imagine the next iteration of DreamFREQ.
