Using blinks to speak

Daniel G
6 min read · Jul 28, 2020


Using the Muse headband to type words with morse code patterns!

Throughout this year, I have been very interested in the field of Brain-Computer Interfaces (BCIs). A BCI captures signals from the brain and sends them to a computer or machine, which can then interpret those signals and act on them. This is an interesting technology with many applications. If you want an overview of BCI technology, click here.

For a while now, I have been playing around with this technology. A couple of months ago I created a Brain-Controlled Brick Game where the user moves the paddle with a specific eye (for more information, click here). I decided to take this technology one step further by creating a communication system for people with locked-in syndrome.

Locked-In Syndrome

Locked-in syndrome is a neurological disorder that causes complete paralysis of most of the muscles in the body except for the eyes. It is caused by damage to the pons, a part of the brainstem containing nerve fibers that relay information to other parts of the brain.

[Image: comparison of brain death, vegetative state, and locked-in syndrome (source: https://img.grepmed.com/uploads/4985/braindeath-syndrome-vegetative-lockedin-comparison-original.png)]

Current methods of communication with the Muse headband are very slow, or rely on the user blinking many times to reach the letter they want. That is quick for the letter A, but reaching Z is far more time-consuming, since it may take 26 blinks. Morse code cuts this down dramatically: no letter needs more than four signals. To take advantage of that, I decided to build a program that lets the user communicate using Morse code patterns.

Morse Code

Morse code is a method of encoding regular text using two signal durations: a short one called a dot (tap) and a long one called a dash (hold). For instance, in Morse code the letter Z is represented by two dashes followed by two dots.

In my program, a soft blink represents a dot and a harder blink represents a dash. This saves a lot of time: if the user wants to print Z, all that is needed is two hard blinks followed by two soft blinks.
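To make this concrete, here is a minimal sketch in Python (with an abbreviated table, not the project's actual code) of how a dot/dash sequence decodes back into a letter:

# Minimal sketch: a lookup table maps each dot/dash sequence to its letter.
MORSE_TO_LETTER = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "-": "T", "--": "M", "--..": "Z",  # abbreviated; the full table has 26 letters
}

def decode(sequence):
    # Translate a string like "--.." into its letter, if known.
    return MORSE_TO_LETTER.get(sequence, "?")

print(decode("--.."))  # Z: two hard blinks (dashes) then two soft blinks (dots)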

Hardware

For this project, I used the Muse Model 2 (an EEG headset), as the Muse Model 1 cannot connect with Web Bluetooth.

To establish a Bluetooth connection between the headband and my computer, I used Silicon Labs' BLED112 Bluetooth USB dongle. There are other ways to connect to the Muse over Bluetooth, such as BlueMuse, but this route required less configuration and is simpler to use.

[Image: the Silicon Labs BLED112 USB dongle (source: https://www.mouser.com/images/bluegiga/lrg/inhouse_bled112_t.jpg)]

Program

First, I cloned Raphael Yamamoto's repository from GitHub. In the original program, the user clicks for a dot (.) and holds the keypad for a dash (-). I changed the code so that pressing the right key prints a dot and pressing the left key prints a dash.

Originally, to detect blinks and trigger a response, I would have reused the same brain.js code from my last program. However, I realized I could get more precise values using a Python program called BCI-Workshops. The original BCI-Workshops program lets the user stream EEG data from the Muse headband, view the EEG waves, and see the specific values for each wave type.
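For context, BCI-Workshops reads the Muse's EEG over Lab Streaming Layer (LSL). A minimal sketch of that connection, assuming the headband is already streaming to LSL, looks roughly like this (simplified from the workshop code):

# Pull raw EEG samples from an LSL stream (simplified sketch).
import numpy as np
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('type', 'EEG', timeout=2)
if not streams:
    raise RuntimeError('No EEG stream found - is the Muse streaming?')

inlet = StreamInlet(streams[0], max_chunklen=12)
fs = int(inlet.info().nominal_srate())  # the Muse streams at 256 Hz

# Grab roughly one second of data at a time
eeg_data, timestamps = inlet.pull_chunk(timeout=1.0, max_samples=fs)
eeg = np.array(eeg_data)  # shape: (samples, channels)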

[Image: EEG waves streamed from the Muse headband]

In the image above, the EEG waves spike, showing that the user has either blinked or moved their eyes.

However, in order to actually detect the blinks, we need to look at the specific EEG values.

The program outputs a list of these numbers every second. To detect whether I had actually blinked, some trial and error was needed to see which numbers were output when I blinked.
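Those numbers are average band powers. Here is a rough sketch of how values like these can be derived from a one-second buffer of samples (my simplification, not the exact BCI-Workshops code):

# Sketch: estimate delta/theta/alpha/beta band powers for one channel.
import numpy as np

def band_powers(eeg, fs):
    # eeg: 1-D array of samples from one channel; fs: sampling rate in Hz.
    windowed = eeg * np.hamming(len(eeg))
    psd = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    bands = {'Delta': (0.5, 4), 'Theta': (4, 8),
             'Alpha': (8, 12), 'Beta': (12, 30)}
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}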

Delta output goes from 0.7 to 1.3, indicating a spike in the EEG data

I could see that each value would increase when I blinked. For instance, the delta value would go from 0.7 to 1.3. After pinpointing the values each blink produced, I added a new component to the program which prints "Blink" to the console every time the values reach these ranges.

if (1.3 > band_powers[Band.Delta] > 1 and 1.2 > band_powers[Band.Theta] > 1
        and band_powers[Band.Alpha] < 1 and band_powers[Band.Beta] < 0.15):
    print("Blink")

Now that blinks could be detected, a key-press command was added to the program so that every time I blink, the program simulates a left key press.

pyautogui.press('left')

However, there still needed to be a way to distinguish a hard blink from a light blink so that there could be two different outputs. To do this, I set two ranges of values, one for light blinks and one for hard blinks. For instance, every time I did a light blink, the delta value was between 1 and 1.4; every time I did a harder blink, the delta value was greater than 1.4. After some more trial and error, the delta, theta, alpha, and beta values were all given their own thresholds.

if (band_powers[Band.Delta] > 1.4 and band_powers[Band.Theta] > 1.3
        and band_powers[Band.Alpha] > 1.15) or band_powers[Band.Beta] > 0.2:
    print("Hard blink")
    pyautogui.press('left')

if (1.4 > band_powers[Band.Delta] > 1 and 1.3 > band_powers[Band.Theta] > 1
        and band_powers[Band.Alpha] < 1 and band_powers[Band.Beta] < 0.15):  # and band_powers[Band.Beta] < .75
    print("Light blink")
    pyautogui.press('right')

In the hard blink code, an "or" statement was added, because I saw in the EEG values that hard blinks would produce a beta value of 0.2 or more, even when the other bands stayed lower.

In this image, Beta > 0.2
In this image, Delta > 1, Theta > 1, Alpha < 1, and Beta < 0.75

I also wanted the program to speak the words the user prints to the screen. The first line below writes the translated letters to the page. In the second line, ResponsiveVoice, a library that automatically converts text to speech, is told to speak the translated words from the first line.

document.querySelector("#translated").innerHTML = translated;
responsiveVoice.speak(translated);

To make ResponsiveVoice available to the program, the following line was added to the HTML page.

<script type="text/javascript" src="https://code.responsivevoice.org/responsivevoice.js"></script>

The above covers the major additions to the program; there is still plenty of other code needed to make it run and produce the correct output. If you want to see more of the technical side, visit my GitHub repository below for the rest of the code.

The final outcome of the project allows the user to easily print words onto the screen. Once the words are written, the individual words will be read out loud.

Demo:

In the video below, I print the word "Tea" using the Morse code pattern.

In this next video, I print a more complicated word, "Time", using the same pattern.

The program is very effective and can produce words very quickly.

This was an interesting project for me, as I learned more about EEG waves and got to create something that can help people. There is a lot that can be done in the field of Brain-Computer Interfaces, and I am really excited to see how it will further impact people's lives in the future.
