Thought as an interface

Richard Burton
7 min read · Aug 25, 2015


A couple of weeks ago I attended the Consciousness Hacking Meetup in San Francisco. After the presentations, I had the pleasure of meeting a guy called William Duhe who is working on a more efficient way to read the electrical signals in the brain.

Trying on a headset by http://choosemuse.com/. Chatting to William about his work. Seeing a bunch of brainwave data represented as images.

The technical term for this apparatus is a non-invasive Brain Computer Interface that reads electroencephalography data. Electroencephalography — pronounced “electro-en-sef-a-log-ra-fee” — is shortened to EEG. William showed me the system he’d built and the algorithms he was using to read EEG data from a Muse headset, which had contact points on my forehead and behind my ears. The software asked me to blink 5 times at specific moments. The electrical signals in my brain were represented on the screen as a multicoloured image — rather like an infra-red heat map — and then this image was fed to a computer vision algorithm. Once the computer had a rough idea of what my blink looked like, it knew whenever I was blinking. With a little more training, it would be able to recognise when I was looking in specific directions.
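
To make the idea concrete, here’s a rough sketch in Python of how a blink detector like that could work. This isn’t William’s pipeline: the sample rate, the frequency band, and the threshold are all assumptions. The gist is that blinks show up as big, slow deflections in the frontal channels.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_blinks(eeg, fs=256.0, threshold_uv=100.0):
    """Flag likely eye blinks in a single frontal EEG channel.

    Blinks are large, slow deflections, so keep only the low-frequency
    band (1-10 Hz) and look for samples whose amplitude crosses a
    threshold learned during the "blink 5 times" calibration step.
    """
    b, a = butter(4, [1.0, 10.0], btype="band", fs=fs)
    low_freq = filtfilt(b, a, eeg)
    above = np.abs(low_freq) > threshold_uv
    # Collapse runs of above-threshold samples into single blink onsets.
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return onsets / fs  # blink onset times in seconds

# Example with synthetic data: 10 s of noise plus two fake "blinks".
fs = 256.0
t = np.arange(0, 10, 1 / fs)
signal = np.random.normal(0, 10, t.size)
for onset in (2.0, 6.5):
    idx = (t > onset) & (t < onset + 0.3)
    signal[idx] += 150 * np.hanning(idx.sum())
print(detect_blinks(signal, fs))
```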

Photo credit: My friend Layla Myers took this photo at an installation that’s part of the Venice Biennale at a church called San Giorgio Maggiore.

I’ve read about the concept of a mind mouse before and always thought it sounded interesting but unimportant. However, when I actually experienced a computer reacting to my thoughts, it was really exciting. It felt more direct, more obvious. It felt like I was getting a glimpse of what it’ll be like to interact with computers in the future. Today, we tell computers what we are thinking through our hands and our vocal cords. After using this prototype, I cannot stop thinking about what it’ll be like to use my thoughts to control a computer.

Why is this an interesting time for this technology?

As far as I can tell, what’s new is that the devices that read the brain activity are getting smaller and the algorithms that look for patterns are getting smarter. The trend is promising: at some point developers will be able to build software on top of APIs for thought.
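
To show what I mean by an API for thought, here’s a toy sketch in Python. Every name in it (ThoughtEvent, ThoughtSession, subscribe) is invented; it’s just the shape I imagine a developer-facing layer might take.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ThoughtEvent:
    label: str         # e.g. "blink", "look_left", "pause_music"
    confidence: float  # how sure the classifier is, 0.0-1.0

class ThoughtSession:
    """Hypothetical app-facing layer: apps subscribe to labelled thoughts."""

    def __init__(self):
        self._handlers: dict[str, list[Callable[[ThoughtEvent], None]]] = {}

    def subscribe(self, label: str, handler: Callable[[ThoughtEvent], None]) -> None:
        self._handlers.setdefault(label, []).append(handler)

    def dispatch(self, event: ThoughtEvent) -> None:
        # In a real system this would be driven by the headset's classifier.
        for handler in self._handlers.get(event.label, []):
            handler(event)

session = ThoughtSession()
session.subscribe("blink", lambda e: print(f"blink detected ({e.confidence:.0%})"))
session.dispatch(ThoughtEvent("blink", 0.92))
```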

When I’ve been daydreaming about these concepts, it’s reminded me of a wonderful passage in Foundation’s Edge. The lead character, Trevize, places his hands on the ship’s highly advanced computer and this is a description of how it felt:

As he and the computer held hands, their thinking merged and it no longer mattered whether his eyes were open or closed. Opening them did not improve his vision nor did closing them dim it.

Either way, he saw the room with complete clarity — not just in the direction in which he was looking, but all around and above and below.

He saw every room in the spaceship and he saw outside as well. The sun had risen and its brightness was dimmed in the morning mist, but he could look at it directly without being dazzled, for the computer automatically filtered the light waves.

He felt the gentle wind and its temperature, and the sounds of the world about him. He detected the planet’s magnetic field and the tiny electrical charges on the wall of the ship.

He became aware of the controls of the ship, without even knowing what they were in detail. He knew only that if he wanted to lift the ship, or turn it, or accelerate it, or make use of any of its abilities, the process was the same as that of performing the analogous process to his body. He had but to use his will. …

He found — as he cast the net of his computer-enhanced consciousness outward — that he could sense the condition of the upper atmosphere; that he could see the weather patterns; that he could detect the other ships that were swarming upward and the others that were settling downward. All of this had to be taken into account and the computer was taking it into account. If the computer had not been doing so, Trevize realized, he need only desire the computer to do so — and it would be done.

So much for the volumes of programming; there were none.

This passage seems like the ideal state for an interface between a human and a computer. A future vision written by Isaac Asimov in 1982 (!) and a guide for what to work towards.

Think Siri

What I want to put forward here are a few ideas that seem within grasp given the current state of the technology. I’m imagining that thoughts would augment our current input methods, not replace them entirely. Siri works with the iPhone and Apple Watch through voice; it doesn’t replace touch. Siri also feels like the perfect place to add this functionality: Think Siri.

Don’t be a Glasshole

In order for this stuff to take off, the hardware that reads our brainwaves has to be socially acceptable. Right now brain activity is usually measured with large, scary-looking apparatus that has lots of contact points with the skin.

This kind of hardware suffers from the Glasshole problem. No one is going to wear this all the time.

As the hardware gets smaller and the algorithms for picking out unique thoughts improve, it’ll be possible to embed these sensors into hardware that’s already socially acceptable. Tiny brainwave monitors could be built into headphones, large earrings, or sleeker handsfree-style kits that rest just behind the ear.

Think Siri is my working name for this concept. It could start out in the high-end headphones in the Beats range.

What are you thinking?

A critical part of the process would be teaching the system what your thoughts look like. What signals does your brain emit when you blink, when you look up or when you think “Safari”?
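
Here’s a minimal sketch of what that training step might look like, assuming each labelled thought has already been recorded as a short EEG window and reduced to a flat feature vector. The data is synthetic and scikit-learn’s plain logistic regression is standing in for whatever the real classifier would be.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
labels = ["blink", "look_up", "safari"]

# Stand-in for real calibration recordings: 30 windows per label,
# 16 features each, with a per-label offset so they are separable.
X = np.vstack([rng.normal(i, 1.0, size=(30, 16)) for i in range(len(labels))])
y = np.repeat(labels, 30)

classifier = LogisticRegression(max_iter=1000).fit(X, y)

# Later, a new window from the headset gets mapped to the closest thought.
new_window = rng.normal(2, 1.0, size=(1, 16))
print(classifier.predict(new_window))  # hopefully "safari"
```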

ThoughtID

In time, the algorithms will be advanced enough to recognise our thought patterns in the same way that TouchID authenticates us based on our fingerprint.
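
A toy version of that idea: enrol a few samples of a “pass-thought”, average them into a template, and accept or reject new attempts by similarity. The threshold is made up and a real biometric system would need far more than cosine similarity, but it shows the shape of the thing.

```python
import numpy as np

def enrol(samples: np.ndarray) -> np.ndarray:
    # Average several enrolment recordings into one template.
    return samples.mean(axis=0)

def verify(template: np.ndarray, attempt: np.ndarray, threshold: float = 0.9) -> bool:
    # Accept the attempt if it points in nearly the same direction as the template.
    cosine = attempt @ template / (np.linalg.norm(attempt) * np.linalg.norm(template))
    return cosine >= threshold

rng = np.random.default_rng(1)
owner_thought = rng.normal(size=64)
template = enrol(owner_thought + rng.normal(0, 0.1, size=(5, 64)))

print(verify(template, owner_thought + rng.normal(0, 0.1, size=64)))  # True: the owner
print(verify(template, rng.normal(size=64)))                          # False: someone else
```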

Pause for thought

Thought sensors in EarPods would allow people to pause their music by thinking.
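
A rough sketch of how that could be wired up, with a stub standing in for the real media controls and faked classifier output. The point is just that a confidently recognised “pause” thought becomes a playback command.

```python
class Player:
    """Stub for whatever actually controls playback."""

    def __init__(self):
        self.playing = True

    def toggle(self):
        self.playing = not self.playing
        print("paused" if not self.playing else "playing")

def on_thought(label: str, confidence: float, player: Player) -> None:
    # Require a confident detection so stray signals don't stop the music.
    if label == "pause" and confidence > 0.8:
        player.toggle()

player = Player()
for label, confidence in [("blink", 0.9), ("pause", 0.6), ("pause", 0.95)]:
    on_thought(label, confidence, player)  # only the last event pauses
```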

Ouch!

Way too loud!

Whenever I plug my headphones in and have the volume set too high, my brainwaves must be off the charts with stress and pain. Catching moments like that — or better yet, predicting them — would really improve the experience. My computer will look after me 😍
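
Here’s a sketch of what catching that moment might look like, with an invented stress score standing in for whatever the headset would actually measure. The thresholds are placeholders.

```python
def adjust_volume(volume: float, stress_readings: list[float],
                  spike_threshold: float = 0.8) -> float:
    """If stress spikes right after playback starts, knock the volume down."""
    recent_spike = any(s > spike_threshold for s in stress_readings)
    if recent_spike:
        return max(0.0, volume * 0.5)  # halve it rather than guessing a "right" level
    return volume

# Playback starts at 90% volume and the listener visibly winces.
print(adjust_volume(0.9, [0.2, 0.3, 0.95]))  # -> 0.45
print(adjust_volume(0.4, [0.1, 0.2, 0.25]))  # -> 0.4, nothing to fix
```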

I’m hungry

As the software gets familiar with your thought patterns, it should start to recognise what you want from each app. My phone will know when I’m hungry and pre-load Sprig as I pick it up.

Window management

Thoughtcuts? (I’ll get my coat on the way out.)

I’m a keyboard shortcut junkie for window management. I’d love to think “full screen” to maximise. Perhaps to start with I would think “window” and then be presented with some options for what to do. At this stage I could look left or right to pick maximise or minimise.
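
A little sketch of that two-step flow: think “window” to arm a mode, then look left or right to pick the action. The window manager call is a stub and the two options are just the ones I mentioned above.

```python
class WindowThoughtcuts:
    def __init__(self):
        self.armed = False

    def on_event(self, event: str) -> None:
        if event == "think_window":
            # The "window" thought arms a short mode and shows the options.
            self.armed = True
            print("options: look left = maximise, look right = minimise")
        elif self.armed and event in ("look_left", "look_right"):
            action = "maximise" if event == "look_left" else "minimise"
            print(f"window manager: {action} front window")  # stub for the real call
            self.armed = False

wm = WindowThoughtcuts()
wm.on_event("think_window")
wm.on_event("look_left")   # -> maximise
wm.on_event("look_right")  # ignored: the mode has already cleared
```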

Thought rewind

QuickTime for your thoughts.

Sometimes everything just flows. You’re focussed, productive, and in the zone. Other times your mind is all over the place. When we think about improving ourselves we try to think back and reason about why we did well or poorly. I’d love to cycle back through my thoughts for the day and use software to better understand myself. The quantified self movement uses lots of proxies for this data. I have experimented with habit tracking, RescueTime and Strava. I feed all that data into my Gyroscope profile to analyse it. Tools like this will help me understand why I’m happy or sad, focussed or distracted, calm or angry, and help me improve myself.

Lots to think about

Some thoughts on how this stuff might work. Talking into Siri in public places still feels weird to me. Sharing my thoughts with Siri privately is much more appealing.

I’ve only been thinking about the possibilities of this interface for a couple of weeks. I put these mockups together early this morning. My sketchbook is filling up with ideas for how we could design for thought.

