Mind Reading with Brain Computer Interfaces

Lola
DataDrivenInvestor


Since my first post last month about generative adversarial networks, my interest in neural networks has persisted. In that post I discussed how AI can be used to generate original images. In this post, I'll be exploring another fascinating way in which neural networks can be used: to extract information from the brain and facilitate communication between brains and computers, and even between brains.

You may have noticed that 'mind-reading AI' has been a hot topic in the media this year, from news of Facebook exploring technology to let you write posts with your thoughts, to Elon Musk recently announcing his latest venture, Neuralink: a 'neural lace' to be implanted under the scalp to enable one to interact with machines.

Despite the recent news surrounding it, 'mind-reading' technology has actually been around for decades. There are two commonly used methods for reading brain activity: electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Until the last couple of decades, however, it was quite tricky to make use of the data these methods produce. Thanks to advances in artificial intelligence, that data is becoming far easier to decipher. A recording method paired with a system that decodes its output into commands forms what is known as a brain-computer interface (BCI).

fMRI is the more recent invention of the two. It tracks neuronal activity indirectly, by tracking blood flow throughout the brain. When neuronal activity in a particular area of the brain increases, the demand for oxygen in that area increases, and blood flow to the area rises to deliver it. This increased blood flow is what the fMRI scanner picks up. It shows us which area of the brain was working at a particular moment, which provides some clues as to what is going on. However, it's not easy to figure out from this alone exactly what the person was thinking at that moment.

Enter neural networks. As discussed in my previous post, neural networks are systems designed to mimic human neuronal activity that progressively improve at a task through training. Neural networks can be used to decipher patterns in fMRI scans and associate particular patterns of brain activity with particular thoughts or activities. To do this, a network needs to be trained on tens or hundreds of thousands of fMRI scans, each labeled with the activity that was occurring at the time. For example, it might be fed many scans recorded while a subject looked at a particular image; eventually, it learns to recognize on its own the brain activity that occurs while looking at that image. A rough sketch of this training setup is shown below.
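To make this concrete, here is a minimal sketch of that training setup in PyTorch. The data is randomly generated to stand in for real scans, and the voxel count, class count and network size are all illustrative rather than taken from any real study.

```python
# A minimal sketch of training a classifier on labeled fMRI scans.
# Assumption: each scan has been flattened into a vector of voxel values;
# all numbers below are illustrative, and the data is random stand-in data.
import torch
import torch.nn as nn

NUM_VOXELS = 5000    # voxels per flattened scan (illustrative)
NUM_CLASSES = 15     # e.g. face, bird, plane, ...

scans = torch.randn(1000, NUM_VOXELS)            # stand-in "scans"
labels = torch.randint(0, NUM_CLASSES, (1000,))  # stand-in stimulus labels

model = nn.Sequential(
    nn.Linear(NUM_VOXELS, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(scans), labels)
    loss.backward()
    optimizer.step()

# Once trained on real labeled scans, the model can predict which
# stimulus a previously unseen scan corresponds to:
predicted_class = model(scans[:1]).argmax(dim=1)
```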

fMRI brain scan — side view

There have been some great studies demonstrating the use of neural networks with fMRI. Zhongming Liu and his team at Purdue University trained a neural network to identify which of 15 categories of image (including a face, a bird and a plane) a person was looking at. To make this possible, they trained the network on many fMRI scans of human brain activity recorded while subjects looked at each of those things. The success rate was about 50%, which is fairly impressive given that guessing at random among 15 options would succeed only about 7% of the time.

fMRI-based BCIs have also been used to recreate videos based on what a person was watching at the time of the brain scan. Jack Gallant and his team at UC Berkeley set this up by recording brain activity while a subject watched hours' worth of movie trailers. The brain activity was mapped to the content of the corresponding video frames, and this pairing was used to train the neural network. Through this, the network learned to predict the image someone was watching based on the recorded brain activity. Below is a video showing the most accurate results it produced. If you've read my previous post on generative adversarial networks, this will sound quite similar. The model used here is in fact a different type of generative model, but generative adversarial networks have indeed been used in other cases, such as this study, where the system generated images and videos based on fMRI input.

Movie reconstruction from human brain activity. Jack Gallant et al.
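Gallant's actual pipeline is more sophisticated, but the core decoding idea can be sketched simply: learn a mapping from voxel activity to image features, then reconstruct by retrieving and averaging the frames from a large video library whose features best match the prediction. Everything below (the shapes, the feature space, the library) is a made-up stand-in for illustration.

```python
# A simplified sketch of decoding video from brain activity: fit a linear
# map from voxels to image features, then retrieve best-matching library
# frames. This illustrates the idea, not Gallant's actual method.
import numpy as np

rng = np.random.default_rng(0)
NUM_VOXELS, FEATURE_DIM = 2000, 64

# Stand-ins: brain responses paired with features of the frames being watched.
train_brain = rng.standard_normal((500, NUM_VOXELS))
train_feats = rng.standard_normal((500, FEATURE_DIM))

# Least-squares linear decoder from voxels to image features
# (real studies typically use regularised regression).
W, *_ = np.linalg.lstsq(train_brain, train_feats, rcond=None)

# For a new scan, predict image features and find the library frames
# whose features match best; averaging those frames gives the blurry
# reconstructions seen in the video above.
library_feats = rng.standard_normal((10000, FEATURE_DIM))
new_scan = rng.standard_normal(NUM_VOXELS)
predicted_feats = new_scan @ W
scores = library_feats @ predicted_feats
best_frames = np.argsort(scores)[-10:]  # indices of the ten closest frames
```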

Similar work was also performed by Yukiyasu Kamitani, published in an article earlier this year, using a neural network to reconstruct images of letters a person was looking at, based on fMRI activity (image below). Kamitani has also worked on identifying objects in dreams using the same technique.

Alphabetical letter reconstructions. The top row shows the images presented to the participants. The other rows show the images generated by the neural network based on fMRI activity.

Despite the amazing work that has been produced using fMRI, there are limitations. It is incredibly expensive. The method itself (lying still in a large machine) is totally impractical for everyday use. It’s also quite difficult to obtain the amount of data required to train the neural networks.

As an alternative, an electroencephalogram (EEG) can be used to read brain patterns instead. Whereas fMRI produces images of brain activity inferred from blood flow in particular areas, EEG records the brain's electrical signals directly. A cap (similar to a swimming cap, with chin straps) covered in conductive electrodes is placed on the scalp; the electrodes record the electrical currents that occur when neurons communicate.

An EEG cap

These recordings appear as waves on a graph. There are five brainwave bands typically used in EEG work (though there are many, many more), each associated with different types of brain activity.

The 5 types of brainwaves. Source — http://neurofeedbackalliance.org/understanding-brain-waves/
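To give a sense of how these bands are read out of a raw trace, here is a small sketch: take the Fourier transform of the signal and sum the power falling within each band. The band edges below are common conventions (exact cut-offs vary between sources), and the 'EEG' is just a fake 10 Hz sine wave, so the alpha band should dominate.

```python
# Sketch of extracting band power from an EEG trace via the FFT.
# The signal here is synthetic; the band edges are common conventions.
import numpy as np

FS = 256                              # sampling rate in Hz
t = np.arange(0, 4, 1 / FS)           # 4 seconds of samples
signal = np.sin(2 * np.pi * 10 * t)   # fake EEG: a pure 10 Hz (alpha) wave

bands = {
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
    "beta": (12, 30), "gamma": (30, 100),
}

freqs = np.fft.rfftfreq(len(signal), 1 / FS)
power = np.abs(np.fft.rfft(signal)) ** 2

for name, (lo, hi) in bands.items():
    band_power = power[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name}: {band_power:.1f}")
# Output shows nearly all the power in the alpha band, as expected.
```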

Electroencephalography has been around for almost a century, but combining it with a neural network has massively expanded its applications. There are a couple of drawbacks with EEG that can make it difficult to read patterns. Firstly, as bone is a poor conductor of electricity, the skull interferes with the reading. It's also quite difficult to isolate specific brain activity from the unwanted noise generated by everything else the brain is doing.

One way around this is to get the electrodes closer to the brain by surgically implanting them under the skull, which is how its creator, Hans Berger, originally designed the EEG, and is what Elon Musk is working on with his new venture, Neuralink, in the form of neural lace: an ultra-thin mesh implanted directly onto the brain. As bizarre as this sounds, it has already been performed successfully on mice. Of course, implants most likely won't be an option for most consumers in the near future. The less invasive cap, however, coupled with a good neural network to extract the useful information, is far more feasible for consumer use and has already produced some amazing applications. EEG-based BCIs have been used to control wheelchairs, neuroprosthetics, drones and even humanoid robots.

More recently, BCIs are increasingly being researched for use in gaming and social media. Last year, Mark Zuckerberg wrote in a Facebook post:

‘We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.’

As if that wasn't bizarre enough, BCIs are also being used to facilitate communication between human brains. Rao and his team at the University of Washington created BrainNet, the first multi-person non-invasive direct brain-to-brain interface. It allowed three people to communicate via BCI to play a game of Tetris. Two of them were assigned the role of 'senders', delivering instructions to the third, the 'receiver', on whether or not to rotate a Tetris block.

The only way the receiver could play the game successfully was by receiving information from the senders, as they could see the pile of blocks at the bottom of the screen and the receiver could not.

The two senders wear EEG caps to transmit their signals. Each sender generates a brain signal by looking at one of two flashing LEDs: one meaning yes (do rotate) and one meaning no (do not rotate). Looking at either light produces a distinctive signal, which is sent over the internet to the neural network that recognises it.

Brain-to-brain communication via BCIs and CBIs

The neural network converts the raw EEG data into a single command, which is then delivered to the visual cortex of the receiver using transcranial magnetic stimulation (TMS). If this works correctly, a sender's 'yes' appears to the receiver as a phosphene (one of the strange patterns of light you see when you rub your eyes), while a 'no' does not deliver enough energy to trigger one. The receiver also wears an EEG cap, and performs the task of rotating (or not rotating) the Tetris block by looking at a yes or no LED just like the senders'; that signal is in turn picked up by the neural network and acted on. They managed to complete the game with about 70% accuracy, which is quite amazing.
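On the sender side, this works because staring at a light flickering at a fixed rate produces a matching oscillation in the visual cortex, a steady-state visual evoked potential (SSVEP). Telling 'yes' from 'no' then reduces to comparing the EEG power at the two flicker frequencies, sketched below. The frequencies and the toy signal are illustrative, not the study's exact parameters.

```python
# Sketch of SSVEP-style decoding of a sender's choice: compare EEG power
# at the two LED flicker frequencies. Frequencies and data are illustrative.
import numpy as np

FS = 256                    # sampling rate in Hz
YES_HZ, NO_HZ = 17.0, 15.0  # flicker rates of the "yes" and "no" LEDs

def decode_command(eeg: np.ndarray) -> str:
    """Return the command whose flicker frequency carries more power."""
    freqs = np.fft.rfftfreq(len(eeg), 1 / FS)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    p_yes = power[np.argmin(np.abs(freqs - YES_HZ))]
    p_no = power[np.argmin(np.abs(freqs - NO_HZ))]
    return "rotate" if p_yes > p_no else "do not rotate"

# Fake two-second trace from a sender staring at the 17 Hz "yes" LED:
t = np.arange(0, 2, 1 / FS)
eeg = np.sin(2 * np.pi * YES_HZ * t) + 0.5 * np.random.randn(len(t))
print(decode_command(eeg))  # -> "rotate"
```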

As you can see, reading thoughts is no longer a science-fiction fantasy; it is quickly becoming a reality. Though the technology is still at a relatively early stage, it can only progress further, and the possibilities are really exciting to think about. Being able to store your own thoughts on a hard drive and put them back in when needed. Revisiting your dreams and sharing them with others. Perhaps we could even communicate with animals one day! Of course, as with most technology, there are a lot of potential negative implications, which I'm sure you can think of, but regardless, I'm quite excited to see where this goes. It's also pretty cool to think that we've essentially created a system to mimic the human brain, and then used it to decipher the human brain.
