A Computer Can Generate Images by Predicting Your Thoughts

Researchers have developed computer models that try to predict the image you’re thinking about by monitoring signals from your brain.

Sritan Motati
TechTalkers
4 min read · Nov 5, 2020

--

Graphic of how the computer model works (Picture Credit: University of Helsinki)

When you look at other people, do you ever wonder, “What are they thinking?” We can’t see what someone else is thinking, and for people who can’t speak and have no easy way to show their thoughts to others, that is a major barrier. Luckily, computer modeling techniques and brain-computer interfaces have grown more powerful and have opened up countless possibilities in the field of neuroscience.

Researchers at the University of Helsinki, located in Finland, have developed a generative adversarial network that uses electroencephalogram (EEG) signals from a person’s brain to predict what they’re thinking about. That may just be a bunch of weird words to you, but keep reading to find out how this network works and why you should be excited about the future of this technology.

How It Works

Generative adversarial networks (GANs) are deep learning models built from two artificial neural networks (algorithms that learn relationships in data) that compete with each other: a generator that creates new data and a discriminator that tries to tell the generated data apart from real examples. Trained together, they are used for generative modeling tasks such as producing photorealistic images, generating 3D objects, and even the face aging you can see in many apps.
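To make this concrete, here is a minimal, hypothetical GAN sketch written with TensorFlow/Keras (the library mentioned at the end of this article). It is a toy model on 28x28 images, not the pre-trained face GAN from the study, and names like train_step are my own; it only illustrates the generator-versus-discriminator idea described above.

```python
# A minimal GAN sketch: a generator maps random noise to 28x28 images and a
# discriminator tries to tell generated images from real ones.
# Illustrative toy only, not the pre-trained face GAN used in the study.
import tensorflow as tf

latent_dim = 100  # size of the random noise vector fed to the generator

# Generator: noise vector -> 28x28 image with pixel values in [0, 1]
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Discriminator: image -> probability that the image is real
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    batch_size = tf.shape(real_images)[0]
    noise = tf.random.normal([batch_size, latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fake_images, training=True)
        # Discriminator: label real images 1 and generated images 0
        d_loss = bce(tf.ones_like(real_pred), real_pred) + \
                 bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator: try to make the discriminator output 1 for fakes
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```

In each training step the discriminator learns to label real images as real and generated images as fake, while the generator is updated to push its outputs toward fooling the discriminator.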

Diagram of EasyCAP, which was used to record EEG signals in the study (Picture Credit: Brain Support)

In this study, the researchers used a pre-trained GAN to generate face images (these initial images were not generated from EEG signals). The images were then sorted into eight categories based on the characteristics visible in each one (e.g., blonde hair).

To evaluate the GAN that generated images from EEG signals (electrical signals that reflect brain activity), the researchers showed 31 volunteers images from the eight categories and instructed them to concentrate on a specific feature visible in the pictures. While the participants did this, the researchers fed the EEG signals from their brains into the GAN. The model estimated which feature each subject was looking for and, using this data, updated itself. Finally, it generated new images from a subject’s EEG signals alone. These images were evaluated by the participants, and they matched the features the participants were thinking about almost perfectly: the model achieved 83% accuracy!
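The paper itself describes the exact procedure, but as a rough, hypothetical sketch of such a feedback loop: an EEG classifier flags which displayed images were relevant to what the participant was concentrating on, and the latent codes of those images are combined into a single vector that can be fed back to the generator. Everything below (the synthetic data, the LDA classifier, the averaging step) is an illustrative assumption, not the researchers’ actual code.

```python
# Hypothetical sketch of a neuroadaptive feedback loop (illustrative only).
# Synthetic data stands in for real EEG epochs and GAN latent vectors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_images, n_eeg_features, latent_dim = 200, 64, 512

# Latent vectors of the images shown to the participant (one per image)
latents = rng.normal(size=(n_images, latent_dim))

# EEG features recorded while each image was viewed, plus calibration labels
# saying whether that image contained the target feature
eeg_features = rng.normal(size=(n_images, n_eeg_features))
is_relevant = rng.integers(0, 2, size=n_images)

# 1. Train a relevance classifier on the calibration data
clf = LinearDiscriminantAnalysis()
clf.fit(eeg_features, is_relevant)

# 2. During the task, predict from brain responses which new images match
#    the participant's intention
new_eeg = rng.normal(size=(50, n_eeg_features))
new_latents = rng.normal(size=(50, latent_dim))
predicted_relevant = clf.predict(new_eeg).astype(bool)

# 3. Combine the latent codes of the relevant images into a single vector
#    that could be fed back into the generator to produce a new image
if predicted_relevant.any():
    intention_latent = new_latents[predicted_relevant].mean(axis=0)
else:
    intention_latent = new_latents.mean(axis=0)

print("Estimated intention latent vector shape:", intention_latent.shape)
# generated_image = generator(intention_latent)  # hypothetical GAN call
```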

Generated images for 16 participants and all 8 features (Picture Credit: Nature)

The process of using data from the brain for generation purposes is referred to as neuroadaptive generative modeling. Tuukka Ruotsalo, an Associate Professor at the University of Copenhagen, Denmark, says:

“The technique combines natural human responses with the computer’s ability to create new information. In the experiment, the participants were only asked to look at the computer-generated images. The computer, in turn, modeled the images displayed and the human reaction toward the images by using human brain responses. From this, the computer can create an entirely new image that matches the user’s intention.”

Overview of neuroadaptive generative modeling (Picture Credit: Nature)

Possible Uses

Although this study only focused on generating images of human faces, the potential for neuroadaptive generative modeling is sky-high. One possible use of this technique is augmenting human creativity. This could include giving people the ability to draw by simply focusing on where they want to draw something. Ruotsalo says,

“If you want to draw or illustrate something but are unable to do so, the computer may help you to achieve your goal. It could just observe the focus of attention and predict what you would like to create.”

Diagram of GAN (Picture Credit: Mark Farragher (Medium))

This technology can also be used to learn more about human perception. The study shows that neuroadaptive generative modeling can uncover associations between image features and EEG signal patterns, so in theory it could reveal how we perceive certain traits. It could even highlight differences in brain activity and perception between individuals.
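As a hedged illustration of that kind of analysis, one could test whether EEG responses carry information about a single visual feature by trying to decode it from the signals with cross-validation; decoding reliably above chance would suggest an association. The data and feature below are synthetic placeholders, not results from the study.

```python
# Hypothetical illustration: test whether EEG responses carry information about
# a visual feature (e.g. "blonde hair") by decoding it with cross-validation.
# The data here is synthetic; a real analysis would use recorded EEG epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_eeg_features = 300, 64
eeg = rng.normal(size=(n_trials, n_eeg_features))
has_feature = rng.integers(0, 2, size=n_trials)  # did the shown face have the trait?

# Chance-level accuracy is ~0.5; a score reliably above chance would suggest
# an association between the feature and the EEG pattern.
scores = cross_val_score(LogisticRegression(max_iter=1000), eeg, has_feature, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```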

Artificial intelligence and machine learning took plenty of inspiration from the human brain, and now they’re helping us understand it! GANs are getting more popular every day as more people learn to program them using libraries like TensorFlow. Hopefully, the knowledge we gain from studying EEG patterns with deep learning will one day help people with disabilities or neurodegenerative diseases. I can’t wait to see where the intersection of neuroscience and AI goes next.
