Keep your secrets carefully in your mind: computers can read them!

Redita
Published in IEEE SB KUET
Jan 9, 2021

What if your imagination could be seen on the big screen of a computer? Suppose your brilliant ideas or secret thoughts were being played on a monitor in front of a crowd of people. Imagine how wonderful it would be to have a photo of your loved ones in your favorite place that was never captured by a camera, or to design a restaurant entirely inside your head, without ever picking up a pen, and watch that design take shape on a giant computer screen. Very recently, researchers have experimented with reading human mind activity into an AI-enriched computer with the help of fMRI or EEG (electroencephalography).

What is fMRI?

fMRI (functional magnetic resonance imaging) is a machine that uses a powerful magnetic field, radio waves, and a computer to trace oxygen- and glucose-rich blood inside the brain, demonstrating the time-varying metabolic changes of different brain regions during their resting and activation phases.

Did you know that our blood has a magnetic feature? Hemoglobin is diamagnetic when it is oxygenated and paramagnetic when it is deoxygenated. fMRI detects any tiny change in magnetic properties that depends on the degree of oxygenation. For instance, when blood rushes to the particular brain region responsible for a specific task (the activation phase), oxygenation increases following neural activation, and fMRI tracks that.

A research team at Kyoto University, Japan used fMRI to analyze the brain's response to external stimuli such as viewing real-life images or hearing sounds, treating the fMRI signal as a proxy for neural activity in order to figure out what a person is seeing. They mapped out visual processing areas at a resolution of 2 millimeters. But instead of painting over the painting until the perfect image appeared on the computer, the team developed a deep neural network (DNN) whose functional activity resembles the brain's hierarchical visual processing. The raw fMRI data were translated into DNN features, and the team's algorithm then optimized the pixels of the decoded image so that its features matched the ones decoded from the brain, drawing on information from different levels of the brain's visual system. On top of that, a deep generator network (DGN) was added to make the results more reliable, so that the details of the reconstructed picture are much more precise.
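To make that feature-matching step concrete, here is a minimal Python/PyTorch sketch, not the Kyoto team's code: start from noise and nudge the pixels until the image's DNN features move toward features "decoded" from fMRI. The pretrained VGG network, the chosen layer, and the random stand-in for decoded features are all assumptions for illustration.

```python
import torch
import torchvision.models as models

# A pretrained CNN stands in for the hierarchical visual model.
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

def features(img, layer=16):
    """Run the image through the CNN up to a chosen intermediate layer."""
    x = img
    for i, module in enumerate(cnn):
        x = module(x)
        if i == layer:
            break
    return x

# Hypothetical features decoded from fMRI activity (a random stand-in here).
decoded = torch.randn_like(features(torch.zeros(1, 3, 224, 224)))

# Start from noise and optimize the pixels so the image's features
# approach the decoded ones (the "painting over painting" loop).
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(features(img), decoded)
    loss.backward()
    opt.step()
```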

Limitations:

The pictures produced by this AI model may bear little resemblance to the actual picture, or the system may be constrained to a predetermined set of objects it can identify. To mitigate these limitations, EEG comes into the picture.

What is EEG?

Have you ever compared your brain to a signal generator or a complex circuit? Your body is wired with more than a hundred thousand kilometers of nerves running out from the spinal cord, transmitting electrical impulses throughout the whole body. An EEG noninvasively detects brain waves, the electrical activity of your cerebrum, using electrodes placed around the scalp. Any change in that electrical activity is amplified and appears as a graph on a computer or printed on paper. The EEG scan can then be studied using a neural network.
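For a sense of what "studying" an EEG trace can look like before a neural network ever sees it, here is a minimal Python sketch with NumPy and SciPy. The sampling rate and the raw signal are made-up placeholders; it simply band-pass filters one electrode's trace and estimates the power in the alpha band.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # ten seconds of data
raw = np.random.randn(t.size)  # stand-in for one electrode's voltage trace

# Keep only the 1-40 Hz range where most EEG rhythms live.
b, a = butter(4, [1, 40], btype="bandpass", fs=fs)
clean = filtfilt(b, a, raw)

# Estimate how much power falls in the alpha band (8-12 Hz).
freqs, psd = welch(clean, fs=fs, nperseg=fs * 2)
alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"alpha-band power: {alpha_power:.4f}")
```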

Researchers from the Russian company Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize, in real time, what a person is observing, directly from their brain activity, with the help of artificial intelligence and a brain-computer interface (BCI). The process is inspired by the actual-image mimicking method. In the first phase of the experiment, the neurologists asked subjects to watch 120 ten-second YouTube video fragments drawn from five arbitrary categories: abstract shapes, waterfalls, human faces, moving mechanisms, and motorsports.

Source: https://techxplore.com/news/2019-10-neural-network-reconstructs-human-thoughts.html

After analyzing the EEG data, the researchers could see that the brain responded differently to videos from each of the listed categories.

Illustration: Brain-computer interface. Credit: Anatoly Bobe/Neurobotics and @tsarcyanide/MIPT Press Office

In the second phase of the experiment, the researchers built two neural networks aimed at generating, from the EEG signal, pictures that closely resemble the actual ones. To do so, three categories were selected at random from the original five. (A rough sketch of this two-network idea follows the figure below.)

Operation algorithm of the brain-computer interface (BCI) system. Credit: Anatoly Bobe/Neurobotics, and @tsarcyanide/MIPT Press Office
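As a rough illustration of the two-network idea, and not the MIPT/Neurobotics architecture itself, here is a minimal PyTorch sketch: one small network compresses an EEG window into a feature vector, and a second network decodes that vector into an image. All channel counts, layer sizes, and image dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Maps a multi-channel EEG window to a compact feature vector."""
    def __init__(self, channels=32, samples=500, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * samples, 256),
            nn.ReLU(), nn.Linear(256, dim))
    def forward(self, x):
        return self.net(x)

class ImageDecoder(nn.Module):
    """Expands the feature vector back into a small RGB image."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 8 * 8 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

eeg = torch.randn(1, 32, 500)                 # one fake EEG window
image = ImageDecoder()(EEGEncoder()(eeg))     # untrained, just for shape
print(image.shape)                            # torch.Size([1, 3, 32, 32])
```

In a real pipeline the two networks would be trained jointly on pairs of EEG windows and the video frames the subject was watching; here they are left untrained just to show how the pieces fit together.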

Technological Telepathy:

As you read this article, you hear the words in your head. AI can peep into your mind and turn your inner dialogue into speech by tracking brain activity with the help of fMRI or EEG. You may wonder how this could be possible. Take a sneak peek.

An AI system can interpret thoughts into sentences, but only within a vocabulary of around 250 words. Joseph Makin at the University of California, San Francisco, and his colleagues ran an experiment with four women who had epilepsy, using EEG technology to record their brain activity. Each woman was asked to read a set of sentences aloud at least twice; the sentences contained around 250 unique words in total, and the team monitored brain activity throughout. They noticed that each time a person spoke the same sentence, the brain activity changed slightly: it was similar, but not identical. “Memorizing the brain activity of these sentences wouldn’t help, so the network instead has to learn what’s similar about them so that it can generalize to this final example,” said Makin. So the team decoded brain activity word by word rather than sentence by sentence. Done this way, the system is more trustworthy, but it is limited to a vocabulary of 250 words.
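As a rough illustration of that word-level approach, and not the UCSF team's actual model, here is a minimal PyTorch sketch: a recurrent network reads a sequence of brain-activity features and, at each step, picks one word out of a closed vocabulary of about 250 words. Feature sizes and layer widths are assumptions.

```python
import torch
import torch.nn as nn

VOCAB = 250          # closed vocabulary, as described above
FEATURES = 128       # dimensionality of one time step of neural features (assumed)

class BrainToWords(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEATURES, 256, batch_first=True)
        self.out = nn.Linear(256, VOCAB)   # one score per vocabulary word
    def forward(self, x):
        h, _ = self.rnn(x)                 # (batch, time, 256)
        return self.out(h)                 # (batch, time, VOCAB)

activity = torch.randn(1, 40, FEATURES)    # fake recording, 40 time steps
logits = BrainToWords()(activity)
words = logits.argmax(dim=-1)              # predicted word index per step
print(words.shape)                         # torch.Size([1, 40])
```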

Prosthetic voice:

Brian Pasley at the University of California, Berkeley has said, “If a pianist was watching a piano being played on TV with the sound off, he would still be able to work out what the music sounded like because he knows what key plays what note.” His research team has succeeded in decoding electrical activity in the auditory system of the brain (the temporal lobe). The challenge was to decipher the brain's internal verbalization; on top of that, working out how the brain converts speech into meaningful information is tough. The basic concept is that sensory neurons are activated by sound and disseminate the information to different areas of the brain, where the sounds are extracted and perceived as language. That information consists of frequency and the rhythm and fluctuation of syllables. There is a neurological relationship between self-generated voice (imagined sound) and heard sound, and by understanding the relationship between the two, the sound you are actually thinking of can be synthesized. Using this concept, scientists have invented a prosthetic voice that decodes the brain's vocal intentions and translates them into natural speech, without the person moving a single facial muscle.
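To give a flavor of the decoding step, here is a minimal Python sketch, not the Berkeley team's actual model: a random, untrained linear decoder maps neural features to a magnitude spectrogram frame by frame, and Griffin-Lim inversion turns that spectrogram back into a waveform. All shapes, the linear decoder, and the use of librosa are assumptions for illustration.

```python
import numpy as np
import librosa

FEATURES = 64        # neural features per time step (assumed)
FREQ_BINS = 513      # spectrogram bins for an n_fft of 1024
frames = 200

# Stand-ins for recorded neural activity and a learned linear decoder.
neural = np.random.randn(frames, FEATURES)
decoder = np.random.randn(FEATURES, FREQ_BINS) * 0.01

# Predict a magnitude spectrogram, then reconstruct audio from it.
spectrogram = np.abs(neural @ decoder).T            # shape (freq, time)
audio = librosa.griffinlim(spectrogram, n_fft=1024) # phase recovered iteratively
print(audio.shape)
```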

Art and science collaboration:

There is a saying that all good science is art, and all good art is science. In 1965, the famed physicist Edmond Dewan and the composer Alvin Lucier collaborated to make music of the mind from alpha brainwaves. Lucier was fascinated by natural sound and wanted to produce music without voices or musical instruments. In the meantime, Dewan had astounded the world by turning a lamp on and off with his mind, with the help of an EEG. When the two met and Dewan proposed that Lucier make music from his mind, Lucier agreed on the spot. Percussion instruments were chosen, and an EEG-based method was designed to capture the brain's alpha activity and transmit it as music. Lucier performed this piece of art and science throughout Europe and the United States.

However, the field is still brand new, and there are many more inventions to come. Can the human mind connect directly to artificial intelligence without any limitations? Can our dreams be interpreted? The human mind is a mysterious entity. Let the mystery be revealed by artificial intelligence.
