AI is Reading Minds

Is this the beginning of a dystopia?

Ahmet E. Sarac
Predict
4 min read · Feb 21, 2024


Generated with AI

Is reading minds just a magic trick, something only possible in sci-fi movies? Surely it's impossible to put a device on somebody's head and see what they're thinking? You may be wrong.

Meta accomplished it with a system built on its DINOv2 vision model. It's not perfect, and not completely new, but it's nonetheless a big milestone. When I first heard about this, I was thrilled. But after discussing it with my family, possibilities of misuse and dystopian scenarios started playing in my head. Can we harness this technology for good without jeopardizing privacy and free will?

The science behind it

How does this model actually “read minds”?

To gather training data, participants were shown pictures while their brain activity was measured using magnetoencephalography, or MEG for short. Think of it as a big helmet that registers the magnetic fields generated by brain activity.

An example of a modern magnetoencephalography scanner (Source: NIMH)

By feeding this data to the AI, researchers could train it to understand how our brains react to visuals. The truly mind-blowing part? The AI then learned to “translate” brain activity back into images, essentially reconstructing what the person was seeing.
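To make the idea concrete, here is a toy sketch of decoding-as-retrieval: a trained encoder turns MEG activity into a vector in the same embedding space as the images, and the decoded image is simply the best match. Everything here (sizes, the fake "brain embedding") is an illustrative assumption, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 5 candidate images, 768-dimensional embeddings
# (roughly the scale of DINOv2-style image embeddings).
n_images, embed_dim = 5, 768

# Stand-ins for the embeddings of the candidate images.
image_embeddings = rng.standard_normal((n_images, embed_dim))

# Stand-in for a trained MEG encoder's output: a vector in the same
# embedding space. Here we fake it as "image 3's embedding plus noise",
# mimicking a good-but-imperfect readout of brain activity.
brain_embedding = image_embeddings[3] + 0.1 * rng.standard_normal(embed_dim)

def cosine_similarity(a, b):
    """Similarity between two vectors, ignoring their magnitudes."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Decoding as retrieval: pick the image whose embedding best matches
# the brain-derived embedding.
scores = [cosine_similarity(brain_embedding, e) for e in image_embeddings]
decoded = int(np.argmax(scores))
print(decoded)  # 3: even with noise, the brain embedding matches image 3 best
```

The real system goes one step further and feeds the brain-derived embedding into an image generator to reconstruct a picture, rather than just retrieving one, but the matching principle is the same.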

Not the first of its kind

Wasn’t there image decoding before? Yeah, this isn’t completely new to the scientific community. Researchers were already able to decode brain activity, measured by fMRI, into images. But this method has a huge limitation.

The problem with fMRI is the following: it measures changes in the brain's blood supply, which increases when the corresponding brain areas are activated. As a result, a snapshot of the brain is taken only about every two seconds, making the output more of a slideshow than a movie scene.

With MEG, it is possible to take thousands of snapshots every second, several thousand times more than fMRI. This may open the door to real-world use cases, like controlling prosthetics by thought.
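A quick back-of-the-envelope check makes the gap vivid. The exact rates vary by scanner, so the figures below are ballpark assumptions, not measurements from the paper:

```python
# Rough temporal-resolution comparison (illustrative figures).
fmri_snapshots_per_second = 1 / 2   # one brain volume roughly every 2 seconds
meg_snapshots_per_second = 1000     # MEG systems often sample at ~1 kHz or more

ratio = meg_snapshots_per_second / fmri_snapshots_per_second
print(f"MEG takes roughly {ratio:.0f}x more snapshots per second than fMRI")
```

At these assumed rates the ratio comes out to about 2,000x; with faster MEG sampling it climbs higher still.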

Still in its infancy

The technology is not perfect yet. While its temporal resolution is excellent, it lacks spatial resolution: MEG measures with only a few hundred sensors. That is a big downgrade, considering fMRI can discriminate between thousands of voxels (three-dimensional pixels).

The consequence is images that lack detail. The authors of the paper found "multiple generated images sharing the correct ground-truth category"¹, but low-level details could not be decoded. This calls the technology's potential into question.

These results were also obtained in a laboratory setting. What about more realistic scenarios, like decoding imagination? It seems the method can't deliver satisfactory results when the image is not actually being perceived. On top of that, distractions are detrimental to performance. From today's point of view, the technology is still at a really early stage of development.

Possible game changer

How can we take advantage of such a powerful technology in the future? The first use case is thought to be a means of communication for individuals with brain injuries. Powerful brain-computer interfaces could be created, which may significantly increase quality of life for disabled people: think controlling prosthetic limbs with the mind or communicating thoughts directly.

Teachers and leaders could use it to quickly share ideas with colleagues and deliver their points more effectively. It would be like drawing, without being restricted by artistic talent. Artists, in turn, could express themselves more freely than their technical skills currently allow.

Whatever it is used for, measures must be taken to protect mental privacy. Otherwise, the possibilities of misuse are endless, and I hope we don't end up in a dystopia where freedom is reduced to less than freedom of thought.

There's no doubt Meta has achieved something amazing. The current state of the art may not be practical enough yet, but I think it's a big milestone in a really exciting field of study. At the same time, the possibilities of abuse are not negligible. This isn't science fiction anymore; it's a conversation we need to have today.

REFERENCES

1: "Brain decoding: toward real-time reconstruction of visual perception" (Meta AI, 2023)

Enjoyed this article? Share with your friends and follow me for more!


Ahmet E. Sarac

Med student, Muslim | I love learning new things and sharing it with the world. | ahmeterensarac.com