Unlocking the Mind: Safeguarding Mental Privacy in a Technological Era

Jessica Nedry
Engineering WRIT340
March 10, 2024

The University of Southern California

jessicanedry@gmail.com

Abstract:

This article examines the implications of emerging mind-reading technologies for cognitive privacy. It challenges the widely held assumption of mental invulnerability and considers the caution that should be exercised as technologies capable of deciphering thoughts are put to use. From brain-pattern analysis to semantic decoders, the piece surveys recent advancements, providing insight into the rapid progress that has been made in decoding conscious and subconscious mental processes. Highlighting the convergence of artificial intelligence and neuroimaging, the paper raises concerns about how traditional consent processes could be bypassed when these technologies are deployed on individuals. Drawing on cautionary tales from popular media, the piece urges heightened public awareness of the topic. It advocates for a balance between innovation and privacy while equipping readers with the knowledge necessary to navigate the intricate landscape of mental privacy in a world shaped by advancing technologies.

Jessica Nedry is a senior studying neuroscience at the University of Southern California. Her interests lie in understanding the complexities of the mind, specifically exploring the intersection of neuroscience and technology.

Keywords: mind-reading, mental privacy, neurotechnology, artificial intelligence, neuroscience

The human brain is a fascinating thing. It is the organ that enables us to think and to embody conscious existence. When functioning properly, it grants us the ability to engage with reality and navigate the world. It is the cornerstone upon which we build our very existence, defining our identity and shaping who we are. Within its confines lie our thoughts, feelings, memories, secrets, and dreams, uniquely ours, shielded from external manipulation or intrusion. These are things we consider safe within the fortress of our minds, things we will never have to share unless we choose to. Technologies that could expose these innermost aspects, we assume, exist only in Hollywood, in far-fetched sci-fi movies and books that are in no way representative of the real world. But what if this notion of privacy is not as impenetrable as we believe?

In our world today, concerns about privacy span various domains, from data security to identity protection to financial confidentiality. Yet there is a critical oversight that needs to be acknowledged: mental privacy. This article seeks to challenge the perceived untouchability of our private cognitive space and to investigate the implications of current and future “mind-reading” technologies for cognitive confidentiality. As we navigate brain-pattern analysis, semantic decoders, brain pacemakers, and neural networks, we must confront the limitations and ethical considerations inherent in mind-reading technologies in order to understand what a balance between innovation and privacy would look like. More importantly, it is necessary to understand how these technologies actually work and where they are applied, both to prepare for the inevitable moment when individuals attempt to misuse them and to recognize how easily consent for these processes could be bypassed. This understanding should encourage readers to guard themselves against potential unwanted intrusions. Ultimately, the goal of this article is to inform individuals so that they can proactively safeguard their cognitive privacy, while hopefully promoting the responsible development and deployment of these technologies.

Let’s consider what most of us know about “mind-reading” technologies, predominantly drawn from the domain of popular media. “Inception”, directed by Christopher Nolan, vividly illustrates how the mind can so easily become susceptible to manipulation once the seemingly impassable walls between dreams and reality are breached. In the anthology series “Black Mirror”, the episode “Crocodile” is set in a world where memories can be extracted and viewed. The unsettling narrative follows characters who must deal with the dire consequences of having their deepest thoughts laid bare. The iconic film “The Matrix” takes us into a simulated reality, presenting the chilling concept of manipulating human perceptions and thoughts without the individual’s awareness. These media pieces offer distinct perspectives on the concept of mind-reading and mind-controlling technologies and their potential applications. Nevertheless, their narratives consistently underscore the numerous negative consequences that would result from deploying these technologies on unsuspecting minds. In this context, Hollywood serves as an unmistakable warning, vividly showing us what these technologies could progress to and the repercussions that could occur when they fall into the wrong hands. These cautionary tales serve as a catalyst, encouraging a more thorough examination of the current landscape of real-world mind-reading technologies. They stress the crucial importance of public awareness regarding these technologies so that individuals can protect themselves from scenarios similar to those portrayed in the movies.

To begin our exploration, we will first discuss the remarkable advances made over the last few decades in using machines to decipher thoughts. As will become obvious, contemporary technologies are still a far cry from their cinematic counterparts. However, this information will highlight the rapid progress that research has made in this timeframe, reinforcing the legitimacy of concerns that these technologies are approaching the capabilities depicted in movies. This discussion will also lay out how these technologies currently work, giving the reader enough knowledge to understand their existing functionality and how to protect themselves against possible misuse.

One man at the forefront of this technological frontier is Jack Gallant, a neuroscientist at the University of California, Berkeley. Over the past 15 years, Gallant has led a pioneering neuroscience and psychology lab working to unravel the visual perceptual processes of the mind. In 2011, Gallant and Shinji Nishimoto conducted a revolutionary study. Their method involved showing individuals images and movies while simultaneously scanning their brains with functional magnetic resonance imaging (fMRI), a technique that reveals which areas of the brain are active during a behavior or cognitive process. Subjects watched two separate sets of Hollywood movie trailers while fMRI measured blood flow through the visual cortex. The recorded brain activity was fed into a computer program that learned, second by second, to associate visual patterns in the movie with corresponding brain activity. Employing brain-pattern analysis and sophisticated computer algorithms, the researchers used the fMRI scans to construct a comprehensive model of each subject’s visual system. With this model in hand, they had the subjects watch an entirely new movie and were able, remarkably, to reproduce the images the subject had seen (Nishimoto et al., 2011). Essentially, Gallant’s lab had succeeded in extracting images directly from a person’s mind.
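
To make the logic of this approach concrete, here is a minimal sketch in Python with synthetic data of the general encode-then-decode idea: fit a model that predicts voxel responses from movie features, then identify which candidate clip best explains newly observed activity. The feature sizes, the ridge-regression model, and the clip library are illustrative assumptions, not the lab’s actual pipeline.

```python
# A minimal sketch (synthetic data, not the Gallant lab's pipeline) of the
# encode-then-decode idea behind Nishimoto et al. (2011): learn to predict
# voxel responses from movie features, then rank candidate clips by how well
# their predicted activity matches newly observed activity.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy data: n_train one-second movie segments, each summarized by n_features
# visual features (stand-ins for the study's motion-energy features), and the
# BOLD response each evoked across n_voxels visual-cortex voxels.
n_train, n_features, n_voxels = 500, 64, 200
train_features = rng.standard_normal((n_train, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
train_bold = train_features @ true_weights + 0.5 * rng.standard_normal((n_train, n_voxels))

# 1) Encoding model: predict each voxel's response from the movie features.
encoder = Ridge(alpha=10.0).fit(train_features, train_bold)

# 2) Decoding by model inversion: given activity recorded while the subject
# watches a *new* clip, score a library of candidate clips by how closely
# their predicted voxel pattern correlates with the observed one.
clip_library = rng.standard_normal((100, n_features))
true_clip = 42
observed_bold = clip_library[true_clip] @ true_weights + 0.5 * rng.standard_normal(n_voxels)

predicted_bold = encoder.predict(clip_library)          # shape: (100, n_voxels)
scores = [np.corrcoef(pred, observed_bold)[0, 1] for pred in predicted_bold]
print(f"best-matching clip: {int(np.argmax(scores))} (true clip: {true_clip})")
```

Roughly speaking, the published work scaled this identification step up to an enormous library of natural video and blended the best matches, which is what yielded blurry but recognizable reconstructions rather than a single guessed clip.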

To take this a few steps further, in 2023 researchers developed a new artificial intelligence system called the semantic decoder. Utilizing fMRI, this brain-computer interface can not only translate a person’s thoughts into strings of coherent sentences, but can also decode the very essence of the thought itself. When a participant heard the words “I don’t have my driver’s license yet”, the decoder rendered this as, “She has not even started to learn to drive yet” (Tang et al., 2023). This system can capture the thoughts and intentions of an individual without any direct communication or explicit expression. If this doesn’t qualify as modern-day “mind-reading”, I’m unsure what would. The practical applications of this technology hold great promise for understanding the inner thoughts of individuals who cannot communicate verbally, including stroke victims, coma patients, and those with neurodegenerative diseases. However, the results of this particular study were so profound that its authors felt it necessary to issue a warning regarding “mental privacy”.
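
As a rough intuition for why such a decoder outputs a paraphrase rather than a transcript, consider the toy sketch below: candidate sentences are compared to a decoded representation in a semantic feature space, so the closest match captures the gist rather than the exact words. The hand-made features, the candidate list, and the decoded vector are all hypothetical stand-ins; the published system instead pairs an fMRI encoding model with a generative language model.

```python
# A toy illustration (not the Tang et al. system) of why a semantic decoder
# returns a paraphrase rather than a transcript: candidates are compared in a
# semantic feature space, so the closest match captures the gist, not the
# exact wording. The hand-made features below are purely hypothetical.

import numpy as np

# Hypothetical semantic features: [about driving, about weather, negation, ability]
candidates = {
    "She has not even started to learn to drive yet": np.array([1.0, 0.0, 1.0, 1.0]),
    "He loves driving on the open highway":           np.array([1.0, 0.0, 0.0, 0.0]),
    "It rained for most of the afternoon":            np.array([0.0, 1.0, 0.0, 0.0]),
}

# Pretend a trained decoder inferred this feature vector from brain activity
# while the participant heard "I don't have my driver's license yet".
decoded = np.array([1.0, 0.0, 1.0, 1.0])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(candidates, key=lambda s: cosine(decoded, candidates[s]))
print("decoded as:", best)
```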

Building on this work, researchers have extended fMRI from probing our minds while we are awake to probing them while we are asleep. The premise of “Inception” hinges on exploiting the vulnerable state of a sleeping person and using that vulnerability to manipulate the subconscious mind.

In a groundbreaking study, Japanese scientists from Kyoto, led by Professor Yukiyasu Kamitani of the ATR Laboratories, revealed the images people visualize during the initial stages of sleep. In their approach, three participants underwent fMRI scans as they fell asleep and were instructed to describe their dream images upon brief awakenings. Over 200 iterations were conducted for each individual, resulting in a comprehensive database of dream images, which were then sorted into broader visual categories. Using a novel neural decoding approach, the researchers trained machine-learning models on stimulus-induced brain activity in visual cortical areas while participants were awake. During subsequent sleep sessions, the researchers identified the general category of images appearing in participants’ dreams with 60% accuracy (Horikawa et al., 2013). These results showcased the model’s proficiency in classifying, detecting, and identifying dream content. Overall, this breakthrough suggests that it might be possible to uncover the subjective contents of dreaming through neural measurements.
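
The core trick, training on waking perception and testing during sleep, can be sketched in a few lines. The toy below uses fully synthetic “voxel” data and an off-the-shelf linear classifier, so it shows only the shape of the analysis, not the study’s actual methods.

```python
# A schematic toy (synthetic data, not the ATR lab's pipeline) of the
# "train while awake, test while asleep" logic in Horikawa et al. (2013):
# fit a classifier on brain responses to viewed image categories, then apply
# it to activity recorded just before awakening to guess the dreamed category.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels, n_categories, n_awake, n_sleep = 300, 4, 80, 40

# Assume each visual category (e.g. "person", "building") evokes a
# characteristic voxel pattern, reused more noisily during sleep onset.
category_patterns = rng.standard_normal((n_categories, n_voxels))

def simulate(n_trials, noise):
    labels = rng.integers(0, n_categories, n_trials)
    data = category_patterns[labels] + noise * rng.standard_normal((n_trials, n_voxels))
    return data, labels

awake_data, awake_labels = simulate(n_awake, noise=1.0)   # viewing images
sleep_data, sleep_labels = simulate(n_sleep, noise=2.5)   # dreaming at sleep onset

decoder = LogisticRegression(max_iter=2000).fit(awake_data, awake_labels)
accuracy = (decoder.predict(sleep_data) == sleep_labels).mean()
print(f"dream-category decoding accuracy: {accuracy:.0%} (chance: {1 / n_categories:.0%})")
```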

To give one more example of the power of fMRI-based brain-pattern analysis, researchers have found that they can leverage this technology to predict human actions before they unfold. A 2019 study challenges the commonly held belief that we have full control and autonomy over our personal choices. Conducted in the Future Minds Lab at the UNSW School of Psychology, the experiment had participants freely choose between visual patterns of red and green stripes and then consciously imagine their chosen pattern. They were then asked to rate how vivid their visualization of the pattern felt. Both tasks were completed while brain activity was recorded with fMRI. With the assistance of machine learning, researchers could predict participants’ choices, and the strength of their visualizations, up to 11 seconds before the participants were consciously aware of deciding (Koenig-Robert & Pearson, 2019).
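
A sketch of that kind of analysis, using entirely synthetic data, is shown below: trial-by-trial brain activity sampled well before the reported decision carries a faint trace of the eventual choice, and a cross-validated classifier recovers it. The signal strength, voxel count, and choice of model are assumptions made only so the example runs end to end.

```python
# A toy illustration (synthetic data, not the UNSW analysis) of decoding an
# upcoming binary choice (red vs. green pattern) from brain activity recorded
# before the participant reports having decided.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 150

choices = rng.integers(0, 2, n_trials)          # 0 = red stripes, 1 = green stripes
choice_pattern = rng.standard_normal(n_voxels)  # pattern assumed to precede the choice

# Activity sampled ~11 seconds before the reported decision: mostly noise,
# plus a faint trace of the eventual choice.
early_activity = (
    0.4 * np.outer(2 * choices - 1, choice_pattern)
    + rng.standard_normal((n_trials, n_voxels))
)

decoder = LogisticRegression(max_iter=2000)
scores = cross_val_score(decoder, early_activity, choices, cv=5)
print(f"predicting the choice ahead of awareness: {scores.mean():.0%} accuracy (chance: 50%)")
```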

This study is the first to capture the content of involuntary visual thoughts, offering insights into their potential influence on subsequent “chosen” conscious imagery. These findings challenge our assumptions about the extent of control we believe we have over our personal mental visualizations. This prompts the question: If machines can anticipate our decisions before we consciously make them, would it be possible for these machines to subconsciously alter or control our choices?

While our existing mind-reading technology may seem basic in comparison to its cinematic portrayals, the decision by the authors of the semantic-decoder study to issue a warning underscores the potential for this technology to spiral out of control. In the span of a decade, we went from discerning basic images, to extracting entire sentences, to predicting and revealing the thoughts of the subconscious mind, an extremely rapid evolution.

Presently, this capability is confined to the use of an fMRI scan, a process that involves an individual lying on a table and being slid into a long tubular machine, something that would be quite difficult to do without the individual’s knowledge. At the same time, fMRI scans are already used for a variety of purposes, such as assessing the aftermath of strokes or other diseases and guiding brain treatments, so many people willingly undergo them. This current reliance on fMRI highlights the need to exercise caution when subjecting oneself to the technology, given the potential implications as mind-reading methods progress.

Furthermore, at this point in our exploration, it is evident that much of the progress in “mind-reading” technologies is attributable to advancements in AI. As we know, artificial intelligence has already contributed significantly to predictive analysis; companies actively use it to forecast demand, anticipate market shifts, and optimize advertising strategies. Beyond commerce, predicting human behavior stands as a central goal of AI research. Recently, scientists have even used AI to estimate an individual’s likelihood of engaging in criminal behavior (Rigano, 2018).

While these innovations hold great potential, it is important to acknowledge the unnerving dimensions of AI being used to decipher our thoughts and predict our actions. Will government agencies begin to target individuals based on their neurobiological patterns? If this technology falls into the wrong hands, the consequences could extend to unwarranted surveillance, false accusations, and complete violations of privacy and individual rights.

Scientists argue that these advancements are acceptable because, they claim, the technology could never be used on an individual without their knowledge or consent. However, “never” is a weighty term. I am sure that a century ago, scientists would have said it would never be possible to build a machine that could image the brain, much less decode someone’s thoughts. As previously noted, these processes currently require a human to undergo an fMRI scan, a substantial procedure that demands considerable cooperation. But given this historical pattern of progression, I believe there is a high chance that, aided by AI, this technology could advance to the point of being applied to an individual without the need for an fMRI, making it far easier to use without that individual’s consent. My purpose in raising this point is to prompt individuals not to rely solely on the assurance that mind-reading technologies will always be overt and obvious.

Finally, to underscore the dangers of bypassing consent, we must consider the plausible reality of “inception”: the ability to implant an idea in someone’s mind without their awareness. Presently, we can introduce basic signals into the brain using a process known as deep brain stimulation (DBS). This is a therapeutic procedure in which electrodes are placed within specific regions of the brain and stimulated by a pacemaker-like device implanted under the skin.

The unnerving aspect of this technology lies in its simplicity: with just a few clicks, a physician can adjust the level of stimulation of an electrode in the left hemisphere, prompting a patient’s right foot to involuntarily shake. A subsequent button press swiftly restores the right foot to a stable position. This raises questions about the boundary between “altering brain function” and “mind control”.
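
To appreciate just how thin that boundary is, consider a purely hypothetical sketch of what a stimulation-adjustment routine might look like in software. Nothing here corresponds to a real DBS programmer’s interface; the point is that the “control” amounts to a few numeric parameters, and the only thing standing between a clinical adjustment and an unauthorized one is a policy check that code can just as easily skip.

```python
# A purely hypothetical sketch (no real DBS programmer exposes this interface)
# of how little separates a routine clinical adjustment from an unauthorized
# one: the "control" lives in a handful of numbers, and the only guard is a
# policy check that software could just as easily omit.

from dataclasses import dataclass

@dataclass
class StimulationSettings:
    amplitude_ma: float      # stimulation amplitude, milliamps
    frequency_hz: float      # pulse frequency
    pulse_width_us: float    # pulse width, microseconds

def adjust_amplitude(settings: StimulationSettings, new_ma: float,
                     patient_consented: bool) -> StimulationSettings:
    """Change stimulation amplitude only if consent is on record."""
    if not patient_consented:
        raise PermissionError("No documented consent for this adjustment.")
    if not 0.0 <= new_ma <= 5.0:   # arbitrary illustrative safety bound
        raise ValueError("Amplitude outside allowed range.")
    return StimulationSettings(new_ma, settings.frequency_hz, settings.pulse_width_us)

current = StimulationSettings(amplitude_ma=2.0, frequency_hz=130.0, pulse_width_us=60.0)
updated = adjust_amplitude(current, new_ma=3.5, patient_consented=True)
print(updated)
```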

One analysis aims to dig deeper into this distinction, defining specific criteria that would qualify a behavior as mind control. Essentially, “mind control must alter the patient’s behavior in an observable way with the subject’s consent and must be enacted for that purpose” (Koivuniemi & Otto, 2014). This particular analysis argues that DBS is not an example of mind control, as every patient undergoing treatment consents to the intended behavioral changes. However, this analysis fails to consider the actions of individuals who may not adhere to ethical guidelines, whether unintentionally or maliciously. In scenarios where a physician decides to act independently or fails to obtain proper patient consent, there is a high risk of exploiting mind control capabilities. Additionally, the analysis overlooks the possibility of unauthorized individuals gaining access to this technology. In a world where corrupt people exist, it is undeniable that some would exploit mind control capabilities if given the chance.

In conclusion, the current trajectory of mind-reading technologies suggests that they could easily become as powerful as their cinematic counterparts. Our advancements have allowed us to develop machines capable of revealing both conscious and subconscious thoughts, even hinting at the prospect of altering them. The existing dependence on fMRI scans and the necessity of consent highlight the need for readers to approach this technology with careful consideration. Nevertheless, the rapid evolution of AI raises concerns that these processes may no longer require bulky machinery, making it possible to bypass traditional consent procedures. Overall, this article aims to convey why we should be concerned about breaches of mental privacy in the future. It is my hope that this piece not only raises awareness about present and future threats, but also empowers readers to adopt a cautious approach to safeguarding their personal thoughts as AI continues to advance.

Works Cited

Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340(6132), 639–642. https://doi.org/10.1126/science.1234330

Koenig-Robert, R., & Pearson, J. (2019). Decoding the contents and strength of imagery before volitional engagement. Scientific Reports, 9(1). https://doi.org/10.1038/s41598-019-39813-y

Koivuniemi, A., & Otto, K. (2014). When “altering brain function” becomes “mind control.” Frontiers in Systems Neuroscience, 8. https://doi.org/10.3389/fnsys.2014.00202

Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641–1646. https://doi.org/10.1016/j.cub.2011.08.031

Rigano, C. (2018, October 8). Using artificial intelligence to address criminal justice needs. National Institute of Justice. https://nij.ojp.gov/topics/articles/using-artificial-intelligence-address-criminal-justice-needs

Tang, J., LeBel, A., Jain, S., & Huth, A. G. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 26(5), 858–866. https://doi.org/10.1038/s41593-023-01304-9
