How Neuroscience Will Shape the Metaverse

Media technology companies will decrypt the brain’s spatial memory networks with advanced AI to generalize machine intelligence

Joshua Sariñana
9 min read · Jun 20, 2022

In 2005, neuroscientists in a laboratory basement at Stanford transformed their field by effectively digitizing brain network manipulation. The following year, Facebook, a quick bike ride down the road, would open its platform to the public and later wield AI technologies to sway its massive social network. Neuroscientists and computer scientists would soon converge their network-wielding tools to manipulate reality as we once knew it.

Social media companies have leveraged what neuroscientists have learned about penetrating the brain’s networks to fragment our concentration, precipitate addiction, depression, and anxiety, and manipulate social behavior with misinformation, as seen in political uprisings. Ironically, the “better connectivity” offered by media technologies compromises the brain’s own communication mechanisms, and it is this disruption that gives rise to these issues.

In the near future, spaces like Facebook’s — that is, Meta’s — version of augmented reality, the Metaverse, will create an immersive barrier between the user and the natural world by mimicking and extracting content from the brain’s spatial memory networks. As you’ll see below, this tactic has already been massively successful when used against the dopamine reward system.

Social networks will be able to exploit the findings made through neuroscientists’ ability to infiltrate memory systems, and develop new forecasting algorithms that go beyond the reward-based methods used today.

If research from DeepMind, which was acquired by Google several years ago, is any indicator, then neuroscience-inspired algorithms that combine spatial memory and dopamine reward prediction may be the future of social media.

With such human cognition and behavior models, AI technologies can become more generalized in their predictive and problem-solving capabilities by using the massive data sets gathered through the Metaverse.

The conversation between AI and neuroscience research is as old as AI itself, and the two fields have worked off each other’s ideas for decades. Both build on the same fact: we use the past to better predict the future. To understand how our memory systems will be tapped into, it’s necessary to see how computations of the brain’s reward system were translated into algorithms, a translation that began over 50 years ago and now drives social media behavior.

Algorithms Bridged the Brain–Behavior Divide

Computer scientists and neuroscientists have a long and successful history of modeling reward-focused learning. Media technology companies train their algorithms on these models to predict their users’ actions.

Think of Pavlov’s dog. Ring a bell and pair it with food, and the dog will learn that food comes whenever it hears the bell. It will also drool all over the place. Or, if you’re a fan of The Office (the U.S. adaptation), you may remember Dwight holding out his hand after Jim repeatedly gives him a mint each time his computer’s operating system chimes on shutdown. These are examples of reinforcement learning (RL).

RL models, which are based on behavioral neuroscience studies, were first developed in the 1970s, in part to address the gap between research on the brain and research on behavior. The technology of the time could not show how brain network activity produced learning. Over time, these models were elaborated upon. By the ’80s, a significant improvement accounted for real-time learning and prediction (i.e., temporal difference learning), which paved the way for the algorithms that AI technologies now take advantage of.
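The temporal difference idea can be made concrete with a short sketch. This is a minimal, illustrative TD(0) learner (all state numbers and parameters are mine, not any production algorithm): a state’s value is nudged in real time toward the reward just received plus the discounted value of the state that followed.

```python
# Minimal TD(0) value learning: update a state's value toward the
# observed reward plus the discounted value of the next state.
# All names and numbers here are illustrative.

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One real-time TD(0) step; returns the prediction error."""
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

# Toy episode repeated 100 times: states 0 -> 1 -> 2,
# with a reward of 1 only on the final transition.
V = [0.0, 0.0, 0.0]
for _ in range(100):
    td_update(V, 0, 0.0, 1)   # no reward yet
    td_update(V, 1, 1.0, 2)   # reward arrives here

print(V)  # V[1] nears 1.0 and V[0] nears gamma * V[1] = 0.9
```

No global model of the task is needed; each update uses only what just happened, which is exactly what made this family of algorithms practical at scale.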

As neuroscience technology evolved and brain activity was further linked to behavior, neuroscientists found neural networks that matched the computational components of these RL models. The system that showed a direct overlap was the dopaminergic reward prediction network.

In the late ’90s, computational neuroscientists discovered that when an unexpected reward is encountered, dopamine-producing neurons deep within the brain fire nerve impulses to release the molecule, much of it into a brain region responsible for motor control of the body and habit formation. Remember Dwight reaching his hand out.

However, when the dopamine network is overstimulated, so too is the motor control and habit formation part of the brain (i.e., the basal ganglia). When a cue, like a bell or a computer chime, predicts a reward, the brain learns to release dopamine to the cue rather than to the reward itself. It’s not hard to see how your phone, an app icon, or a notification (or even boredom) gets you to habitually check your phone because there might be some unexpected comment, heart, or video. In fact, there’s a link between dopamine and social app use.

Media technologies manipulate the brain’s dopaminergic system to get people to habitually respond to cues for engagement. Researchers have quantitatively validated that these RL models are reflected in the behavior of social media users. People strategize their posts to maximize their digital treats, that is, their likes.

RL-based algorithms increase engagement through the uncertainty of social reward. Though the direct effects of social media on mental health are still being researched, neuroscience is certain that when high stress is mixed with uncertainty, there is a greater chance of mental health issues.

Watching people being killed in your News Feed, next to a highly filtered image of impossible beauty standards, witnessing the U.S. Capitol riot and attack, scrolling the barrage of return-to-work memes, the scarily precise ads to purchase toothpaste when you visit your mom out of town, and the collapse of the supply chain due, in part, to U.S. over-consumption and the pandemic — all these things we engage with fuel our dopamine’s uncertainty engine.

Still, these RL models are limited; not all human behavior and learning can be explained or predicted by reward alone. For some time, researchers (including myself) have shown that dopamine is also released in response to stress and to anything in the environment that is unexpected or stands out, and that it is important for navigation-based memory (something I’ve also studied).

To more fully understand how we learn and use the past to imagine potential futures, we must consider our environmental context, not just the pursuit of digital approval. By studying the brain regions that underlie these processes, computer scientists and neuroscientists can extend their algorithms beyond a rewards-based approach and better capture human–media interactions. Google is conducting exactly this kind of research.

The Artificial∞Human Intelligence Decryption Loop

The Metaverse is creating a new context for users to enmesh themselves in. RL models have successfully bridged brain and behavior for 2D media. However, mimicking the brain functions that create contextual space will be pivotal in guiding future AI to match the symbiotic complexity between humans and machines in a 3D space.

Neuroscientists have shown that the brain regions encoding the fundamental components of experiential memory — that is, context and spatial navigation — are essential for quickly updating new information with prior knowledge and generalizing problem solving — critical elements in human intelligence that are missing in AI.

There are many types of memories and different brain regions that support their various forms. As discussed above, the basal ganglia are essential for motor control and habits, called procedural memory.

Another brain region, called the hippocampus, builds our perception of context (e.g., the room you’re sitting in right now, the train you’re riding on, the queue you’re standing in) and is critical for our ability to generate spatial maps — a GPS of the mind if you will. We navigate our contexts to build a mental map as we encounter new experiences. Through these processes, the hippocampus quickly integrates new information into its network and builds future models of the world (i.e., imagination).

Envision walking through a museum. The hippocampus merges the visual-spatial modules of the white walls, the open floor space, the social encounters between people, and the paintings, unifying them into the context you’re immersed in. Your emotions attach to visual cues as you travel through the museum. Later, when you see or hear something that reminds you of this holistic event, you can rapidly incorporate novel data into your hippocampal network.

Neuroscientists have thoroughly analyzed the hippocampus and its sub-networks, quantified the amount of spatial information each nerve impulse carries (~1.8 bits), shown how these bits build the mind’s GPS system, and mapped how visual cues on walls or objects are tied to contexts. These nerve impulses are thought to “replay” backward in time when we think about the past (i.e., remember), play forward when we imagine the future, and aid in abstracting general rules about the world.

Google’s DeepMind co-founder has written extensively on the need to combine neuroscience and AI. His company is actively researching the hippocampus, and using algorithms to figure out how it quickly pulls new information into its memory network. They have trained their RL models to simulate virtual navigation, and have combined spatial memory and RL models to show how the hippocampus predicts the future. Their research has also looked at “replay” in generalizing knowledge.

RL-based algorithms are slow to learn and narrow in their problem-solving ability, but they can outcompete humans in particular tasks. However, in developing navigational models from the hippocampus and combining them with RL-based algorithms, AI could become more generalized and be incorporated into the Metaverse software.
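One concrete proposal from that line of work is a “predictive map”: each place in the hippocampal map stores the discounted expected occupancy of future places (a successor representation), so reward values can be read out with a single matrix product instead of slow trial-and-error. The sketch below is illustrative, assuming a tiny linear track and a fixed run-right policy; it is not DeepMind’s actual code.

```python
import numpy as np

# Successor representation (SR) on a toy 5-place linear track with a
# fixed "always move right" policy. M[s, s'] is the expected
# discounted number of future visits to s' starting from s, and
# values follow from V = M @ R. Illustrative numbers throughout.

gamma = 0.9
n = 5

T = np.zeros((n, n))          # the policy's transition matrix
for s in range(n - 1):
    T[s, s + 1] = 1.0
T[-1, -1] = 1.0               # the last place absorbs

M = np.linalg.inv(np.eye(n) - gamma * T)   # closed-form SR

R = np.zeros(n)
R[-1] = 1.0                   # reward at the end of the track

V = M @ R                     # the predictive map turns into values
print(V)  # values rise smoothly toward the rewarded place
```

Because the map (M) is learned separately from the rewards (R), moving the reward only changes R; values update in one step, which is the kind of fast, generalizable re-learning that pure RL lacks.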

Deep Marketing

Findings from DeepMind and other research entities could result in algorithms that alter the course of media-technology prediction and user interaction. Figuring out where a person may want to navigate to, what they may engage with, and how they decide what to purchase is what marketers want to know when they pay Meta to align users’ attention and actions with their ads.

This type of marketing data is precisely the type of information that would be extracted through the interactions between the user and the Metaverse. Interacting with this augmented reality will require algorithms to quickly adapt to different contexts that users enter, rapidly learn about, and generalize behavior to predict what users will do across different environments.

The GPS of your mind will be turned inside-out and siphoned off along with other biometric data to support digital phenotyping. Digital phenotyping uses sensor information from your smartphone and wearables to capture moment-to-moment behavior to train predictive algorithms. With this biometric data and electronic health records, AI technologies can predict mood and psychiatric states like depression, anxiety, psychosis, and mania.

Already, facial expressions can accurately predict post-traumatic stress disorder (PTSD), depressive symptoms can be pulled from your texting speed, and your brain wave data can be inferred from your smartphone use; all of this information will be sewn into the fabric of your everyday habits.

Although digital phenotyping is an essential advancement within psychiatry for widening care and enhancing therapeutic outcomes, the technology can be used beyond its intended purpose. Corporations will push on emotional pressure points to get you to act the way they want and purchase what they are selling. Add in the potential for live facial recognition and eye-tracking from virtual reality headsets and Elon Musk’s vision for his Neuralink system, and things will get weird.

By combining our spatial memory information, RL reward behavior, and extracted biometric data, these new spatial-memory-RL algorithms will all but drill into the user’s mind and go beyond symbiosis. The Metaverse will act more like a parasite that trains its algorithms on user data, altering the path of human perception and imagination.

Who Controls the Present, Controls the Past

Augmented and virtual reality will profoundly change human interaction, culture, and memory as the simulation of one replaces the biology of the other. As our attention moves from “in real life” through our screen and into the digital world, the Metaverse will act as the virtual neurotransmitter that regulates the trillions of connections between neurons. These connections, called synapses, mediate memory recall.

Meta has deployed RL-based algorithms based on the brain’s dopamine prediction network to maximize user engagement and sell ads. Overstimulating this network negatively affects our memory, contributes to mental health issues, and helps spread fake news.

Neuroscience technologies are more powerful today than in the past and provide computational neuroscientists, such as those at Google’s DeepMind, with data to help develop models and algorithms based on the brain’s hippocampal network. Doing so could increase the learning speed, general intelligence, and problem-solving capabilities of AI to predict user behavior in the Metaverse.

There is a tremendous opportunity for AI to uncover new theories of the human mind that we may never reach on our own. However, given the track record of tech companies like Meta and Google, which have been implicated in swaying election results, providing the NSA with direct server access to spy on Americans, and funding global misinformation, increased access to our minds by such corporations will not end well.

Like AI algorithms, human memory is deeply intertwined with creating models of our potential futures, that is, imagination. If social media companies base their AI technologies on our memory and reward networks to predict and drive our behavior, they may overwhelmingly impact how we access our memories to imagine the future. As George Orwell aptly stated: Who controls the past controls the future.



Joshua Sariñana

Joshua Sariñana is a neuroscientist, artist, and writer. He’s provided expertise for WIRED Magazine and MIT Technology Review. www.joshuasarinana.com