NS/ Neuralink implants brain chip in first human

Paradigm · Published Jan 30, 2024

Neuroscience biweekly vol. 102, 17th January — 31st January

TL;DR

  • The first human patient received an implant from brain-chip startup Neuralink on Sunday and is recovering well, the company’s billionaire founder Elon Musk said. “Initial results show promising neuron spike detection,” Musk said in a post on the social media platform X on Monday.
  • The six anatomical layers of the mammalian brain cortex show distinct patterns of electrical activity that are consistent throughout the entire cortex and across several animal species, including humans, a study has found.
  • A groundbreaking study has unveiled a significant link between anxiety disorders and a brain receptor known as TACR3, as well as testosterone.
  • Think of a time when you had two different but similar experiences in a short period. Maybe you attended two holiday parties in the same week or gave two presentations at work. Shortly afterward, you may find yourself confusing the two, but as time goes on that confusion recedes and you are better able to differentiate between these different experiences. New research reveals that this process occurs on a cellular level, findings that are critical to the understanding and treatment of memory disorders, such as Alzheimer’s disease.
  • A team of physicians, neuroscientists and engineers demonstrated two new strategies that use deep brain stimulation to improve the symptoms of Parkinson’s disease. By simultaneously targeting two key brain structures and using a novel self-adjusting device, the team showed that they could efficiently target and improve disruptive symptoms caused by the movement disorder.
  • A ribbon of brain tissue called cortical gray matter grows thinner in people who go on to develop dementia, and this appears to be an accurate biomarker of the disease five to 10 years before symptoms appear, researchers from The University of Texas Health Science Center at San Antonio reported.
  • A team of scientists has demonstrated that communication among memory-coding neurons — nerve cells in the brain responsible for maintaining working memory — is disrupted with aging and that this can begin in middle age.
  • Electrical deep brain stimulation (DBS) is a well-established method for treating disordered movement in Parkinson’s disease. However, implanting electrodes in a person’s brain is an invasive and imprecise way to stimulate nerve cells. Researchers report a new application for the technique, called magnetogenetics, that uses very small magnets to wirelessly trigger specific, gene-edited nerve cells in the brain. The treatment effectively relieved motor symptoms in mice without damaging surrounding brain tissue.
  • Violinists, surgeons and gamers can benefit from physical exercise both before and after practicing their new skills. The same holds for anyone seeking to improve their fine motor skills.
  • KAIST researchers announced that they have identified, using an artificial neural network model, a principle by which musical instincts can emerge from the human brain without special learning.
  • Recent advances in generative AI help to explain how memories enable us to learn about the world, re-live old experiences and construct new experiences for imagination and planning, according to a new study.

Neuroscience market

The global neuroscience market was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.
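As a quick back-of-the-envelope check of the growth implied by these two figures (a rough illustration assuming only the quoted endpoints, not the market report’s methodology):

```python
# Implied compound annual growth rate (CAGR) from the two quoted endpoints.
start, end, years = 28.4, 38.9, 2027 - 2016
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 2.9% per year
```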

The latest news and research

Elon Musk’s Neuralink implants brain chip in first human

The first human patient received an implant from brain-chip startup Neuralink on Sunday and is recovering well, the company’s billionaire founder Elon Musk said. “Initial results show promising neuron spike detection,” Musk said in a post on the social media platform X on Monday.

Spikes are bursts of electrical activity by neurons, which the National Institutes of Health describes as cells that use electrical and chemical signals to send information around the brain and to the body.
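To make “spike detection” concrete, below is a minimal sketch of the classic threshold-crossing approach to finding spikes in an extracellular voltage trace. This is only an illustration of the general idea, not Neuralink’s on-chip algorithm; the sampling rate, threshold multiplier, refractory period and synthetic data are assumed values.

```python
# Minimal sketch of threshold-crossing spike detection on an extracellular
# voltage trace (illustrative only; parameters are assumptions).
import numpy as np

def detect_spikes(voltage, fs=30_000, k=4.5, refractory_ms=1.0):
    """voltage: 1-D array (arbitrary units). Flags negative threshold
    crossings at k times a robust noise estimate, with a refractory period."""
    noise = np.median(np.abs(voltage)) / 0.6745          # robust noise estimate
    threshold = -k * noise                               # negative-going spikes
    crossings = np.flatnonzero((voltage[1:] < threshold) & (voltage[:-1] >= threshold))
    refractory = int(refractory_ms * fs / 1000)
    spikes, last = [], -refractory
    for idx in crossings:
        if idx - last >= refractory:                     # ignore re-crossings within the refractory window
            spikes.append(idx)
            last = idx
    return np.array(spikes) / fs                         # spike times in seconds

# Synthetic trace: Gaussian noise plus a few injected negative deflections.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 30_000)
for t in (5_000, 12_000, 21_000):
    trace[t:t + 30] -= 8.0
print("detected spike times (s):", detect_spikes(trace))
```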

The U.S. Food and Drug Administration had given the company clearance last year to conduct its first trial to test its implant on humans, a critical milestone in the startup’s ambitions to help patients overcome paralysis and a host of neurological conditions.

In September, Neuralink said it received approval for recruitment for the human trial.

The study uses a robot to surgically place a brain-computer interface (BCI) implant in a region of the brain that controls the intention to move, Neuralink said previously, adding that its initial goal is to enable people to control a computer cursor or keyboard using their thoughts alone.

A ubiquitous spectrolaminar motif of local field potential power across the primate cortex

by Diego Mendoza-Halliday, Alex James Major, Noah Lee, Maxwell J. Lichtenfeld, Brock Carlson, Blake Mitchell, Patrick D. Meng, Yihan Xiong, Jacob A. Westerberg, Xiaoxuan Jia, Kevin D. Johnston, Janahan Selvanayagam, Stefan Everling, Alexander Maier, Robert Desimone, Earl K. Miller, André M. Bastos in Nature Neuroscience

Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.

The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.

“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.

Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.

“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.

André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.

Figure: Laminar recording methods and laminar differences in LFP oscillatory power. The panels show the cortical areas recorded on the macaque brain, example laminar probe locations relative to the cortical layers (e.g., in areas LIP and MT), and relative power in the alpha-beta and gamma bands as a function of frequency and laminar depth, with depths measured relative to the alpha-beta/gamma crossover.

The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.

His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.

In addition to working memory, the brain’s cortex also is the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its distinctive combination of cell types and connections with other brain areas.

“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it’s been difficult to understand where they are in the context of those layers.”

In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and species.

Using a combination of data acquired in Miller’s lab, Desimone’s lab, and labs from collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex, from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.

Recording from individual cortical layers has been difficult in the past because each layer is less than a millimeter thick, making it hard to know which layer an electrode is recording from. For this study, electrical activity was recorded using special electrodes that record from all of the layers at once, and the data were then fed into a new computational algorithm the authors designed, termed FLIP (frequency-based layer identification procedure), which determines which layer each signal came from.

“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. It doesn’t matter if the brain is performing a task or at rest and can be observed in as little as five to 10 seconds.”
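As an illustration of the core idea behind a frequency-based layer identification procedure, the sketch below computes relative LFP power per channel, averages it in alpha-beta and gamma bands, and locates the depth at which the two profiles cross over. This is a simplified sketch, not the authors’ published FLIP code; the band limits (10–30 Hz and 50–150 Hz), sampling rate and synthetic data are assumptions.

```python
# Sketch of a frequency-based layer identification idea: find the depth at
# which alpha-beta power overtakes gamma power along a laminar probe.
import numpy as np
from scipy.signal import welch

def identify_crossover(lfp, fs=1000.0, alpha_beta=(10, 30), gamma=(50, 150)):
    """lfp: array of shape (n_channels, n_samples), channels ordered from
    superficial to deep. Returns the crossover channel and both band profiles."""
    freqs, psd = welch(lfp, fs=fs, nperseg=int(fs))          # per-channel spectra
    rel_power = psd / psd.sum(axis=0, keepdims=True)         # normalize across depth per frequency
    ab = rel_power[:, (freqs >= alpha_beta[0]) & (freqs <= alpha_beta[1])].mean(axis=1)
    g = rel_power[:, (freqs >= gamma[0]) & (freqs <= gamma[1])].mean(axis=1)
    crossover = int(np.argmax(g - ab < 0))                   # first depth where alpha-beta dominates
    return crossover, ab, g

# Synthetic probe: gamma-dominated superficial channels, alpha-beta-dominated deep channels.
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 1000.0)
lfp = np.stack([np.sin(2 * np.pi * (80 if ch < 8 else 20) * t) + 0.5 * rng.standard_normal(t.size)
                for ch in range(16)])
print("estimated alpha-beta/gamma crossover channel:", identify_crossover(lfp)[0])
```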

Across all species, in each region studied, the researchers found the same layered activity pattern.

“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.

The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it to incorporate new information, which is carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks such as baking a new cookie recipe or remembering a phone number.

“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”

Under this theory, imbalances between high- and low-frequency oscillations can lead to either attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or delusional disorders such as schizophrenia, when the low-frequency oscillations are too strong and not enough sensory information gets in.

“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”

The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.

The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.

“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.

Interplay between hippocampal TACR3 and systemic testosterone in regulating anxiety-associated synaptic plasticity

by Magdalena Natalia Wojtas, Marta Diaz-González, Nadezhda Stavtseva, Yuval Shoam, Poonam Verma, Assaf Buberman, Inbar Izhak, Aria Geva, Roi Basch, Alberto Ouro, Lucia Perez-Benitez, Uri Levy, Erika Borcel, Ángel Nuñez, Cesar Venero, Noa Rotem-Dai, Isana Veksler-Lublinsky, Shira Knafo in Molecular Psychiatry

A groundbreaking study has unveiled a significant link between anxiety disorders and a brain receptor known as TACR3, as well as testosterone.

Prof. Shira Knafo, head of the Molecular Cognitive Lab at Ben-Gurion University, led the research published last month in the journal Molecular Psychiatry.

Anxiety is a common response to stress, but for those dealing with anxiety disorders, it can significantly impact daily life. Clinical evidence has hinted at a close connection between low testosterone levels and anxiety, particularly in men with hypogonadism, a condition characterized by reduced sexual function.

However, the precise nature of this relationship has remained unclear until now. Prof. Knafo discovered that male rodents exhibiting exceedingly high anxiety levels had notably lower levels of a specific receptor called TACR3 in their hippocampus. The hippocampus is a brain region closely associated with learning and memory processes.

TACR3 is part of the tachykinin receptor family and responds to a substance known as neurokinin.

Figure: Analysis of hippocampal gene expression in rats with diverse anxiety-like behaviors. Rats were categorized by their performance in the elevated plus maze (EPM) as showing moderate (MA) or severe anxiety (SA), and their hippocampi were extracted two weeks later for gene expression analysis. The panels show EPM traces and score distributions, a volcano plot and hierarchical clustering of the 172 differentially expressed genes, and the enriched KEGG pathways, GO biological processes and GO cellular components (for a comprehensive list of genes, see 10.5281/zenodo.8305270).

This observation piqued the researchers’ curiosity and was the foundation for an in-depth investigation into the link between TACR3 deficiency, sex hormones, anxiety, and synaptic plasticity.

The rodents were classified based on their behavior in a standard elevated plus maze test measuring anxiety levels.

Subsequently, their hippocampi were isolated and underwent gene expression analysis to identify genes with varying expressions between rodents with extremely low anxiety and those with severe anxiety.

One gene that stood out was TACR3. Previous research had revealed that mutations in genes associated with TACR3 led to a condition known as “congenital hypogonadism,” resulting in reduced sex hormone production, including testosterone.

Notably, young men with low testosterone often experienced delayed sexual development, accompanied by depression and heightened anxiety. This pairing led researchers to investigate the role of TACR3 further.

Prof. Knafo and her team were aided in their research by two innovative tools they crafted themselves.

The first, known as FORTIS, detects changes in receptors critical for neuronal communication within living neurons. By utilizing FORTIS, they demonstrated that inhibiting TACR3 resulted in a sharp increase in these receptors on the cell surface, blocking the parallel process of long-term synaptic strengthening, known as LTP.

The second pioneering tool employed was a novel application of cross-correlation to measure neuronal connectivity within a multi-electrode array system. This tool played a pivotal role in uncovering the profound impact of TACR3 manipulations on synaptic plasticity.
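The sketch below illustrates the general idea of using cross-correlation between spike trains recorded on a multi-electrode array as a proxy for functional connectivity between a pair of neurons. It is a generic illustration, not the authors’ tool; the bin size, lag window and synthetic spike trains are assumptions.

```python
# Cross-correlogram of two binned spike trains as a simple connectivity proxy.
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=5, window_ms=100, duration_ms=60_000):
    """spikes_a, spikes_b: spike times in ms. Returns lags (ms) and the
    normalized cross-correlogram; positive lag means B fires after A."""
    edges = np.arange(0, duration_ms + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, bins=edges)
    b, _ = np.histogram(spikes_b, bins=edges)
    a = a - a.mean()
    b = b - b.mean()
    cc = np.correlate(b, a, mode="full") / (np.std(a) * np.std(b) * len(a) + 1e-12)
    lags = np.arange(-len(a) + 1, len(a)) * bin_ms
    keep = np.abs(lags) <= window_ms
    return lags[keep], cc[keep]

# Synthetic example: neuron B tends to fire ~10 ms after neuron A.
rng = np.random.default_rng(2)
spikes_a = np.sort(rng.uniform(0, 60_000, 500))
spikes_b = np.sort(np.concatenate([spikes_a[::2] + 10, rng.uniform(0, 60_000, 250)]))
lags, cc = cross_correlogram(spikes_a, spikes_b)
print("peak correlation at lag (ms):", lags[np.argmax(cc)])
```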

Synaptic plasticity refers to the ability of synapses, the connections between brain cells, to change their strength and efficiency. This dynamic process is fundamental for the brain’s adaptation to the environment. Through synaptic plasticity, the brain can reorganize its neural circuitry in response to new experiences. This flexibility allows for the modification of synaptic connections, enabling neurons to strengthen or weaken their communication over time.

Essentially, synaptic plasticity is a key mechanism by which the brain encodes and stores information, adapting continuously to the ever-changing external stimuli and internal states.

Importantly, the study revealed that deficiencies stemming from TACR3 inactivity could be efficiently rectified through testosterone administration, offering hope for novel approaches to addressing anxiety associated with testosterone deficiency.

TACR3 is seemingly a central player in bridging anxiety and testosterone.

The researchers have unraveled the complex mechanisms behind anxiety and opened avenues for novel therapies, including testosterone treatments, that could improve the quality of life for individuals grappling with sexual development disorders and associated anxiety and depression.

Dynamic and selective engrams emerge with memory consolidation

by Douglas Feitosa Tomé, Ying Zhang, Tomomi Aida, Olivia Mosto, Yifeng Lu, Mandy Chen, Sadra Sadeh, Dheeraj S. Roy, Claudia Clopath in Nature Neuroscience

Think of a time when you had two different but similar experiences in a short period. Maybe you attended two holiday parties in the same week or gave two presentations at work. Shortly afterward, you may find yourself confusing the two, but as time goes on that confusion recedes and you are better able to differentiate between these different experiences.

New research published in Nature Neuroscience reveals that this process occurs on a cellular level, findings that are critical to the understanding and treatment of memory disorders such as Alzheimer’s disease.

The research focuses on engrams, which are neuronal cells in the brain that store memory information. “Engrams are the neurons that are reactivated to support memory recall,” says Dheeraj S. Roy, PhD, one of the paper’s senior authors and an assistant professor in the Department of Physiology and Biophysics in the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo. “When engrams are disrupted, you get amnesia.”

In the minutes and hours that immediately follow an experience, he explains, the brain needs to consolidate the engram to store it.

“We wanted to know: What is happening during this consolidation process? What happens between the time that an engram is formed and when you need to recall that memory later?”

The researchers developed a computational model for learning and memory formation that starts with sensory information, which is the stimulus. Once that information gets to the hippocampus, the part of the brain where memories form, different neurons are activated, some of which are excitatory and others that are inhibitory.

When neurons are activated in the hippocampus, not all are going to be firing at once. As memories form, neurons that happen to be activated closely in time become a part of the engram and strengthen their connectivity to support future recall.

“Activation of engram cells during memory recall is not an all or none process but rather typically needs to reach a threshold (i.e., a percentage of the original engram) for efficient recall,” Roy explains. “Our model is the first to demonstrate that the engram population is not stable: The number of engram cells that are activated during recall decreases with time, meaning they are dynamic in nature, and so the next critical question was whether this had a behavioral consequence.”
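To make the threshold idea concrete, here is a toy sketch, not the authors’ spiking-network model, in which co-activated neurons form an engram by strengthening their mutual connections and recall succeeds only if a partial cue reactivates a threshold fraction of that engram. The network size, learning rate and threshold are arbitrary illustrative choices.

```python
# Toy engram: Hebbian-style strengthening among co-activated neurons, and
# threshold-based recall from a partial cue (illustrative values throughout).
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 200
weights = np.zeros((n_neurons, n_neurons))

# Encoding: a stimulus activates a subset of neurons (the engram) and
# connections among the co-activated neurons are strengthened.
engram = rng.choice(n_neurons, size=40, replace=False)
active = np.zeros(n_neurons, dtype=bool)
active[engram] = True
weights += 0.1 * np.outer(active, active)
np.fill_diagonal(weights, 0.0)

def recall(cue, weights, engram, threshold=0.5):
    """Recall succeeds if recurrent drive from the cued neurons reactivates
    at least `threshold` of the engram population."""
    drive = weights @ cue.astype(float)
    reactivated = drive > drive.mean() + drive.std()
    fraction = reactivated[engram].mean()
    return fraction >= threshold, fraction

# Partial cue: reactivate 60% of the original engram.
cue = np.zeros(n_neurons, dtype=bool)
cue[engram[:24]] = True
ok, frac = recall(cue, weights, engram)
print(f"recall success: {ok}, fraction of engram reactivated: {frac:.2f}")
```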

Figure: Memory consolidation renders engrams dynamic and selective. The panels show the computational model (a stimulus population driving a hippocampal network of excitatory and inhibitory neurons with plastic feedforward and recurrent synapses), the simulation protocol with training and novel stimuli and their partial recall cues, and the post-encoding evolution of engram cells: ensemble size and overlap, engram-cell firing rates, memory recall, discrimination index and synaptic weight strengths as a function of consolidation time.

“Over the consolidation period after learning, the brain is actively working to separate the two experiences and that’s possibly one reason why the numbers of activated engram cells decrease over time for a single memory,” he says. “If true, this would explain why memory discrimination gets better as time goes on. It’s like your memory of the experience was one big highway initially but over time, over the course of the consolidation period on the order of minutes to hours, your brain divides them into two lanes so you can discriminate between the two.”

Roy and the experimentalists on the team now had a testable hypothesis, which they carried out using a well-established behavioral experiment with mice. Mice were briefly exposed to two different boxes that had unique odors and lighting conditions; one was a neutral environment but in the second box, they received a mild foot shock.

A few hours after that experience, the mice, who typically are constantly moving, exhibited fear memory recall by freezing when exposed to either box.

“That demonstrated that they couldn’t discriminate between the two,” Roy says. “But by hour twelve, all of a sudden, they exhibited fear only when they were exposed to the box where they were uncomfortable during their very first experience. They were able to discriminate between the two. The animal is telling us that they know this box is the scary one but five hours earlier they couldn’t do that.”

Using a light-sensitive technique, the team was able to detect active neurons in the mouse hippocampus as the animal was exploring the boxes. The researchers used this technique to tag active neurons and later measure how many were reactivated by the brain for recall. They also conducted experiments that allowed a single engram cell to be tracked across experiences and time.

“So I can tell you literally how one engram cell or a subset of them responded to each environment across time and correlate this to their memory discrimination,” explains Roy.

The team’s initial computational studies had predicted that the number of engram cells involved in a single memory would decrease over time, and the animal experiments bore that out.

“When the brain learns something for the first time, it doesn’t know how many neurons are needed and so on purpose a larger subset of neurons is recruited,” he explains. “As the brain stabilizes neurons, consolidating the memory, it cuts away the unnecessary neurons, so fewer are required and in doing so helps separate engrams for different memories.”

The findings have direct relevance to understanding what is going wrong in memory disorders, such as Alzheimer’s disease. Roy explains that to develop treatments for such disorders, it is critical to know what is happening during the initial memory formation, consolidation and activation of engrams for recall.

“This research tells us that a very likely candidate for why memory dysfunction occurs is that there is something wrong with the early window after memory formation where engrams must be changing,” says Roy.

He is currently studying mouse models of early Alzheimer’s disease to find out if engrams are forming but not being correctly stabilized. Now that more is known about how engrams work to form and stabilize memories, researchers can examine which genes are changing in the animal model when the engram population decreases.

“We can look at mouse models and ask, are there specific genes that are altered? And if so, then we finally have something to test; we can modulate the gene for these ‘refinement’ or ‘consolidation’ processes of engrams to see if that has a role in improving memory performance,” he says.

Now at the Jacobs School, Roy conducted the research while a McGovern Fellow at the Broad Institute of Massachusetts Institute of Technology (MIT) and Harvard University. Roy is one of three neuroscientists recruited to the Jacobs School this year to launch a new focus on systems neuroscience in the school’s Department of Physiology and Biophysics.

At home adaptive dual target deep brain stimulation in Parkinson disease with proportional control

by Stephen L Schmidt, Afsana H Chowdhury, Kyle T Mitchell, Jennifer J Peters, Qitong Gao, Hui-Jie Lee, Katherine Genty, Shein-Chung Chow, Warren M Grill, Miroslav Pajic, Dennis A Turner in Brain

A team of physicians, neuroscientists and engineers at Duke University has demonstrated two new strategies that use deep brain stimulation to improve the symptoms of Parkinson’s disease.

By simultaneously targeting two key brain structures and using a novel self-adjusting device, the team showed that they could efficiently target and improve disruptive symptoms caused by the movement disorder.

For the past 20 years, physicians have prescribed deep brain stimulation, or DBS, to treat the symptoms of advanced Parkinson’s disease when medication alone will no longer work. The technique uses a device similar to a pacemaker to deliver electric impulses to key areas within the brain. This targeted stimulation can reduce tremors and stiffness and limit the involuntary, writhing movements that develop after years of medication.

While DBS has proven to be an effective therapy to address these symptoms, it isn’t perfect, and physicians and researchers continue to explore ways to make improvements.

“Physicians place the electrodes for DBS in either the subthalamic nucleus or the globus pallidus, which are two structures in the brain closely associated with movement,” said senior author Dennis Turner, professor of neurosurgery, neurobiology, and biomedical engineering at the Duke University School of Medicine, who conceived and organized the research and assembled the interdisciplinary team.

“There are benefits to both locations on their own depending on the patient’s symptoms,” Turner said, “but we believed placing the electrodes at both locations could be complementary and help reduce medication doses and side effects, as well as implement a completely new approach to adaptive DBS.”

Beyond increasing the area of stimulation, the team wanted to explore whether a technique called adaptive DBS could make their system more efficient. In traditional DBS, a physician sets key electrical parameters, like the amplitude, pulse frequency and pulse duration, to best treat symptoms while minimizing side effects. Those parameters may stay the same for days, weeks, months and even years, depending on the patient’s response.

But according to Warren Grill, the Edmund T. Pratt, Jr. School Distinguished Professor of Biomedical Engineering at Duke, these unchanging parameters are far from optimal.

“The amount of stimulation a person living with Parkinson’s needs changes, depending on their medications or activity levels. A patient will need more stimulation if they are walking their daughter down the aisle at her wedding than if they are just watching TV,” Grill said. An adaptive system is “like a smart thermostat in your office that makes adjustments based on the temperature outside.”

To implement their bespoke approach, the team worked with experimental technology provided by the medical device company Medtronic to create their own adaptive DBS techniques. By programming the device to sense and record key biomarkers and brain activity in the patient, the researchers developed a system that can adjust the parameters of stimulation automatically to provide optimal symptom relief throughout the day.

The team tested their strategies in a clinical trial at Duke University Medical Center with six patients between the ages of 55 and 65. Each had varying symptoms of Parkinson’s disease.

First, the researchers spent two years observing and testing the efficacy of stimulating both the subthalamic nucleus and the globus pallidus with the standard, continuous DBS. The results were measured using a combination of patient feedback, tracking the amount of time a patient could move without experiencing involuntary movements and recording how much a patient could reduce their medication without experiencing symptoms.

During this period, the team also ran experiments to establish the parameters for an adaptable DBS system. The team studied a specific frequency of brain activity, called beta oscillations, in the subthalamic nucleus. Previous research had shown that a high level of beta oscillations is linked to the slow, halting movement seen in most cases of Parkinson’s.

“We were able to test different levels of stimulation to determine the optimal levels of beta oscillations that would improve symptoms under different circumstances,” said Stephen Schmidt, a research and development engineer in the Grill lab. “This helped us establish the initial settings for the adaptive DBS and allowed us to compare how the adaptive and standard DBS operated in a home setting.”
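The study’s title mentions proportional control; the sketch below shows one generic form such a rule could take, adjusting stimulation amplitude in proportion to how far the sensed beta-band power is from a target level. This is an assumed illustration, not the trial’s device firmware, and every parameter value is a placeholder.

```python
# Generic proportional adaptive-DBS rule (placeholder parameters).
import numpy as np

def proportional_adaptive_dbs(beta_power, target, gain=0.5,
                              amp_min=0.5, amp_max=4.0, amp_baseline=2.0):
    """beta_power: sequence of sensed STN beta-band power values (a.u.).
    Returns the stimulation amplitude (mA) commanded at each update."""
    amplitudes = []
    for p in beta_power:
        # More beta than the target -> increase stimulation, and vice versa.
        amp = amp_baseline + gain * (p - target)
        amplitudes.append(float(np.clip(amp, amp_min, amp_max)))
    return amplitudes

# Example: beta power drifting up, e.g. as medication wears off.
beta = [1.0, 1.2, 1.8, 2.5, 3.0, 2.0, 1.1]
print(proportional_adaptive_dbs(beta, target=1.5))
```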

After two years of study with the adaptive system, the team had their results.

They found that targeting the subthalamic nucleus and the globus pallidus at the same time improved motor symptoms more than targeting either region alone. And they found that the adaptive DBS applied less stimulation but was just as effective as dual-target continuous DBS in both clinical and home settings.

“Clinically, the patients are doing phenomenally. Looking at their rating scales, they are doing better than the average DBS patient when both target areas are stimulated,” said Kyle Mitchell, Assistant Professor of Neurology at DUSM. “We’re not only seeing excellent clinical responses to dual target stimulation, but we’re also able to integrate this adaptive, smart tool into the brain that can at least match this clinical response. It’s very exciting.”

Spurred on by their initial success, the team plans to further optimize adaptive deep brain stimulation and pursue additional testing for the next stage of their clinical trials.

“This tool has great potential down the road for making DBS a more tailored and elegant therapy,” said Grill. “This is very promising research for the field of DBS, and it couldn’t have been done without the six participants who agreed to undergo this experimental work, as well as their families and caregivers. We are grateful for their significant contribution to this effort.”

A novel neuroimaging signature for ADRD risk stratification in the community

by Claudia L. Satizabal, Alexa S. Beiser, Evan Fletcher, Sudha Seshadri, Charles DeCarli in Alzheimer’s & Dementia

A ribbon of brain tissue called cortical gray matter grows thinner in people who go on to develop dementia, and this appears to be an accurate biomarker of the disease five to 10 years before symptoms appear, researchers from The University of Texas Health Science Center at San Antonio (also called UT Health San Antonio) reported.

The researchers, working with colleagues from The University of California, Davis, and Boston University, conducted an MRI brain imaging study published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association. They studied 1,000 Massachusetts participants in the Framingham Heart Study and 500 people from a California cohort.

The California volunteers included 44% representation of Black and Hispanic participants, whereas the Massachusetts cohort was predominantly non-Hispanic white.

Both cohorts were 70 to 74 years of age on average at the time of MRI studies.

“The big interest in this paper is that, if we can replicate it in additional samples, cortical gray matter thickness will be a marker we can use to identify people at high risk of dementia,” said study lead author Claudia Satizabal, PhD, of UT Health San Antonio’s Glenn Biggs Institute for Alzheimer’s and Neurodegenerative Diseases.

“By detecting the disease early, we are in a better time window for therapeutic interventions and lifestyle modifications, and to do better tracking of brain health to decrease individuals’ progression to dementia.”

Repeating the Framingham findings in the more-diverse California cohort “gives us confidence that our results are robust,” Satizabal said.

While dementias can affect different brain regions, Alzheimer’s disease and frontotemporal dementia impact the cortex, and Alzheimer’s is the most common type of dementia.

The study compared participants with and without dementia at the time of MRI.

“We went back and examined the brain MRIs done 10 years earlier, and then we mixed them up to see if we could discern a pattern that reliably distinguished those who later developed dementia from those who did not,” said co-author Sudha Seshadri, MD, director of the Glenn Biggs Institute at UT Health San Antonio and senior investigator with the Framingham Heart Study.

“This kind of study is only possible when you have longitudinal follow-up over many years as we did at Framingham and as we are building in San Antonio,” Seshadri said. “The people who had the research MRI scans while they were well and kept coming back to be studied are the selfless heroes who make such valuable discoveries, such prediction tools possible.”

The results were consistent across populations. Thicker ribbons correlated with better outcomes and thinner ribbons with worse, in general.

“Although more studies are needed to validate this biomarker, we’re off to a good start,” Satizabal said. “The relationship between thinning and dementia risk behaved the same way in different races and ethnic groups.”

Clinical trial researchers could use the thinning biomarker to minimize cost by selecting participants who haven’t yet developed any disease but are on track for it, Seshadri said. They would be at greatest need to try investigational medications, she said.

The biomarker would also be useful to develop and evaluate therapeutics, Seshadri noted. Satizabal said the team plans to explore risk factors that may be related to the thinning. These include cardiovascular risk factors, diet, genetics and exposure to environmental pollutants, she said.

“We looked at APOE4, which is a main genetic factor related to dementia, and it was not related to gray matter thickness at all,” Satizabal said. “We think this is good, because if thickness is not genetically determined, then there are modifiable factors such as diet and exercise that can influence it.”

Could the MRI gray matter biomarker be used widely someday?

“A high proportion of people going to the neurologist get their MRI done, so this thickness value might be something that a neuroradiologist derives,” Seshadri said. “A person’s gray matter thickness might be analyzed as a percentile of the thickness of healthy people for that age.”
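As a purely illustrative sketch of the percentile idea described above, the snippet below places a hypothetical patient’s cortical thickness within a synthetic age-matched normative sample; none of the numbers are data from the study.

```python
# Illustrative only: thickness expressed as a percentile of an age-matched
# normative sample (synthetic placeholder values, not study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
normative_thickness_mm = rng.normal(loc=2.4, scale=0.15, size=500)   # hypothetical healthy cohort

patient_thickness_mm = 2.1
pct = stats.percentileofscore(normative_thickness_mm, patient_thickness_mm)
print(f"patient thickness is at roughly the {pct:.0f}th percentile of age-matched controls")
```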

Functional alterations of the prefrontal circuit underlying cognitive aging in mice

by Huee Ru Chong, Yadollah Ranjbar-Slamloo, Malcolm Zheng Hao Ho, Xuan Ouyang, Tsukasa Kamigaki in Nature Communications

A team of scientists from Nanyang Technological University, Singapore (NTU Singapore) has demonstrated that communication among memory-coding neurons — nerve cells in the brain responsible for maintaining working memory — is disrupted with ageing and that this can begin in middle age.

Findings from the study, which was reported in Nature Communications, provide new insights into the ageing process of the human mind, and pave the way for therapies to maintain the mental well-being of an aging individual.

Scientists have long studied the impact of aging on the brain’s executive functions, such as poorer self-control and working memory. While it is well established that memory can worsen as people age, it has not been clear what changes occur at the individual brain neuron level to cause this — until now.

Previous studies used nerve cells from dead subjects, but the Lee Kong Chian School of Medicine (LKCMedicine) team measured the real-time activity of individual nerve cells in live mice. To make these measurements, the team adopted a recently unveiled optical imaging technique that allowed them to understand the function of each neuron by measuring its neural activity in the context of working memory.

In lab experiments, the NTU scientists investigated how neurons in mice of three different age groups — young, middle age, and old age — responded to tasks that required memory.

The researchers showed that compared to young mice, middle-aged and old mice required more training sessions to learn new tasks, indicating some decline in memory and learning abilities from middle age. But beyond that, they also found changes in the nerve cells of older mice.

Using advanced optical techniques (calcium imaging and optogenetic manipulation) that allow researchers to observe multiple individual neurons and manipulate their activity, the NTU team discovered that neurons in one part of the brain, the prefrontal cortex, showed robust memory coding ability in young mice. However, this ability to hold memory diminishes in middle-aged and old mice due to weakening connections among the neurons, which causes the mice to take longer to recall and perform tasks.

While scientists know that connections between neurons are crucial for storing memory, it had not previously been demonstrated experimentally in the live brain how age-related changes in brain cells weaken these connections.

The findings thus suggest that strengthening the weakened connections between the nerve cells, such as through memory training activities, could help delay the deterioration of people’s working memories as they age.

Lead investigator and Assistant Professor Tsukasa Kamigaki from NTU’s LKCMedicine said, “Our study highlights a significant reduction in communication among neurons responsible for encoding memories in the prefrontal cortex — a key factor in age-related working memory decline, which was a neurological process not widely understood until now. This discovery provides more evidence that proactive intervention can improve neuron communication. Examples of intervention include lifestyle changes such as cognitive training and regular exercise. These activities can potentially mitigate the impact of cognitive aging and enhance people’s overall cognitive health as they age.”

Figure: Aging effects on behavioral performance of the bimodal delayed 2-AFC task. The panels show the task design, the number of daily sessions needed to reach the behavioral criterion, and the correct rate during the imaging sessions for young (n = 11), middle-aged (n = 7) and advanced-aged (n = 5) mice, with significant group differences assessed by Wilcoxon rank-sum tests.

Further experiments also showed that the weakening connections between the nerve cells led to instability of neural circuits in the prefrontal cortex from as early as middle age, resulting in poorer ability to hold memory.

The NTU team used optogenetic technology — a method that uses genetically engineered light-sensitive ion channels in neurons that enables the control of neuronal activity through light stimulation — to briefly turn off neurons in the brain for one to two seconds and found that the working memory circuits in middle-aged mice are particularly sensitive to the short interruptions in neural activity.

Co-first author and LKCMedicine Research Assistant Huee Ru Chong said, “Our four-year study shows that the ongoing function of the prefrontal circuits is critical for memory tasks. The fact that the brain circuits showed signs of degradation from middle age highlights the need for clinical strategies to safeguard our mental well-being as early as possible.”

Co-first author and LKCMedicine Research Fellow Dr Yadollah Ranjbar-Slamloo said, “We found that the prefrontal cortex in mice stays active when they remember things, like humans. The finding suggests that mice could be a good model for studying how memory works and its aging process. Our findings, therefore, indicate that just as in mice, our brain may start to degrade early on as we age.”

LKCMedicine Associate Professor Nagaendran Kandiah, Visiting Senior Consultant Neurologist at Singapore’s National University Hospital and Khoo Teck Puat Hospital, who is not involved in the study, said, “In humans, the prefrontal cortex plays a key role in organization, retention, and retrieval of memory. The exciting findings from the NTU team provide insights into specific neural changes in the prefrontal cortex associated with ageing. This new knowledge will be of huge clinical relevance in designing cognitive interventions to delay age-related memory decline.”

Commenting as an independent expert, Dr Jun Nishiyama, Assistant Professor in the Neuroscience and Behavioural Disorders programme at Duke-NUS Medical School said:

“It is well-known that brain performance declines with aging, yet the underlying causes remained elusive. This groundbreaking study from NTU Singapore offers key neurological insights into age-related working memory decline, highlighting reduced neuronal communication in the mouse prefrontal cortex beginning from middle age. This research emphasises the importance of early, strategic interventions to combat cognitive decline, providing a vital framework for future aging research and brain health maintenance.”

The next steps for this project are to investigate more brain-wide neural changes that occur during middle age to understand how proactive interventions may enhance communication among different brain areas.

Nanoscale Magneto-mechanical-genetics of Deep Brain Neurons Reversing Motor Deficits in Parkinsonian Mice

by Wookjin Shin, Yeongdo Lee, Jueun Lim, Youbin Lee, Jungsu David Lah, Somin Lee, Jung-uk Lee, Ri Yu, Phil Hyu Lee, Jae-Hyun Lee, Minsuk Kwak, Jinwoo Cheon in Nano Letters

Electrical deep brain stimulation (DBS) is a well-established method for treating disordered movement in Parkinson’s disease. However, implanting electrodes in a person’s brain is an invasive and imprecise way to stimulate nerve cells. Researchers report in ACS’ Nano Letters a new application for the technique, called magnetogenetics, that uses very small magnets to wirelessly trigger specific, gene-edited nerve cells in the brain. The treatment effectively relieved motor symptoms in mice without damaging surrounding brain tissue.

In traditional DBS, a battery pack externally sends electrical signals through wires, activating nerve cells in a region of the brain called the subthalamic nucleus (STN). STN activation can relieve motor symptoms of Parkinson’s disease, including tremors, slowness, rigidity and involuntary movements.

However, because the potential side effects, including brain hemorrhage and tissue damage, can be severe, DBS is usually reserved for people who have late-stage Parkinson’s disease or when symptoms are no longer manageable with medication.

In a step toward a less invasive treatment, Minsuk Kwak and Jinwoo Cheon worked with their colleagues to develop a wireless method to effectively reduce motor dysfunction in people with Parkinson’s disease.

For their wireless technique, the researchers tagged nanoscale magnets with antibodies to help the molecules “stick” to the surface of STN nerve cells.

Then they injected the sticky magnets into the brains of mice with early- and late-stage Parkinson’s disease.

Prior to the injection, those same STN nerve cells had been modified with a gene that caused them to activate when the antibody-tagged magnets on the cell surface twisted in reaction to an externally applied magnetic field of about 25 milliteslas, roughly one-hundredth the strength of a typical clinical MRI scanner.

In demonstrations of the magnetized and modified neurons in mice with Parkinson’s disease, the mice exposed to a magnetic field showed improved motor function to levels comparable to those of healthy mice.

The team observed that mice that received multiple exposures to the magnetic field retained more than one-third of their motor improvements while mice that received one exposure retained almost no improvements.

Additionally, the nerve cells of treated mice showed no significant damage in and around the STN, which suggests this could be a safer alternative to traditional implanted DBS systems, the researchers say.

The team believes its wireless magnetogenetic approach has therapeutic potential and could be used to treat motor dysfunction in people with early- or late-stage Parkinson’s disease as well as other neurological disorders, such as epilepsy and Alzheimer’s disease.

Acute exercise performed before and after motor practice enhances the positive effects on motor memory consolidation

by Lasse Jespersen, Katrine Matlok Maes, Nicoline Ardenkjær-Skinnerup, Marc Roig, Jonas Rud Bjørndal, Mikkel Malling Beck, Jesper Lundbye-Jensen in Neurobiology of Learning and Memory

Violinists, surgeons and gamers can benefit from physical exercise both before and after practicing their new skills. The same holds true for anyone seeking to improve their fine motor skills. This is demonstrated by new research from the University of Copenhagen, which could, among other things, make rehabilitation more effective.

Before a violinist tunes their instrument or a surgeon stands at the training table to learn the skills needed for a new symphony or surgical procedure, they might consider heading out for a bike ride or run. Once they’ve practiced the new skill, there’s good reason to put on their workout attire again.

Indeed, being physically active and elevating one’s heart rate has the wonderful side effect of improving our ability to learn by increasing the brain’s ability to remember.

In a new study, researchers from the Department of Nutrition, Exercise and Sports have shown that this effect also applies to the formation of motor memory, enabling us to recall and perform tasks such as riding a bike, driving a car and lacing up our shoes almost automatically.

Figure: Study design and the SVAT task. Participants completed a screening session, the main experiment, and a 7-day retention test. Exercise intervals before motor practice were performed at moderate intensity (45% of Wpeak), exercise intervals after practice at high intensity (90% of Wpeak), and rest conditions consisted of seated rest on the bicycle. In the SVAT task, participants tracked target boxes appearing on a screen by modulating the pinch force applied to a load cell with the thumb and index finger; training blocks used a sequential target order, while test blocks at baseline, immediately after practice, and at 7-day retention contained both sequential and non-sequential orders. Motor performance was quantified as the percentage of time spent inside the targets; online learning served as a marker of memory encoding, offline learning as a marker of memory consolidation, and their sum as total learning.

“Our results demonstrate that there is a clear effect across the board. If you exercise before learning a skill, you will improve and remember what you have learned better. The same applies if you exercise after learning. But our research shows that the greatest effect is achieved if you exercise both before and after,” says Lasse Jespersen, first author of the study.

Specifically, the researchers see around a 10% improvement in people’s ability to remember learned motor skills when exercise is included either before or after skill practice. And the effect can be enhanced by exercising at both times.

“Things can’t go wrong if a bit of physical exercise is incorporated. A person will experience beneficial effects. This is probably because physical activity increases the brain’s ability to change, which is a prerequisite for remembering,” explains co-author Jesper Lundbye-Jensen, who heads the department’s Movement and Neuroscience section.

The effect applies to everyone, including children, adolescents and older adults, but in particular, anyone who regularly needs to learn new skills. Moreover, the effects may hold significance for individuals undergoing rehabilitation, aiming to recover mobility and lost motor skills.

Sixty-seven test subjects were involved in the research project. To ensure comparable data, all subjects were young men between the ages of 18 and 35 who were not physically or mentally impaired in ways that could limit their learning ability or physical performance.

The researchers examined the subjects’ behaviour and performance under one of four possible scenarios.

First, they either rested or exercised moderately on a bicycle. After that, they were subjected to a fine motor task in the form of a simple computer game that, with a small device on their fingertips, challenged and practiced the participants’ motor dexterity.

Next, they either had to exercise intensely on a fitness bike or rest. Thus, there was one group that rested both before and after, one that trained both times and two that trained once, either before or after. Their skill level and memory were tested again after seven days to assess whether what they had learned stuck.

As a somewhat unusual criterion, professional musicians and gamers were excluded as possible participants.

“People with extensive experience in practicing motor skills typically start at a different level. While the motor task used in the research study was unknown to all, involving experts would have changed the dynamic from the get-go. But that doesn’t mean they wouldn’t benefit from the effects we’ve shown. On the contrary, in a future study it could be exciting to investigate how exercise affects people with elite-level fine motor skills,” says Lasse Jespersen.

The enhanced motor learning effect is something everyone can benefit from. Children who are developing their motor skills are often highlighted, and previous studies with pianists have already shown that people with extraordinary motor skills also benefit from exercise.

At the other end of the spectrum, the new knowledge could make an important contribution as well. For example, among those needing to regain mobility after an accident.

“Typically, rehabilitation is divided between two or three different disciplines. In practice, this may mean that Mr. Smith will have physical training with a physiotherapist on one day, work with an occupational therapist the next and train cognitive abilities with a psychologist on the third. Our research suggests that it could be wise to plan rehabilitation so that these areas are considered together, as doing so could have a synergistic effect,” explains Jesper Lundbye-Jensen, who points out:

“Coming back often entails hard work, and even slight improvements in efficiency can mean a lot to people in that situation.”

In the long term, the researchers hope to back such recommendations with more evidence from a long-term study in which more lasting effects can be measured. A longer-term study would also let the researchers investigate whether the observed effects become even greater over a longer trial period.

Specific parts of the brain are activated when a person engages in motor practice that requires the acquisition of fine motor skills.

If the task is an activity that one knows well, like riding a bicycle, the centers are less active, but that all changes when learning something new.

The brain undergoes actual changes, which are essential for our ability to learn and remember new skills, a phenomenon known as brain plasticity. These changes occur both while the new skill is acquired through practice and in the hours afterwards, when the memory is consolidated. This is why it is meaningful to be physically active even after we’ve engaged in something new.

“In the study, we use the terms online and offline to describe these two aspects of learning — memory acquisition and retention. Both are important for us to acquire new motor skills and remember what we’ve learned,” Jesper Lundbye-Jensen explains.

Previous studies have also shown that physical exercise releases a number of neurotransmitters that, as a side benefit, promote the changes in the brain that new learning has initiated. The researchers believe this relationship produces the beneficial effects.

Spontaneous emergence of rudimentary music detectors in deep neural networks

by Kim G, Kim DK, Jeong H. in Nature Communications

Music, often referred to as the universal language, is known to be a common component of all cultures. Could ‘musical instinct’, then, be something that is shared to some degree despite the extensive environmental differences among cultures? A KAIST research team led by Professor Hawoong Jung from the Department of Physics announced that it has identified, using an artificial neural network model, the principle by which musical instincts emerge in the human brain without special learning.

Previously, many researchers attempted to identify the similarities and differences between the music of various cultures and to understand the origin of this universality. A paper published in Science in 2019 revealed that music is produced in all ethnographically distinct cultures, and that similar forms of beats and tunes are used. Neuroscientists have also found that a specific part of the human brain, the auditory cortex, is responsible for processing musical information.

Professor Jung’s team used an artificial neural network model to show that cognitive functions for music form spontaneously as a result of processing auditory information received from nature, without the network being taught music. The research team utilized AudioSet, a large-scale collection of sound data provided by Google, and trained the artificial neural network to recognize the various sounds. Interestingly, the research team discovered that certain neurons within the network model responded selectively to music. In other words, they observed the spontaneous emergence of neurons that reacted minimally to other sounds such as those of animals, nature or machines, but showed high levels of response to various forms of music, both instrumental and vocal.

Distinct representation of music in deep neural networks trained for natural sound detection without music. a Example log-Mel spectrograms of the natural sound data in AudioSet. b Architecture of the deep neural network used to detect the natural sound categories in the input data. The purple box indicates the average pooling layer. c Performance (mean average precision, mAP) of the network trained without music for music-related categories (top, red bars) and other categories (bottom, blue). n = 5 independent networks. Error bars represent mean ± SD. d Density plot of the t-SNE embedding of feature vectors obtained from the network in c. The lines represent iso-proportion lines at 80%, 60%, 40%, and 20% levels.
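One simple way to flag music-selective units, sketched here with hypothetical activation arrays rather than the authors’ pipeline, is to compare each unit’s mean response to music clips against its mean response to all other sounds:

```python
import numpy as np

# Hypothetical unit activations: rows = sound clips, columns = units in one layer.
rng = np.random.default_rng(0)
acts_music = rng.random((200, 256))     # responses to music clips
acts_other = rng.random((800, 256))     # responses to animal, nature, machine sounds

mu_music = acts_music.mean(axis=0)
mu_other = acts_other.mean(axis=0)

# Selectivity index in [-1, 1]; positive values mean stronger responses to music.
selectivity = (mu_music - mu_other) / (mu_music + mu_other + 1e-9)
music_selective_units = np.flatnonzero(selectivity > 0.3)   # threshold is arbitrary
print(len(music_selective_units), "candidate music-selective units")
```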

The neurons in the artificial neural network model showed reactive behaviors similar to those in the auditory cortex of a real brain. For example, the artificial neurons responded less to music that had been cropped into short segments and rearranged, indicating that the spontaneously generated music-selective neurons encode the temporal structure of music. This property was not limited to a specific genre but emerged across 25 different genres, including classical, pop, rock, jazz, and electronic.

Furthermore, suppressing the activity of the music-selective neurons greatly impaired recognition accuracy for the other natural sounds. That is to say, the neural function that processes musical information helps in processing other sounds, suggesting that ‘musical ability’ may be an instinct formed through evolutionary adaptation to better process sounds from nature.
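The two checks described above, temporal scrambling and silencing the music-selective units, might look roughly like this; the waveform, activations, readout weights and thresholds are all toy stand-ins, not the paper’s code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Temporal scrambling: cut a waveform into short segments and shuffle their order.
def scramble(waveform, segment_len):
    n = (len(waveform) // segment_len) * segment_len
    segments = waveform[:n].reshape(-1, segment_len)
    return rng.permutation(segments).reshape(-1)

music_clip = rng.standard_normal(16_000)                  # 1 s of toy audio at 16 kHz
scrambled_clip = scramble(music_clip, segment_len=800)    # ~50 ms segments, reordered

# Ablation: silence the music-selective units before a linear readout.
features = rng.random((100, 256))          # hypothetical layer activations (clips x units)
readout_w = rng.standard_normal((256, 30)) # hypothetical linear classifier weights
music_units = np.arange(40)                # indices of music-selective units (toy choice)

ablated = features.copy()
ablated[:, music_units] = 0.0
logits_full, logits_ablated = features @ readout_w, ablated @ readout_w
```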

Professor Hawoong Jung, who advised the research, said, “The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information in various cultures.” As for the significance of the research, he explained, “We look forward to this artificially built model with human-like musicality becoming an original model for various applications, including AI music generation, music therapy, and research into musical cognition.” He also commented on its limitations, adding, “This research, however, does not take into consideration the developmental process that follows the learning of music, and it must be noted that this is a study on the foundation of processing musical information in early development.”

A generative model of memory construction and consolidation

by Eleanor Spens, Neil Burgess in Nature Human Behaviour

Recent advances in generative AI help to explain how memories enable us to learn about the world, re-live old experiences and construct totally new experiences for imagination and planning, according to a new study by UCL researchers.

The study, published in Nature Human Behaviour and funded by Wellcome, uses an AI computational model — known as a generative neural network — to simulate how neural networks in the brain learn from and remember a series of events (each one represented by a simple scene).

The model featured networks representing the hippocampus and neocortex, to investigate how they interact.

Both parts of the brain are known to work together during memory, imagination and planning.

Lead author, PhD student Eleanor Spens (UCL Institute of Cognitive Neuroscience), said: “Recent advances in the generative networks used in AI show how information can be extracted from experience so that we can both recollect a specific experience and also flexibly imagine what new experiences might be like.

a, First, the hippocampus rapidly encodes an event, modelled as one-shot memorization in an autoassociative network (an MHN). Then, generative networks are trained on replayed representations from the autoassociative network, learning to reconstruct memories by capturing the statistical structure of experienced events. b, A more detailed schematic of the generative network to indicate the multiple layers of, and overlap between, the encoder and decoder (where layers closer to the sensory neocortex overlap more). The generation of a sensory experience, for example visual imagery, requires return projections from the decoder to the sensory neocortex via HF. c, Random noise inputs to the MHN (top row) reactivate its memories (bottom row) after 10,000 items from the Shapes3D dataset are encoded, with five examples shown. d, The generative model (a variational autoencoder) can recall images (bottom row) from a partial input (top row), following training on 10,000 replayed memories sampled from the MHN. e, Episodic memory after consolidation: a partial input is mapped to latent variables whose return projections to the sensory neocortex via HF then decode these back into a sensory experience. f, Imagination: latent variables are decoded into an experience via HF and return projections to the neocortex. g, Semantic memory: a partial input is mapped to latent variables, which capture the ‘key facts’ of the scene. The bottom rows of e–g illustrate these functions in a model that has encoded the Shapes3D dataset into latent variables (v1, v2, v3, …, vn).

“We think of remembering as imagining the past based on concepts, combining some stored details with our expectations about what might have happened.”

Humans need to make predictions to survive (e.g. to avoid danger or to find food), and the AI networks suggest how replaying memories while we rest helps our brains pick up on patterns from past experiences that can be used to make these predictions.

The researchers presented 10,000 images of simple scenes to the model. The hippocampal network rapidly encoded each scene as it was experienced. It then replayed the scenes over and over again to train the generative neural network in the neocortex.
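The hippocampal store is modelled in the figure caption above as an autoassociative modern Hopfield network (MHN), whose one-step retrieval rule is simple enough to sketch; the toy example below stores random “event” vectors and recovers the one closest to a noisy cue. It illustrates the general MHN update, not the paper’s implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mhn_retrieve(stored, cue, beta=8.0):
    """One retrieval step of a modern Hopfield network: a softmax-weighted
    blend of the stored patterns, dominated by the one most similar to the cue."""
    weights = softmax(beta * stored @ cue)
    return weights @ stored

rng = np.random.default_rng(0)
memories = rng.standard_normal((50, 64))               # 50 stored "events", 64-dim each
cue = memories[7] + 0.3 * rng.standard_normal(64)      # noisy partial cue for event 7

recalled = mhn_retrieve(memories, cue)
print(int(np.argmax(memories @ recalled)))             # 7: the matching memory wins
```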

The neocortical network learned to pass the activity of the thousands of input neurons (neurons that receive visual information) representing each scene through smaller intermediate layers of neurons (the smallest containing only 20 neurons), to recreate the scenes as patterns of activity in its thousands of output neurons (neurons that predict the visual information).
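A rough sketch of that replay-and-consolidate loop is shown below, using a plain autoencoder as a stand-in for the paper’s variational autoencoder; the layer sizes, random “scenes” and training settings are illustrative only.

```python
import torch
from torch import nn

# Neocortical network: thousands of inputs squeezed through a 20-unit bottleneck.
neocortex = nn.Sequential(
    nn.Linear(4096, 256), nn.ReLU(),
    nn.Linear(256, 20),                # smallest "conceptual" layer
    nn.Linear(20, 256), nn.ReLU(),
    nn.Linear(256, 4096),
)
optimizer = torch.optim.Adam(neocortex.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hippocampal store: a buffer of rapidly encoded (here, random) flattened scenes.
hippocampus = torch.rand(10_000, 4096)

for step in range(1_000):                                     # replay during "rest"
    replayed = hippocampus[torch.randint(0, 10_000, (64,))]   # sample stored memories
    loss = loss_fn(neocortex(replayed), replayed)             # learn to regenerate them
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```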

This caused the neocortical network to learn highly efficient “conceptual” representations of the scenes that capture their meaning (e.g. the arrangements of walls and objects) — allowing both the recreation of old scenes and the generation of completely new ones.

Consequently, the hippocampus was able to encode the meaning of new scenes presented to it, rather than having to encode every single detail, enabling it to focus resources on encoding unique features that the neocortex couldn’t reproduce — such as new types of objects.

The model explains how the neocortex slowly acquires conceptual knowledge and how, together with the hippocampus, this allows us to “re-experience” events by reconstructing them in our minds.

The model also explains how new events can be generated during imagination and planning for the future, and why existing memories often contain “gist-like” distortions — in which unique features are generalized and remembered as more like the features in previous events.

Senior author, Professor Neil Burgess (UCL Institute of Cognitive Neuroscience and UCL Queen Square Institute of Neurology), explained: “The way that memories are re-constructed, rather than being veridical records of the past, shows us how the meaning or gist of an experience is recombined with unique details, and how this can result in biases in how we remember things.”

Subscribe to Paradigm!

Medium, Twitter, Telegram, Telegram Chat, LinkedIn, and Reddit.

Main sources

Research articles

Nature Neuroscience

Science Daily

Technology Networks

Neuroscience News

Frontiers

Cell
