NS/ Neuroscientists decoded people’s thoughts using brain scans

Published in Paradigm · 28 min read · May 10, 2023

Neuroscience biweekly vol. 84, 26th April — 10th May

TL;DR

  • A new AI-based system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. Unlike other thought decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive.
  • In a paper published in Communications Biology, auditory neuroscientists at the University of Pittsburgh describe a machine learning model that helps explain how the brain recognizes the meaning of communication sounds, such as animal calls or spoken words.
  • A new study shows that sleep spindles, brief bursts of brain activity occurring during one phase of sleep and captured by EEG, may regulate anxiety in people with post-traumatic stress disorder (PTSD).
  • Neuroscientists have uncovered how exploratory actions enable animals to learn their spatial environment more efficiently. Their findings could help build better AI agents that can learn faster and require less experience.
  • A new study provides early evidence of a surge of activity correlated with consciousness in the dying brain.
  • For adolescents who may get stuck in negative thought spirals, refocusing on mental imagery is a more effective distraction than verbal thoughts, a recent study from Oregon State University found.
  • Researchers report that neuronal activity is necessary and sufficient for astrocytes to develop their complex shape, and interrupting this developmental process results in disrupted brain function.
  • Hundreds of millions of years before the evolution of animals with segmented bodies, jointed skeletons or appendages, soft-bodied invertebrates like sea slugs ruled the seas. A new study finds parallels between the brain architecture that drives locomotion in sea slugs and that of more complex segmented creatures with jointed skeletons and appendages.
  • Scientists found that blood markers of two saturated fatty acids along with certain omega-6, -7 and -9 fatty acids correlated with better scores on tests of memory and were associated with larger brain structures in the frontal, temporal, parietal and insular cortices.
  • New findings in color vision research imply that humans can perceive a greater range of blue tones than monkeys do. Distinct connections found in the human retina may indicate recent evolutionary adaptations for sending enhanced color vision signals from the eye to the brain.
  • And more!

Neuroscience market

The global neuroscience market size was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.

The latest news and research

Semantic reconstruction of continuous language from non-invasive brain recordings

by Jerry Tang, Amanda LeBel, Shailee Jain, Alexander G. Huth in Nature Neuroscience

A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.

The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is willing to have their thoughts decoded, the machine can generate corresponding text from brain activity alone while they listen to a new story or imagine telling one.

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
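Broadly, the published method pairs a language model that proposes candidate word sequences with an encoding model that predicts the brain activity each candidate should evoke; beam search then keeps the candidates whose predicted activity best matches the recording. The toy sketch below illustrates that scoring loop only. The vocabulary, random-projection embeddings, linear encoding model, and proposal stub are simplified stand-ins for illustration, not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy components standing in for the real system's
# GPT-style language model and fMRI encoding model.
VOCAB = ["i", "don't", "have", "my", "driver's", "license", "yet",
         "she", "learn", "drive"]
N_FEATS, N_VOXELS = 16, 50

EMBED = rng.normal(size=(len(VOCAB), N_FEATS))   # toy semantic embeddings
W = rng.normal(size=(N_FEATS, N_VOXELS))         # toy linear encoding model

def features(words):
    """Crude semantic representation: average of word embeddings."""
    return EMBED[[VOCAB.index(w) for w in words]].mean(axis=0)

def predict_bold(words):
    """Encoding model: predicted voxel activity for a word sequence."""
    return features(words) @ W

def propose(seq):
    """Stub for the language model: all one-word continuations."""
    return [seq + [w] for w in VOCAB]

def decode(observed, n_words=5, beam_width=3):
    """Beam search: keep sequences whose predicted activity best matches the scan."""
    beams = [[w] for w in VOCAB]
    for _ in range(n_words - 1):
        candidates = [c for seq in beams for c in propose(seq)]
        candidates.sort(key=lambda s: np.linalg.norm(predict_bold(s) - observed))
        beams = candidates[:beam_width]
    return " ".join(beams[0])

# Simulate a scan evoked by a "true" phrase, then decode it back.
scan = predict_bold(["i", "don't", "have", "my", "license"])
scan += rng.normal(scale=0.1, size=N_VOXELS)
print(decode(scan))
```

Because the scoring compares whole candidate sequences against the scan rather than reading out words directly, the output tends to preserve meaning rather than exact wording, which is consistent with the paraphrase-like examples the researchers report.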

For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”

Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.

The system currently is not practical for use outside of the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth said. “So, our exact kind of approach should translate to fNIRS,” although, he noted, the resolution with fNIRS would be lower.

Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model

by Satyabrata Parida, Shi Tong Liu, Srivatsun Sadagopan in Communications Biology

In a paper published in Communications Biology, auditory neuroscientists at the University of Pittsburgh describe a machine learning model that helps explain how the brain recognizes the meaning of communication sounds, such as animal calls or spoken words.

The algorithm described in the study models how social animals, including marmoset monkeys and guinea pigs, use sound-processing networks in their brain to distinguish between sound categories — such as calls for mating, food or danger — and act on them.

The study is an important step toward understanding the intricacies and complexities of neuronal processing that underlies sound recognition. The insights from this work pave the way for understanding, and eventually treating, disorders that affect speech recognition, and for improving hearing aids.

“More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. Understanding the biology of sound recognition and finding ways to improve it is important,” said senior author and Pitt assistant professor of neurobiology Srivatsun Sadagopan, Ph.D. “But the process of vocal communication is fascinating in and of itself. The ways our brains interact with one another and can take ideas and convey them through sound is nothing short of magical.”

Humans and animals encounter an astounding diversity of sounds every day, from the cacophony of the jungle to the hum inside a busy restaurant. However noisy the surrounding world, humans and other animals are able to communicate and understand one another, regardless of a speaker’s pitch or accent. When we hear the word “hello,” for example, we recognize its meaning regardless of whether it was said with an American or British accent, whether the speaker is a woman or a man, or if we’re in a quiet room or busy intersection.

Hierarchical structure of the computational model. The core model consisted of a dense spectrotemporal stage, a sparse feature detection stage, and a voting stage. (a) An acoustic stimulus was filtered by a cochlear filter bank to obtain a dense spectrotemporal representation of the stimulus, called a cochleagram. (b) The second stage had call-specific feature detectors (FDs) modeled as a spectrotemporal receptive field followed by a threshold. Template matching by cross-correlation (limited to the bandwidth of the FD) was used to obtain an FD membrane potential (Vm) response, and the Vm response was compared with the threshold to obtain a binary output. Each call type had a set of most informative FDs, whose STRFs and thresholds were learned during model training. (c) Finally, in the voting stage, the weighted outputs of the FDs for a given call type were combined to form the final response of the model. In this study, in addition to these stages, we extended the model to include three mechanisms (boxes with dashed lines): (1) condition-specific training, (2) contrast gain control, and (3) top–down modulation of the excitability of FDs. ACx auditory cortex, Amp amplitude, CF characteristic frequency, FD feature detector, STRF spectrotemporal receptive field.
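As a rough illustration of the three stages in the figure, the sketch below builds a toy cochleagram, scores it with spectrotemporal feature detectors via cross-correlation and thresholding, and combines the binary FD outputs in a weighted vote. All shapes, thresholds, and weights here are invented for illustration; in the real model the STRFs and thresholds are learned during training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "cochleagram": a frequency-by-time energy matrix standing in for
# the cochlear filter-bank output (stage a in the figure).
cochleagram = rng.random((32, 200))  # 32 frequency channels, 200 time bins

# Stage b: each feature detector (FD) is a small spectrotemporal
# template (STRF) plus a threshold; values here are made up.
def make_fd():
    return {"strf": rng.normal(size=(8, 20)), "threshold": 5.0,
            "band": slice(int(rng.integers(0, 24)), None)}

def fd_response(fd, cgram):
    """Template matching by cross-correlation within the FD's frequency
    band, thresholded to a binary output."""
    band = cgram[fd["band"]][:8]                    # limit to FD bandwidth
    n_steps = band.shape[1] - fd["strf"].shape[1] + 1
    vm = max(np.sum(band[:, t:t + 20] * fd["strf"]) for t in range(n_steps))
    return int(vm > fd["threshold"])

# Stage c: weighted voting over each call type's most informative FDs.
call_types = {name: [make_fd() for _ in range(5)] for name in ("mating", "alarm")}
weights = rng.random(5)

scores = {name: float(np.dot(weights, [fd_response(fd, cochleagram) for fd in fds]))
          for name, fds in call_types.items()}
print(max(scores, key=scores.get), scores)
```

The appeal of this design is that a small set of informative features, rather than a full template of each call, carries the category decision, which is what lets recognition survive pitch shifts, noise, and echoes.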

The team started with the intuition that the way the human brain recognizes and captures the meaning of communication sounds may be similar to the way it recognizes faces and distinguishes them from other objects. Faces are highly diverse but share some common characteristics.

Instead of matching every face that we encounter to some perfect “template” face, our brain picks up on useful features, such as the eyes, nose and mouth, and their relative positions, and creates a mental map of these small characteristics that define a face.

In a series of studies, the team showed that communication sounds may also be made up of such small characteristics. The researchers first built a machine learning model of sound processing to recognize the different sounds made by social animals. To test if brain responses corresponded with the model, they recorded brain activity from guinea pigs listening to their kin’s communication sounds. Neurons in regions of the brain that are responsible for processing sounds lit up with a flurry of electrical activity when the animals heard a noise that had features present in specific types of these sounds, mirroring the behavior of the machine learning model.

They then wanted to check the performance of the model against the real-life behavior of the animals.

Guinea pigs were put in an enclosure and exposed to different categories of sounds — squeaks and grunts that are categorized as distinct sound signals. Researchers then trained the guinea pigs to walk over to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.

Then, they made the tasks harder: To mimic the way humans recognize the meaning of words spoken by people with different accents, the researchers ran guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes.

Not only were the animals able to perform the task as consistently as if the calls they heard were unaltered, they continued to perform well despite artificial echoes or noise. Better yet, the machine learning model described their behavior (and the underlying activation of sound-processing neurons in the brain) perfectly.

As a next step, the researchers are working to translate the model from animal calls to human speech.

“From an engineering viewpoint, there are much better speech recognition models out there. What’s unique about our model is that we have a close correspondence with behavior and brain activity, giving us more insight into the biology. In the future, these insights can be used to help people with neurodevelopmental conditions or to help engineer better hearing aids,” said lead author Satyabrata Parida, Ph.D., postdoctoral fellow at Pitt’s department of neurobiology.

“A lot of people struggle with conditions that make it hard for them to recognize speech,” said Manaswini Kar, a student in the Sadagopan lab. “Understanding how a neurotypical brain recognizes words and makes sense of the auditory world around it will make it possible to understand and help those who struggle.”

Sleep Spindles Favor Emotion Regulation Over Memory Consolidation of Stressors in Posttraumatic Stress Disorder

by Nikhilesh Natraj, Thomas C. Neylan, Leslie M. Yack, Thomas J. Metzler, Steven H. Woodward, Samantha Q. Hubachek, Cassandra Dukes, Nikhila S. Udupa, Daniel H. Mathalon, Anne Richards in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging

A new study shows that sleep spindles, brief bursts of brain activity occurring during one phase of sleep and captured by EEG, may regulate anxiety in people with post-traumatic stress disorder (PTSD).

The study shines a light on the role of spindles in alleviating anxiety in PTSD as well as confirms their established role in the transfer of new information to longer-term memory storage. The findings challenge recent work by other researchers that has indicated spindles may heighten intrusive and violent thoughts in people with PTSD.

“These findings may be meaningful not only for people with PTSD, but possibly for those with anxiety disorders,” said senior author Anne Richards, MD, MPH, of the UCSF Department of Psychiatry and Behavioral Sciences, the Weill Institute for Neurosciences and the San Francisco VA Medical Center. “There are non-invasive ways that might harness the benefits of this sleep stage to provide relief from symptoms,” she said.

The researchers enrolled 45 participants who had all experienced combat or noncombat trauma; approximately half had moderate symptoms of PTSD and the other half had milder symptoms or were asymptomatic. The researchers studied the spindles during non-rapid eye movement 2 (NREM2) sleep, the phase of sleep when they mainly occur, which comprises about 50% of total sleep.
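For readers unfamiliar with spindles: they appear on EEG as brief (roughly 0.5 to 2 second) bursts in the sigma band, around 11 to 16 Hz, and automated detectors commonly find them by bandpass filtering and thresholding the amplitude envelope. The sketch below shows that conventional approach on synthetic data; it is a generic illustration, not the pipeline used in this study, and the sampling rate and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 256  # sampling rate in Hz; value assumed for illustration

def detect_spindles(eeg, fs=FS, lo=11.0, hi=16.0, min_dur=0.5):
    """Flag sigma-band (11-16 Hz) bursts lasting at least min_dur seconds:
    a conventional amplitude-envelope detector, not the paper's pipeline.
    Assumes the trace starts and ends below threshold."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    sigma = filtfilt(b, a, eeg)
    envelope = np.abs(hilbert(sigma))
    above = envelope > (envelope.mean() + 2 * envelope.std())
    # Extract runs of supra-threshold samples and keep the long ones.
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if len(edges) % 2:
        edges = edges[:-1]
    runs = edges.reshape(-1, 2)
    return [(s / fs, e / fs) for s, e in runs if (e - s) / fs >= min_dur]

# Synthetic 30-s trace: background noise plus one 13 Hz burst at t = 10 s.
t = np.arange(0, 30, 1 / FS)
eeg = np.random.default_rng(2).normal(scale=0.5, size=t.size)
burst = (t > 10) & (t < 11)
eeg[burst] += 2 * np.sin(2 * np.pi * 13 * t[burst])
print(detect_spindles(eeg))  # -> roughly [(10.0, 11.0)]
```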

In the study, participants attended a “stress visit” in which they were shown images of violent scenes, such as accidents, war violence, and human and animal injury or mutilation, prior to a lab-monitored nap that took place about two hours later.

Anxiety surveys were conducted immediately after exposure to the images as well as after the nap when recall of the images was tested. The researchers also compared anxiety levels in the stress visit to those in a control visit without exposure to these images.

The researchers found that the spindle rate was higher during the stress visit than during the control visit. “This provides compelling evidence that stress was a contributing factor in spindle-specific sleep rhythm changes,” said first author Nikhilesh Natraj, PhD, of the UCSF Department of Neurology, the Weill Institute for Neurosciences and the San Francisco VA Medical Center. Notably, in participants with greater PTSD symptoms, the increased spindle rate after stress exposure was associated with reduced anxiety post-nap.

The naps in the study took place shortly after exposure to violent images — raising a question about whether sleep occurring days or weeks after trauma will have the same therapeutic effect. The researchers think this is likely, and point to interventions that could trigger the spindles associated with NREM2 sleep and benefit patients with stress and anxiety disorders.

Prescription drugs, like Ambien, are one option that should be studied further, “but a big question is whether the spindles induced by medications can also bring about the full set of brain processes associated with naturally occurring spindles,” said Richards.

Electrical brain stimulation is another area for more study, researchers said. “Transcranial electrical stimulation, in which small currents are passed through the scalp to boost spindle rhythms, or so-called targeted memory reactivation, which involves a cue, like an odor or sound, used during an experimental session and replayed during sleep, may also induce spindles,” said Natraj.

“In lieu of such interventions, sleep hygiene is definitely a zero-cost and easy way to ensure we are entering sleep phases in an appropriate fashion, thereby maximizing the benefit of spindles in the immediate aftermath of a stressful episode,” he said.

The researchers’ next project is to study the role of spindles in the consolidation and replay of intrusive and violent memories many weeks after trauma exposure.

Mice identify subgoal locations through an action-driven mapping process

by Philip Shamash, Sebastian Lee, Andrew M. Saxe, Tiago Branco in Neuron

Neuroscientists have uncovered how exploratory actions enable animals to learn their spatial environment more efficiently. Their findings could help build better AI agents that can learn faster and require less experience.

Researchers at the Sainsbury Wellcome Centre and Gatsby Computational Neuroscience Unit at UCL found the instinctual exploratory runs that animals carry out are not random. These purposeful actions allow mice to learn a map of the world efficiently. The study, published in Neuron, describes how neuroscientists tested their hypothesis that the specific exploratory actions that animals undertake, such as darting quickly towards objects, are important in helping them learn how to navigate their environment.

“There are a lot of theories in psychology about how performing certain actions facilitates learning. In this study, we tested whether simply observing obstacles in an environment was enough to learn about them, or if purposeful, sensory-guided actions help animals build a cognitive map of the world,” said Professor Tiago Branco, Group Leader at the Sainsbury Wellcome Centre and corresponding author on the paper.

In previous work, scientists at SWC observed a correlation between how well animals learn to go around an obstacle and the number of times they had run to the object. In this study, Philip Shamash, SWC PhD student and first author of the paper, carried out experiments to test the impact of preventing animals from performing exploratory runs. By expressing a light-activated protein called channelrhodopsin in one part of the motor cortex, Philip was able to use optogenetic tools to prevent animals from initiating exploratory runs towards obstacles.

The team found that even though mice had spent a lot of time observing and sniffing obstacles, if they were prevented from running towards them, they did not learn. This shows that the instinctive exploratory actions themselves are helping the animals learn a map of their environment.

To explore the algorithms that the brain might be using to learn, the team worked with Sebastian Lee, a PhD student in Andrew Saxe’s lab at SWC, to run different models of reinforcement learning that people have developed for artificial agents, and observe which one most closely reproduces the mouse behaviour.

Closed-loop optogenetic activation of M2 interrupts spontaneous edge-vector runs.

There are two main classes of reinforcement learning models: model-free and model-based. The team found that under some conditions mice act in a model-free way but under other conditions, they seem to have a model of the world. And so the researchers implemented an agent that can arbitrate between model-free and model-based. This is not necessarily how the mouse brain works, but it helped them to understand what is required in a learning algorithm to explain the behaviour.
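The distinction here is that model-free agents cache action values from reward experience, while model-based agents plan over a learned model of the world. The sketch below shows one generic way to arbitrate between the two, weighting each system by a running estimate of how reliable the learned model has been. It is a minimal illustration on a toy corridor task, not the authors’ algorithm; the environment, learning rates, and reliability rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

Q = np.zeros((N_STATES, N_ACTIONS))              # model-free: tabular Q-values
counts = np.ones((N_STATES, N_ACTIONS, N_STATES))  # model-based: transition counts

def step(s, a):
    """Toy corridor: action 1 usually moves right, action 0 stays put."""
    s2 = min(s + 1, N_STATES - 1) if (a == 1 and rng.random() < 0.9) else s
    return s2, float(s2 == GOAL)

def mb_value(s, a):
    """One-step model-based value: learned probability of reaching the goal."""
    p = counts[s, a] / counts[s, a].sum()
    return p[GOAL]

def act(s, reliability):
    """Arbitrate: weight model-based values by the model's reliability."""
    if rng.random() < 0.1:                       # epsilon-greedy exploration
        return int(rng.integers(N_ACTIONS))
    values = (reliability * np.array([mb_value(s, a) for a in range(N_ACTIONS)])
              + (1 - reliability) * Q[s])
    return int(np.argmax(values))

reliability, alpha = 0.5, 0.1
for episode in range(200):
    s = 0
    for _ in range(20):
        a = act(s, reliability)
        s2, r = step(s, a)
        # Model-free TD update and model learning run side by side.
        Q[s, a] += alpha * (r + 0.9 * Q[s2].max() - Q[s, a])
        counts[s, a, s2] += 1
        # Raise reliability when the model predicted this transition well.
        pred = counts[s, a, s2] / counts[s, a].sum()
        reliability += 0.01 * (pred - reliability)
        s = s2
        if r:
            break
print(Q.round(2), f"reliability={reliability:.2f}")
```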

“One of the problems with artificial intelligence is that agents need a lot of experience in order to learn something. They have to explore the environment thousands of times, whereas a real animal can learn an environment in less than ten minutes. We think this is in part because, unlike artificial agents, animals’ exploration is not random and instead focuses on salient objects. This kind of directed exploration makes the learning more efficient and so they need less experience to learn,” explained Professor Branco.

The next steps for the researchers are to explore the link between the execution of exploratory actions and the representation of subgoals. The team are now carrying out recordings in the brain to discover which areas are involved in representing subgoals and how the exploratory actions lead to the formation of the representations.

Surge of neurophysiological coupling and connectivity of gamma oscillations in the dying human brain

by Gang Xu, Temenuzhka Mihaylova, Duan Li, Fangyun Tian, Peter M. Farrehi, Jack M. Parent, George A. Mashour, Michael M. Wang, Jimo Borjigin in Proceedings of the National Academy of Sciences

Reports of near-death experiences — with tales of white light, visits from departed loved ones, hearing voices, among other attributes — capture our imagination and are deeply engrained in our cultural landscape.

The fact that these reports share so many common elements raises the question of whether there is something fundamentally real underpinning them — and whether those who have managed to survive death are providing glimpses of a consciousness that does not completely disappear, even after the heart stops beating.

A new study published in the Proceedings of the National Academy of Sciences provides early evidence of a surge of activity correlated with consciousness in the dying brain.

The study, led by Jimo Borjigin, Ph.D., associate professor in the Department of Molecular & Integrative Physiology and the Department of Neurology, is a follow-up to animal studies her team conducted almost ten years ago in collaboration with George Mashour, M.D., Ph.D., the founding director of the Michigan Center for Consciousness Science.

Similar signatures of gamma activation were recorded in the dying brains of both animals and humans upon a loss of oxygen following cardiac arrest.

“How vivid experience can emerge from a dysfunctional brain during the process of dying is a neuroscientific paradox. Dr. Borjigin has led an important study that helps shed light on the underlying neurophysiologic mechanisms,” said Mashour.

The team identified four patients who passed away due to cardiac arrest in the hospital while under EEG monitoring. All four of the patients were comatose and unresponsive. They were ultimately determined to be beyond medical help and, with their families’ permission, removed from life support.

Upon removal of ventilator support, two of the patients showed an increase in heart rate along with a surge of gamma wave activity, considered the fastest brain activity and associated with consciousness.

Furthermore, the activity was detected in the so-called hot zone of neural correlates of consciousness in the brain, the junction between the temporal, parietal and occipital lobes in the back of the brain. This area has been correlated with dreaming, visual hallucinations in epilepsy, and altered states of consciousness in other brain studies.

Global hypoxia-induced surge of high-frequency oscillations in the brain of dying patients. (A) Absolute power of left anterior-mid temporal lobe (T3) before (S1) and after (S2 to S11) the withdrawal of ventilatory support in Pt1. The power spectrogram was presented in two separate parts (Bottom: 0 to 30 Hz; Top: 30 to 256 Hz) to highlight potentially unique features in slow waves (delta-beta). (B) Spatial and temporal dynamics of absolute power at baseline (S1) and at near-death stages (S2 to S11) in six frequency bands: delta (0 to 4 Hz), theta (4 to 8 Hz), alpha (8 to 13 Hz), beta (13 to 25 Hz), gamma1 (25 to 55 Hz), and gamma2 (80 to 150 Hz) in Pt1. (C) Temporal changes of beta, gamma1, and gamma2 power in 16 EEG loci (excluding the midline areas covered by Fz, Cz, and Pz electrodes) in Pt1. (D) Gamma power in four cortical regions (F7, F8, C3, and C4) in the four patients at baseline (S1) and after the termination of breathing support in S2.
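The banded power analysis in the figure can be approximated with standard spectral tools: estimate a power spectral density and integrate it within each band. Below is a minimal sketch using Welch’s method on a synthetic trace, with band edges taken from the caption; the sampling rate and the injected 40 Hz oscillation are assumptions for illustration, not patient data.

```python
import numpy as np
from scipy.signal import welch

FS = 512  # sampling rate in Hz; assumed for illustration

# Band edges follow the figure caption.
BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 25), "gamma1": (25, 55), "gamma2": (80, 150)}

def band_power(eeg, fs=FS):
    """Absolute power per band, integrated from a Welch PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = float(np.trapz(psd[mask], freqs[mask]))
    return out

# Synthetic trace: broadband noise plus an injected 40 Hz (gamma1) rhythm.
t = np.arange(0, 10, 1 / FS)
eeg = np.random.default_rng(4).normal(size=t.size) + 1.5 * np.sin(2 * np.pi * 40 * t)
print(band_power(eeg))  # gamma1 power dominates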

These two patients had previous reports of seizures, but no seizures during the hour before their deaths, explained Nusha Mihaylova, M.D., Ph.D., a clinical associate professor in the Department of Neurology who has collaborated with Dr. Borjigin since 2015 by collecting EEG data from deceased patients under ICU care. The other two patients did not display the same increase in heart rate upon removal from life support nor did they have increased brain activity.

Because of the small sample size, the authors caution against making any global statements about the implications of the findings. They also note that it’s impossible to know in this study what the patients experienced because they did not survive.

“We are unable to make correlations of the observed neural signatures of consciousness with a corresponding experience in the same patients in this study. However, the observed findings are definitely exciting and provide a new framework for our understanding of covert consciousness in dying humans,” she said.

Reimagining rumination? The unique role of mental imagery in adolescents’ affective and physiological response to rumination and distraction

by Hannah R. Lawrence, Greg J. Siegle, Rebecca A. Schwartz-Mette in Journal of Affective Disorders

For adolescents who may get stuck in negative thought spirals, refocusing on mental imagery is a more effective distraction than verbal thoughts, a recent study from Oregon State University found.

A short-term distraction can break up the thought spiral, which makes room for that person to then seek help from a therapist, friend or parent, said study author Hannah Lawrence, an assistant professor of psychology in OSU’s College of Liberal Arts.

“When we get stuck thinking about negative things that happened in the past, that makes us feel even worse, and it leads to more difficulties regulating our emotions and regulating our bodies,” Lawrence said. “We want to connect people to some more comprehensive strategies or skills that could get us unstuck from those thinking patterns.”

Lawrence runs the Translational Imagery, Depression and Suicide (TIDES) Lab at OSU, researching risk factors and developing effective interventions for depression in adolescents, including interventions that can be scaled up so they’re accessible to a wider population.

“These negative things are going to happen to all of us, so knowing ahead of time which tools we should pack in our toolbox that we can pull out to help lower our emotional reactions in the moment, just enough to get us out of those loops, will help us get unstuck,” she said.

The study, published in the Journal of Affective Disorders, aimed to determine which form of negative rumination — either verbal thoughts or imagery-based thoughts — caused a greater drop in the adolescent participants’ affect, or general mood, and which form of thought was more effective at distracting them and helping them break out of that negative mood.

The 145 participants were ages 13 to 17 and recruited from a rural area of New England where Lawrence conducted the research study. The group was predominantly white and 62% female. Participants also filled out a depression questionnaire, which showed that about 39% of the group experienced clinically elevated symptoms of depression.

The researchers started by inducing a negative mood in the teenage participants, using an online game designed to create feelings of exclusion. (After participants completed the study, researchers explained the game to them to help alleviate any lingering hurt feelings.)

Participants were then split into groups and prompted to ruminate, either in verbal thoughts or mental imagery; or prompted to distract themselves, also in verbal thoughts or mental imagery. In the rumination group, participants were given prompts like “Imagine the kind of person you think you should be.” In the distraction group, prompts such as “Think about your grocery list” were meant to distract them from their negative affect.

To encourage verbal thoughts, researchers had participants practice coming up with sentences in their head describing a lemon using specific words. To encourage mental imagery, they had participants practice imagining what a lemon looked like in different conditions.

Researchers used noninvasive sensors to record electrical activity of the heart and skin conductance response as a way to measure physiological responses to the various prompts. They also directed participants to rate their current emotional affect at four different points during the study.

While there was no significant difference in the adolescents’ response between the two types of rumination — both verbal thoughts and mental imagery had a similar effect on their mood — researchers found that mental imagery was significantly more effective as a distraction than verbal thoughts.

“Using mental imagery seems to help us improve our affect, as well as regulate our nervous system,” Lawrence said. “The fact that we didn’t have a significant result for ruminating in imagery versus verbal thought tells us that it doesn’t really matter what form those negative cognitions take. The part that seems really problematic is the getting-stuck part — dwelling over and over again on these sad or anxiety-inducing things that happen.”

Researchers don’t know exactly why mental imagery is so effective, but they hypothesize it’s because imagery is much more immersive and requires more effort, thus creating a stronger emotional response and a bigger distraction. There’s also some evidence that imagining mental pictures lights up the same part of the brain as seeing and experiencing those things in real life, Lawrence said.

In her work, Lawrence has found some adults seem to ruminate in only one form, while most teens report ruminating in both verbal thoughts and mental imagery. One possibility is that these thought patterns become self-reinforcing habits, she said, with the negative images or verbal messages becoming more ingrained over time.

“That’s why I like working with teenagers: If we can interrupt these processes early in development, maybe we can help these teens get to adulthood and not get stuck in these negative thinking patterns,” Lawrence said. “All of us ruminate. It’s a matter of how long we do it for, and what skills we have to stop when we want to.”

Inhibitory input directs astrocyte morphogenesis through glial GABABR

by Yi-Ting Cheng, Estefania Luna-Figueroa, Junsung Woo, Hsiao-Chi Chen, Zhung-Fu Lee, Akdes Serin Harmanci, Benjamin Deneen in Nature

Researchers at Baylor College of Medicine have unraveled the processes that give astrocytes, the most abundant glial cell in the brain, their special bushy shape, which is fundamental for brain function. They report in the journal Nature that neuronal activity is necessary and sufficient for astrocytes to develop their complex shape, and interrupting this developmental process results in disrupted brain function.

“Astrocytes play diverse roles that are vital for proper brain function,” said first author Yi-Ting Cheng, a graduate student in Dr. Benjamin Deneen’s lab at Baylor. “For instance, they support the activity of other essential brain cells, neurons; participate in the formation and function of synapses, or neuron-to-neuron connections; release neurotransmitters, chemicals that mediate neuronal communication; and make the blood-brain barrier.”

In the adult brain, the bushy shape of astrocytes is fundamentally linked to effective brain function. The ends of the branched-out astrocyte structure interact with neurons and regulate synaptic activity.

“If astrocytes lose their structure, then synapses do not behave properly and brain function goes awry,” said Deneen, professor and Dr. Russell J. and Marian K. Blattner Chair in the Department of Neurosurgery and director of the Center for Cancer Neuroscience at Baylor. He also is the corresponding author of the work. “Figuring out how astrocytes acquire their complex, bushy structure is essential to understanding how the brain develops and functions and may bring new insight into how neurodevelopmental conditions emerge. In this study, we investigated the cells and processes that direct the development of astrocyte structure.”

When astrocytes develop, neurons are already present and active, so do neurons influence how astrocytes acquire their complex shape?

“We artificially activated or silenced neurons and determined whether this would speed up or slow down astrocyte maturation,” Cheng said. “We found that neuronal activity is both necessary and sufficient to drive full astrocyte maturation into a bushy-shaped cell.”

So how do astrocytes receive the signals that direct them down the proper maturation path? Through several experimental approaches, the team discovered that neurons produce a neurotransmitter called GABA, which binds to astrocytes via a molecule on their surface named the GABAB receptor.

“We knocked out the GABAB receptor in astrocytes and activated the neurons. In this situation, the neurons did not promote the development of a typical astrocyte shape, supporting the idea that neurons communicate with astrocytes via the GABAB receptor to promote their maturation.”

“This finding was surprising and very interesting,” Deneen said. “Neurotransmitters such as GABA are known to signal between neurons at synapses, but we discovered that neurotransmitters also signal astrocytes, influencing their development by triggering changes in their structure.”

Other experiments revealed more pieces of the puzzle of how neurons lead astrocytes to develop their bushy shape.

“Neurons produce GABA, which binds to astrocytes via the GABAB receptor. In turn, this activates a series of events, including triggering the expression of another receptor called Ednrb, which drives pathways that remodel the cellular architecture underlying cell shape,” Cheng said.

The researchers also investigated another mystery related to astrocyte development. They found that regulation of the expression of GABAB receptor in astrocytes does not occur in the same way in different brain regions.

“This result was totally unexpected,” Deneen said. “The GABAB receptor is universally required for astrocytes to develop their bushy shape in all brain regions. How is it regulated differently in different areas of the brain?”

Through bioinformatics analyses, the researchers discovered that this regional regulation is conferred by two proteins, LHX2 in the brain cortex and NPAS3 in the olfactory bulb, through their region-specific interactions with the proteins SOX9 and NFIA, which are present in all astrocytes, where they regulate GABAB receptor expression. In the cortex, LHX2 binds only to NFIA, while in the olfactory bulb NPAS3 binds only to SOX9, enabling each to regulate GABAB receptor expression in a specific brain region.

Altogether, the findings suggest that astrocyte development and function involve a complex pattern of events and proteins triggered by the activity of neurons and that operate in a region-specific manner.

Coordination of Locomotion by Serotonergic Neurons in the Predatory Gastropod Pleurobranchaea californica

by Colin A. Lee, Jeffrey W. Brown, Rhanor Gillette in The Journal of Neuroscience

Hundreds of millions of years before the evolution of animals with segmented bodies, jointed skeletons or appendages, soft-bodied invertebrates like sea slugs ruled the seas. A new study finds parallels between the brain architecture that drives locomotion in sea slugs and that of more complex segmented creatures with jointed skeletons and appendages.

Reported in the Journal of Neuroscience, the study suggests that, rather than developing an entirely new set of neural circuits to govern the movement of segmented body parts, insects, crustaceans and even vertebrates like mammals adapted a network of neurons, a module, that guided locomotion and posture in much simpler organisms.

“Sea slugs may still have that module, a smallish network of neurons called the ‘A-cluster,’ with 23 neurons identified so far,” said University of Illinois Urbana-Champaign molecular and integrative physiology professor Rhanor Gillette, who led the new research.

“The question that we addressed in this study is whether the similarities we see between sea slugs and more complex creatures evolved independently or whether those with segmented body parts and appendages may have inherited their underlying neural circuitry from a soft-bodied bilateral common ancestor,” he said.

To answer that question, Gillette and his colleagues, former graduate students Colin Lee and Jeffrey Brown, videotaped sea slug movements and combined that data with recorded responses to the stimulation of nerves and specific neurons in the sea slug brain.

“The predatory sea slug we studied, Pleurobranchaea californica, uses cilia on its foot to crawl, paddling through secreted mucus,” Gillette said. “For a postural turn toward or away from a stimulus, it simply shortens one side of its body and escapes from other predators with a frantic, rocking swim — all driven by the A-cluster.”

Previous studies from Gillette’s laboratory showed that Pleurobranchaea engages in cost-benefit calculations every time it encounters another creature in the wild. If it is very hungry, the neurons that control its attack and feeding behavior are at a heightened state of arousal and it will go after nearly anything that smells like food. Under other circumstances, it will do nothing or even actively avoid the stimulus.

“This is a good idea if it doesn’t need the food and can avoid other cannibalistic Pleurobranchaea attracted by it,” Gillette said. “All these behaviors involve how the A-cluster coordinates with action choices.”

In mammals, a special hindbrain module called the reticular system translates action choices from higher brain regions into specific instructions for posture and locomotion, Gillette said. This region then sends the motor commands down to the spinal cord for final transmission to the muscles.

“In particular, the reticular system relies on critical serotonin-producing neurons to control body movements in posture and locomotion,” he said. “In the new study, we find that similar serotonin-producing neurons in the A-cluster of sea slugs are driving behaviors like pursuit, avoidance and escape.

“In their relative simplicity, the sea slugs resemble in many ways the expected simpler ancestor of today’s complex animals,” Gillette said. “All the major circuit modules of action choice, translating that choice into motor commands, and motor pattern-generation found in the nervous systems of complex animals are also identifiable in the simpler soft-bodied sea slugs.”

The study offers the first evidence that the circuits driving locomotion in animals with complex bodies and behaviors “have close functional analogies in the simpler gastropod mollusks and may share a common inheritance,” Gillette said.

Integrating Nutrient Biomarkers, Cognitive Function, and Structural MRI Data to Build Multivariate Phenotypes of Healthy Aging

by Tanveer Talukdar, Christopher E. Zwilling, Aron K. Barbey in The Journal of Nutrition

In a new study, scientists explored the links between three measures known to independently predict healthy aging: nutrient intake, brain structure and cognitive function. Their analysis adds to the evidence that these factors jointly contribute to brain health in older adults.

Reported in the Journal of Nutrition, the study found that blood markers of two saturated fatty acids, along with certain omega-6, -7 and -9 fatty acids, correlated with better scores on tests of memory and with larger brain structures in the frontal, temporal, parietal and insular cortices.

While other studies have found one-to-one associations between individual nutrients or classes of nutrients and specific brain regions or functions, very little research takes a comprehensive look at brain health, cognition and broad dietary patterns overall, said Aron Barbey, a professor of psychology, bioengineering and neuroscience at the University of Illinois Urbana-Champaign who led the study with postdoctoral researcher Tanveer Talukdar and psychology research scientist Chris Zwilling. The three co-authors all are affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I.

“Our findings reveal that we can use nutrient biomarkers, cognitive tests and MRI measures of brain structure to account for much of the variation in healthy aging,” Barbey said. “This allows us to better understand how nutrition contributes to health, aging and disease.”

The researchers collected data from 111 healthy older adults with MRI structural scans, blood-based biomarkers of 52 dietary nutrients and cognitive performance on tests of memory and intelligence. By combining these measures using a data-fusion approach, the team found associations between dozens of features that appear to work in tandem to promote brain and cognitive health in older adults.

Data-fusion allows researchers to look across multiple data sets to map traits or features that have common patterns of variability, said Talukdar, who tailored this method to incorporate the nutrition, cognition and brain volumetric data.

“We’re looking at relationships among all of these together,” he said. “This allows us to identify certain features that cluster together.”
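The article does not specify the exact fusion algorithm, but canonical correlation analysis is one standard way to find patterns of variability shared across data sets. The sketch below applies scikit-learn’s CCA to synthetic stand-ins for two of the study’s modalities; the latent-factor data generation and the dimensions, beyond the 52 biomarkers and 111 participants mentioned in the article, are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
n = 111  # matches the study's sample size; the data here are synthetic

# Synthetic stand-ins: 52 nutrient biomarkers and, say, 20 regional
# gray-matter volumes sharing one latent "healthy aging" factor.
latent = rng.normal(size=(n, 1))
nutrients = latent @ rng.normal(size=(1, 52)) + rng.normal(size=(n, 52))
volumes = latent @ rng.normal(size=(1, 20)) + rng.normal(size=(n, 20))

# CCA finds paired weightings of the two data sets whose projections
# are maximally correlated: features that "cluster together".
cca = CCA(n_components=1)
x_scores, y_scores = cca.fit_transform(nutrients, volumes)
r = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")
```

Looking for jointly varying features across all three measures at once, rather than testing one nutrient against one brain region at a time, is what lets this kind of analysis surface the clustered associations the team describes.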

This overcomes some of the limitations of analyzing individual factors, Barbey said.

“If we just look at nutrition as it relates to brain structures and we don’t study cognition, or if we look at nutrition as it relates to cognition and we don’t study the brain, then we’re actually missing really important pieces of information.”

The most obvious features that clustered together in the new analysis involved the size of gray-matter volumes in the frontal, temporal and parietal cortices; performance on tests of auditory memory and short- and long-term memory; and blood markers related to consumption of monounsaturated and polyunsaturated fatty acids. Study participants who scored higher on the memory tests tended to have larger gray-matter volumes and higher levels of markers of omega-6, -7 and -9 fatty acids in their blood. Those who did more poorly on the cognitive tests also had smaller gray-matter volumes in those brain regions and lower levels of those dietary markers, the analysis revealed.

While the study only reveals associations between these factors and does not prove that dietary habits directly promote brain health, it adds to the evidence that nutrition is a key player in healthy aging, the researchers said.

“Our work motivates a more comprehensive picture of healthy aging,” Zwilling said. “This gives insight into the importance of diet and nutrition and the value of data-fusion methods for studying their contributions to adult development and the neuroscience of aging.”

Comparative connectomics reveals noncanonical wiring for color vision in human foveal retina

by Yeon Jin Kim, Orin Packer, Andreas Pollreisz, Paul R. Martin, Ulrike Grünert, Dennis M. Dacey in Proceedings of the National Academy of Sciences

New findings in color vision research imply that humans can perceive a greater range of blue tones than monkeys do.

“Distinct connections found in the human retina may indicate recent evolutionary adaptations for sending enhanced color vision signals from the eye to the brain,” researchers report in the scientific journal Proceedings of the National Academy of Sciences.

Yeon Jin Kim, acting instructor, and Dennis M. Dacey, professor, both in the Department of Biological Structure at the University of Washington School of Medicine in Seattle, led the international, collaborative project.

They were joined by Orin S. Packer of the Dacey lab; Andreas Pollreisz at the Medical University of Vienna, Austria; as well as Paul R. Martin, professor of experimental ophthalmology, and Ulrike Grünert, associate professor of ophthalmology and visual science, both at the University of Sydney, Australia, and the Save Sight Institute.

The scientists compared connections between color-transmitting nerve cells in the retinas of humans with those in two monkeys, the Old World macaque and the New World common marmoset. The ancestors of modern humans diverged from these two other primate species approximately 25 million years ago.

By using a fine-scale microscopic reconstruction method, the researchers wanted to determine whether the neural wiring of the areas associated with color vision is conserved across these three species, despite each taking its own independent evolutionary pathway.

The scientists looked at the lightwave-detecting cone cells of the fovea of the retina. This small dimple is densely packed with cone cells. It is the part of the retina responsible for the sharp visual acuity needed to see important details, such as words on a page or what’s ahead while driving, and for color vision.

Cone cells come in three sensitivities: short, medium and long wavelengths. Information about color comes from neural circuits that process information across different cone types.

The researchers discovered that a certain short-wave or blue sensitive cone circuit found in humans is absent in marmosets. It is also different from the circuit seen in the macaque monkey. Other features the scientists found in the nerve cell connections in human color vision were not expected, based on earlier nonhuman primate color vision models.

A better understanding of the species-specific, complex neural circuitry that codes for color perception could eventually help explain the origins of the color vision qualities that are distinct to humans.

The researchers also mentioned the possibility that differences among mammals in their visual circuitry could have been at least partially shaped by their behavioral adaptations to ecological niches. Marmosets live in trees, whereas humans dwell on the ground. The ability to spot ripe fruit amid the shifting light of a forest, for example, may have offered a selective advantage for particular color vision circuitry. However, actual effects of environment and behavior on color vision circuitry have not yet been established.

More generally, comparative studies of neural circuits at the level of connections and signaling between nerve cells, the researchers noted, could help answer many other questions. These include elucidating the underlying logic of neural circuit design and providing insight into how evolution has modified the nervous system to help shape perception and behavior.

Identification of the S-cone pathway in marmoset retina. (A) Volume reconstructions of a cluster of 6 cone pedicles in parafoveal retina (~300 µm eccentricity). The view is toward the outer retina directed at the pedicle synaptic face. One of the 6 pedicles shown (S cone 7, purple, white asterisk, at center of the pedicle cluster) is notably smaller in diameter than the surrounding cones (LM cones, yellow to brown). The LM cone pedicles in the cluster were interconnected by a few, fine-diameter telodendria (arrows; see also SI Appendix, Fig. S2 A–D). (B) Single-layer SEM image shows a section through the S cone pedicle marked with the white asterisk in A. All central elements (located opposite to synaptic ribbons, red arrowheads) arose from a morphologically distinct BB cell; flat and invaginating midget bipolar connections were lacking. (C) Complete reconstruction of a single Blue cone bipolar (BB) circuit. The BB cell (medium blue) densely contacts the S cone and projects an axon to the inner border of the IPL where it makes predominant synaptic output to a small bistratified ganglion cell type (SBGC, purple). (D) Rotated and zoomed inset shows that S cone 7 was targeted by two additional BB cells (BB 2 and BB 3). BBs 1 to 3 accounted for all central elements at the S cone pedicle (synaptic ribbon positions indicated by the red balls). (E) Rotated and zoomed view of the BB 1 to SBGC projection shows the intimate enwrapping of the BB 1 cell axon terminal with the SBGC inner dendrites.
Identification of the S-cone, blue cone bipolar synaptic pathway in human retina with mixed cone-type connectivity. (A) Volume reconstructions of a cluster of 5 cone pedicles in parafoveal retina (~500 µm retinal eccentricity). The view is toward the outer retina directed at the pedicle synaptic face. Two of the 5 pedicles shown were identified as S cones (S cones 5 and 92, purple) by their postsynaptic connectivity. All cone pedicles (LM cones, yellow to brown), including the S cones, were linked to their neighbors by distinctive telodendritic contacts (arrows, red arrowheads) extending from the pedicle base. The cone axons extend vertically downward beyond this image. (B, Left) A section through the S cone marked with the red asterisk in A illustrates a blue cone bipolar (BB) dendrite terminating as a central element (red arrowhead points to synaptic ribbon). The flat midget bipolar (FMB, teal) is also shown along with part of its Landolt’s club (white arrowhead). A putative site of S–LM cone gap junction is indicated by the open white arrowhead. The complete reconstruction of this blue cone bipolar cell is shown in (C–E). (B, Right) Example of the same BB cell (blue) as in the Left panel forming an invaginating central element at a synaptic triad (arrowhead) of LM cone 24; a comparable central element arising from an invaginating midget bipolar (asterisk) is also indicated. (C) The BB cell (light blue) densely contacts the S cone and projects an axon to the inner border of the IPL where it makes predominant synaptic output to a small bistratified ganglion cell type (SBGC, purple). (D) A rotated and zoomed view of panel (C) (Top) shows that this BB (BB 1) cell not only makes dense contact with S cone pedicle 5 but also makes sparse invaginating contacts (red arrowheads) with neighboring LM cone pedicles 24 and 3. (E) A rotated and zoomed view of the BB to SBGC projection shows the intimate enwrapping of the BB cell axon terminal with the SBGC inner dendrites.

Subscribe to Paradigm!

Medium, Twitter, Telegram, Telegram Chat, LinkedIn, and Reddit.
