NS/ How does our brain make a coherent image?
Neuroscience biweekly vol. 82, 29th March — 12th April
TL;DR
- When we look at something, the different properties of the image are processed in different brain regions. But how does our brain make a coherent image out of such a fragmented representation? New research sheds light on two existing hypotheses in the field. When we open our eyes, we immediately see what is there. The efficiency of our vision is a remarkable achievement of evolution. The introspective ease with which we perceive our visual surroundings masks the sophisticated machinery in our brain that supports visual perception. The image we see is rapidly analyzed by a complex hierarchy of cortical and subcortical brain regions.
- Intelligence is partly heritable. There are studies that show that certain genetic variations are linked to better performance in intelligence tests. Other studies show that a variety of brain characteristics, such as network efficiency, are related to intelligence. For the first time, researchers have now studied all three parameters — genes, different brain characteristics and behavior — simultaneously. Using gene analyses, magnetic resonance imaging and intelligence tests, the team demonstrated which brain characteristics form the link between genes and behavior.
- A very subtle and seemingly random type of eye movement called ocular drift can be influenced by prior knowledge of the expected visual target, suggesting a surprising level of cognitive control over the eyes, according to a study led by Weill Cornell Medicine neuroscientists.
- Is our brain able to regenerate? And can we harness this regenerative potential during aging or in neurodegenerative conditions? These questions sparked intense controversy within the field of neuroscience for many years. A new study from the Netherlands Institute for Neuroscience shows why there are conflicting results and proposes a roadmap on how to solve these issues.
- New research at the University of Massachusetts Amherst zeroes in on the root cause of adverse health effects from disruption of the body’s circadian rhythms, which typically occurs from jet lag and rotating work shifts.
- A new mouse study has identified a gene-enzyme interaction that appears to play a key role in how the brain forms memories. The findings provide insights into how PDE inhibitor medications may help diseases like Alzheimer’s.
- Researchers report that the flow of cerebrospinal fluid in the brain is linked to waking brain activity. The study demonstrates that manipulating blood flow in the brain with visual stimulation induces complementary fluid flow. The findings could impact treatment for conditions like Alzheimer’s disease, which have been associated with declines in cerebrospinal fluid flow.
- Using artificial intelligence, researchers have discovered how to screen for genetic mutations in cancerous brain tumors in under 90 seconds — and possibly streamline the diagnosis and treatment of gliomas, a study suggests. The newly developed system, DeepGlioma, identified mutations used by the World Health Organization to define molecular subgroups of diffuse glioma with an average accuracy of over 90%.
- A common amino acid, glycine, can deliver a “slow-down” signal to the brain, likely contributing to major depression, anxiety, and other mood disorders in some people, scientists at the Wertheim UF Scripps Institute for Biomedical Innovation & Technology have found.
- Can an individual’s social status have an impact on their level of stress? Researchers at Tulane University put that question to the test and believe that social rank, particularly in females, does indeed affect the stress response.
- And more!
Neuroscience market
The global neuroscience market was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.
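For context, those figures imply a compound annual growth rate of roughly 2.9% over the 11-year span (a back-of-the-envelope calculation, not a number quoted from the market report):

```latex
\mathrm{CAGR} = \left(\frac{38.9}{28.4}\right)^{1/11} - 1 \approx 0.029 \quad (\approx 2.9\%\ \text{per year})
```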
The latest news and research
Solving the binding problem: Assemblies form when neurons enhance their firing rate — they don’t need to oscillate or synchronize
by Roelfsema PR. in Neuron
Neurons in low-level brain regions extract basic features such as line orientation, depth and the color of local image elements. They send the information to several mid-level brain areas. Neurons in these areas code for other features, such as motion direction, color and shape fragments. Neurons in mid-level areas send the information to yet higher levels for an even more abstract analysis of the visual scene. Neurons at these higher levels code for the category of objects and even for the identity of specific individuals. Hence, every visual object activates a complex representation that is carried by a large set of neurons across many brain regions.
An important question is how the distributed and fragmented representations of objects across many areas of the visual brain can lead to a unified perception of objects against a background. This review focuses on this so-called “binding problem”. The binding problem arises when there are multiple objects. Each object activates a pattern of neurons across many brain regions, and in such a representation it may not be evident which features belong to one object and which belong to the others. Which process glues the features into coherent object representations?
Pieter Roelfsema: ‘When we process visual information, our cells only look at a small section of the overall picture. You get a palette of cells all focusing on different fragments. There is not one cell where this information comes together. It was previously thought that synchronization of the cells was important to solve the binding problem. It was thought that cells coding features of the same object synchronize their activity in a rhythm. The cells coding for features of another object would then synchronize in a different rhythm. Many scientists have invested time and energy into this theory, but we now know that it works differently.’
‘It turns out that we focus our attention on one object at a time. The neurons that code for features of the attended object do not need to synchronize, but their activity increases. It is possible to observe several objects at the same time, but determining which properties belong to which object requires attention. When there are multiple objects on the table, we are often not actively concerned with which properties belong to which object. However, when we want to grasp one of them, we direct our attention to that object, and only then does the grouping of image properties become relevant.’
‘The binding problem is therefore not solved by synchronization, but by increased firing of the cells. Many scientists still believe in the synchronization theory, while we have known for years that it is incorrect. This new review lists the evidence for and against the two binding theories.’
Structural architecture and brain network efficiency link polygenic scores to intelligence
by Erhan Genç, Dorothea Metzen, Christoph Fraenz, Caroline Schlüter, Manuel C. Voelkle, Larissa Arning, Fabian Streit, Huu Phuc Nguyen, Onur Güntürkün, Sebastian Ocklenburg, Robert Kumsta in Human Brain Mapping
Intelligence is partly heritable. There are studies that show that certain genetic variations are linked to better performance in intelligence tests. Other studies show that a variety of brain characteristics, such as network efficiency, are related to intelligence. For the first time, researchers have now studied all three parameters — genes, different brain characteristics and behavior — simultaneously. Using gene analyses, magnetic resonance imaging and intelligence tests, the team demonstrated which brain characteristics form the link between genes and behavior.
The results are described by a team led by Dorothea Metzen from the Department of Biopsychology at Ruhr University Bochum, Germany, and Dr. Erhan Genç, formerly at Ruhr University and now at the Leibniz Research Centre for Working Environment and Human Factors in Dortmund (IfADo).
In addition to the IfADo and various institutions of Ruhr University, the Humboldt-Universität Berlin, the Central Institute of Mental Health in Mannheim, the Medical School Hamburg and the University of Luxembourg were involved. At Ruhr University, the biopsychology, human genetics and genetic psychology teams cooperated.
The team studied 557 subjects aged between 18 and 75 years. Using saliva samples, they analyzed how many gene variants associated with high intelligence each individual carried.
“There are thousands of genes that contribute to intelligence,” explains Dorothea Metzen. “We calculated a summary score for each person that reflects the genetic predisposition for high intelligence.”
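In essence, such a polygenic score is a weighted sum of allele counts. Here is a minimal sketch of the computation, with hypothetical variant IDs, weights and genotypes (real scores are built from genome-wide association effect sizes across many thousands of variants):

```python
# Minimal illustration of a polygenic score: a weighted sum of allele counts.
# All variant IDs, effect weights and genotypes here are hypothetical.

# GWAS-derived effect weight per variant (beta for the effect allele)
effect_weights = {"rs0001": 0.021, "rs0002": -0.013, "rs0003": 0.008}

# One person's genotype: number of effect alleles carried (0, 1, or 2)
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

def polygenic_score(weights, alleles):
    """Sum of (effect weight x effect-allele count) across variants."""
    return sum(w * alleles.get(snp, 0) for snp, w in weights.items())

print(polygenic_score(effect_weights, genotype))  # approx 0.05
```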
In addition, all subjects took part in brain scans, which the researchers used to determine not only the thickness and surface area of the cerebral cortex, but also how efficiently the structural and functional networks in the brain are organized. All participants also completed an intelligence test.
“The breadth and detail of the data recorded in this study are, as far as I am aware, unprecedented,” emphasizes Erhan Genç. “For the first time, we looked at the triad of genes, different brain characteristics and behavioral traits as a whole.”
Specifically, the group analyzed which differences in genetic variations are related to differences in brain characteristics and differences in behavior.
When the team only looked at the connection between genetic variations and brain characteristics — that is, disregarding intelligence test results — they found numerous associations in many regions distributed across the entire brain. Significantly fewer associations were apparent when the researchers investigated which brain characteristics were associated with intelligence test performance. When they considered all three parameters at once — genes, brain characteristics and intelligence test performance — an association was found in only a few brain areas in the frontal, parietal and visual cortex. This means that there are only specific areas in the brain where gene variations influence brain characteristics, and these characteristics simultaneously affect intelligence. The decisive brain characteristics were the size of the brain surface and the efficiency of structural connectivity. The researchers found very few such connections between genes, brain and behavior when they examined the thickness of the cerebral cortex and the efficiency of functional connectivity.
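One way to formalize looking at all three parameters at once is a mediation-style analysis: testing whether a brain characteristic statistically carries the association between genetic score and test performance. Below is a toy sketch on simulated data, illustrating the logic rather than the study’s actual statistical pipeline:

```python
# Toy mediation-style analysis: genetic score -> brain characteristic -> IQ.
# All data are simulated; this illustrates the logic, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n = 557                                   # matches the study's sample size
gene = rng.normal(size=n)                 # stand-in polygenic score
brain = 0.4 * gene + rng.normal(size=n)   # stand-in cortical surface area
iq = 0.5 * brain + 0.1 * gene + rng.normal(size=n)

# Path a: genetic score -> brain characteristic
a = np.polyfit(gene, brain, 1)[0]

# Path b: brain characteristic -> IQ, controlling for the genetic score
X = np.column_stack([brain, gene, np.ones(n)])
b = np.linalg.lstsq(X, iq, rcond=None)[0][0]

print(f"indirect (gene -> brain -> IQ) effect: {a * b:.3f}")  # roughly 0.4 * 0.5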
With their study, the researchers hope to have established a method that can be transferred to other areas, since it allows the interplay of genes, brain and behavior to be studied not only for intelligence but also for other traits.
“It would also be interesting if such methods were used in the future with larger cohorts of thousands or tens of thousands of test subjects,” says Erhan Genç, because that would improve the quality of the results. “Studying the impact of age would also be an interesting future research project,” adds Genç.
Cognitive influences on fixational eye movements
by Yen-Chu Lin, Janis Intoy, Ashley M. Clark, Michele Rucci, Jonathan D. Victor in Current Biology
A very subtle and seemingly random type of eye movement called ocular drift can be influenced by prior knowledge of the expected visual target, suggesting a surprising level of cognitive control over the eyes, according to a study led by Weill Cornell Medicine neuroscientists.
The discovery adds to the scientific understanding of how vision — far from being a mere absorption of incoming signals from the retina — is controlled and directed by cognitive processes.
“These eye movements are so tiny that we’re not even conscious of them, and yet our brains somehow can use the knowledge of the visual task to control them,” says study lead author Dr. Yen-Chu Lin, who carried out the work as a Fred Plum Fellow in Systems Neurology and Neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.
Dr. Lin works in the laboratory of study senior author Dr. Jonathan Victor, the Fred Plum Professor of Neurology at Weill Cornell Medicine.
The study involved close collaboration with the laboratory of Dr. Michele Rucci, professor of brain and cognitive sciences and neuroscience at the University of Rochester.
Neuroscientists have known for decades that information stored in memory can strongly shape the processing of sensory inputs, including the streams of visual data coming from the eyes. In other words, what we see is influenced by what we expect to see or the requirements of the task at hand.
Most studies of cognitive control over eye movement have covered more obvious movements, such as the “saccade” movements in which the eyes dart across large parts of the visual field. In the new study, Drs. Lin and Victor and their colleagues examined ocular drift, tiny jitters of the eye that occur even when gaze seems fixed. Ocular drifts are subtle motions that shift a visual target on the retina by distances on the order of a fraction of a millimeter — across just a few dozen photoreceptors (cones). They are thought to improve detection of small, stationary details in a visual scene by scanning across them, effectively converting spatial details into trains of visual signals in time.
Prior studies had suggested that ocular drift and other small-scale “fixational eye movements” are under cognitive control only in a broad sense — for example, slowing when scanning across more finely detailed scenes. In the new study, the researchers found evidence for a more precise type of control.
Using sensitive equipment in Dr. Rucci’s laboratory, the researchers recorded ocular drifts in six volunteers who were asked to identify which of a pair of letters (H vs. N, or E vs. F) was being shown to them on a background of random visual noise. Based on computational modeling, the scientists expected that optimal eye movements for discriminating between letters would cross the key elements distinguishing the letters at right angles. Thus, they hypothesized that a more precise cognitive control, if it existed, would tend to direct ocular drift in both vertical and oblique (lower left to upper right) directions for the H vs. N discrimination, compared to more strictly vertical movements for the E vs. F discrimination.
They found that the subjects’ eye movements did indeed tend to follow these patterns — even in the 20 percent of trials in which the subjects, though expecting to see a letter, were shown only noise. The latter result showed that the cognitive control of ocular drift could be driven solely by specific prior knowledge of the visual task, independently of any incoming visual information.
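For readers curious how drift direction can be quantified, here is a minimal sketch on a synthetic gaze trace; the sampling rate, units and noise levels are invented, and the study used high-precision eye tracking with its own analysis methods:

```python
# Toy estimate of net ocular drift direction from a synthetic gaze trace.
# All parameters are invented; real data come from precision eye trackers.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 0.5, 0.001)                        # 500 ms sampled at 1 kHz
x = 1.0 * t + rng.normal(scale=0.02, size=t.size)   # arcmin, small horizontal part
y = 12.0 * t + rng.normal(scale=0.02, size=t.size)  # arcmin, mostly vertical drift

# Net drift direction over the trial, folded to 0-180 deg (an axis, not a sign)
angle = np.degrees(np.arctan2(y[-1] - y[0], x[-1] - x[0])) % 180
print(f"net drift direction: {angle:.1f} deg (90 deg = vertical)")
```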
“These results underscore the interrelationship between the sensory and the motor parts of vision — one really can’t view them separately,” said Dr. Victor, who is also a professor of neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell.
He noted that the direction of fine eye movements is thought to come from neurons in the brainstem, whereas the task knowledge presumably resides in the upper brain: the cortex — implying some kind of non-conscious connection between them.
“The subjects are aware of the tasks they have to do, yet they don’t know that their eyes are executing these tiny movements, even when you tell them,” Dr. Victor said.
Studies of this pathway, he added, could lead to better insights not only into the neuroscience of vision, but possibly also visual disorders — which traditionally have been seen as disorders of the retina or sensory processing within the brain.
“What our findings suggest is that visual disorders may sometimes have a motor component too, since optimal vision depends on the brain’s ability to execute these very tiny movements,” Dr. Victor said.
Mapping human adult hippocampal neurogenesis with single-cell transcriptomics: Reconciling controversy or fueling the debate?
by Giorgia Tosoni, Dilara Ayyildiz, Julien Bryois, Will Macnair, Carlos P. Fitzsimons, Paul J. Lucassen, Evgenia Salta in Neuron
Is our brain able to regenerate? And can we harness this regenerative potential during aging or in neurodegenerative conditions? These questions sparked intense controversy within the field of neuroscience for many years. A new study from the Netherlands Institute for Neuroscience shows why there are conflicting results and proposes a roadmap on how to solve these issues.
The notion of exploiting the regenerative potential of the human brain in aging or neurological diseases represents a particularly attractive alternative to conventional strategies for enhancing or restoring brain function, especially given the current lack of effective therapeutic strategies in neurodegenerative disorders like Alzheimer’s disease. The question of whether the human brain possesses the ability to regenerate has been at the center of a fierce scientific debate for many years, and recent studies have yielded conflicting results. A new study from Giorgia Tosoni and Dilara Ayyildiz, under the supervision of Evgenia Salta in the laboratory of Neurogenesis and Neurodegeneration, critically discusses and re-analyzes previously published datasets. How is it possible that we haven’t yet found a clear answer to this mystery?
Previous studies in which dividing cells were labeled in postmortem human brain showed that new cells can indeed arise throughout adulthood in the hippocampus, a structure that plays an important role in learning and memory, and is also severely affected in Alzheimer’s disease. However, other studies contradict these results and cannot detect the generation of new brain cells in this area. Both conceptual and methodological confounders have likely contributed to these seemingly opposing observations. Hence, elucidating the extent of regeneration in the human brain remains a challenge.
Recent advances in single-cell transcriptomics technologies have provided valuable insights into the different cell types found in human brains from deceased donors with different brain diseases. To date, single-cell transcriptomic technologies have been used to characterize rare cell populations in the human brain. In addition to identifying specific cell types, single-nucleus RNA sequencing can also explore specific gene expression profiles to unravel the full complexity of the cells in the hippocampus.
The advent of single-cell transcriptomics technologies was initially viewed as a panacea for resolving the controversy in the field. However, recent single-cell RNA sequencing studies in the human hippocampus yielded conflicting results. Two studies indeed identified neural stem cells, while a third study failed to detect any neurogenic populations. Are these novel approaches — once again — failing to finally settle the controversy regarding the existence of hippocampal regeneration in humans? Will we eventually be able to overcome the conceptual and technical challenges and reconcile these seemingly opposing views and findings?
In this study, the researchers critically discussed and re-analyzed previously published single-cell transcriptomics datasets. They caution that the design, analysis and interpretation of these studies in the adult human hippocampus can be confounded by specific issues, which call for conceptual, methodological and computational adjustments. By re-analyzing previously published datasets, they probed a series of specific challenges that require particular attention and would greatly benefit from an open discussion in the field.
Giorgia Tosoni:
‘We analyzed previously published single-cell transcriptomic studies and performed a meta-analysis to assess whether adult neurogenic populations can reliably be identified across different species, especially when comparing mice and humans. The neurogenic process in adult mice is very well characterized and the profiles of the different cellular populations involved are known. These are actually the same molecular and cellular signatures that have been widely used in the field to also identify neurogenic cells in the human brain. However, due to several evolutionary adaptations, we would expect the neurogenesis between mice and humans to be different. We checked the markers for every neurogenic cell type and looked at the amount of marker overlap between the two species.’
‘We found very little, if any, overlap between the two, which suggests that the mouse-inferred markers we have long been using may not be suitable for the human brain. We also discovered that such studies require enough statistical power: if regeneration of neuronal cells does happen in the adult human brain, we expect it to be quite rare. Therefore, enough cells would need to be sequenced in order to identify those scarce, presumably neurogenic populations. Other parameters are also important, for example the quality of the samples. The interval between the death of the donor and the downstream processing is critical, since the quality of the tissue and of the resulting data drops over time.’
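The cross-species overlap Tosoni describes can be expressed with a simple set statistic such as the Jaccard index. Here is a small sketch with placeholder gene symbols rather than the study’s actual marker sets:

```python
# Jaccard overlap between mouse- and human-derived marker gene sets.
# Gene lists are placeholders; the study compared published marker sets.
def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|; 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

mouse_markers = {"Dcx", "Sox2", "Nes", "Mki67"}     # hypothetical
human_markers = {"DCX", "STMN1", "PROX1", "CALB2"}  # hypothetical

# Compare case-insensitively, since mouse and human gene symbols differ in case
overlap = jaccard({g.upper() for g in mouse_markers},
                  {g.upper() for g in human_markers})
print(f"Jaccard overlap: {overlap:.2f}")  # low overlap -> markers may not transfer
```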
Dilara Ayyildiz:
‘These novel technologies, when appropriately applied, offer a unique opportunity to map hippocampal regeneration in the human brain and explore which cell types and states may be possibly most amenable to therapeutic interventions in aging, neurodegenerative and neuropsychiatric diseases. However, reproducibility and consistency are key. While doing the analysis we realized that some seemingly small, but otherwise very critical details and parameters in the experimental and computational pipeline, can have a big impact on the results, and hence affect the interpretation of the data.’
‘Accurate reporting is essential for making these single-cell transcriptomics experiments and their analysis reproducible. Once we re-analyzed these previous studies applying common computational pipelines and criteria, we realized that the apparent controversy in the field may in reality be misleading: with our work we propose that there may actually be more that we agree on than previously believed.’
Adult Neurogenesis Is Altered by Circadian Phase Shifts and the Duper Mutation in Female Syrian Hamsters
by Michael Seifu Bahiru, Eric L. Bittman in eNeuro
New research at the University of Massachusetts Amherst zeroes in on the root cause of adverse health effects from disruption of the body’s circadian rhythms, which typically occurs from jet lag and rotating work shifts.
The research, published in the journal eNeuro, also shows that the circadian clock gene Cryptochrome 1 (Cry 1) regulates adult neurogenesis — the ongoing formation of neurons in the brain’s hippocampus. Adult neurogenesis supports learning and memory, and its disruption has been linked to dementia and mental illness.
“Circadian disruption impacts a lot of things,” says lead author Michael Seifu Bahiru, a Ph.D. candidate in the lab of Eric Bittman, Professor Emeritus of Biology. “There are links to cancer, diabetes and hypertension, as well as adverse impacts on neurogenesis.”
Cell birth and survival in the adult hippocampus are regulated by a circadian clock, so its disruption may throw off the process of neurogenesis. In the U.S. alone, some 30 million people experience phase shifts in their circadian rhythms as they work rotating schedules.
Until recently, researchers have faced a sort of chicken-or-egg question.
“We always wondered what actually is the root cause of the ailments from circadian disruption?” Bahiru says. “Does the problem come from the act of shifting or the shift itself?”
Bittman explains further, “It’s possible it’s just changing the light cycle that affects neurogenesis, that jerking your clock around is bad for you, as opposed to the jet lag, which is the time delay that it takes for all circadian-dependent systems in your body to adjust to this change in daylight.”
Their findings support the hypothesis that it’s this internal misalignment, this state of desynchrony between and within organs that occurs during jet lag, that is responsible for the adverse impact on neurogenesis — and, they suspect, other adverse health effects from circadian disruption.
To test their hypothesis, they studied cell birth and differentiation in Syrian hamsters with a recessive mutation in the Cry 1 gene that speeds up the clock in constant conditions and dramatically accelerates its ability to shift in response to light. Bittman named the mutation, discovered in previous research, duper. The research team also tested a control group of hamsters without the duper mutation. Both underwent the same sequence of changes in the light cycle.
They simulated jet lag in the form of eight-hour advances and delays at eight 16-day intervals. A cell birth marker was given in the middle of the experiment. Results showed that jet lag has little effect on cell birth but steers the fate of newborn cells away from becoming neurons. Dupers are immune to this effect of phase shifts.
“As predicted, the duper animals re-entrained quicker, but also were resistant to the negative effects of the jet lag protocol, whereas the control — the wild-type hamsters — had reduced neurogenesis,” Bahiru says.
“The findings indicate that circadian misalignment is critical in jet lag,” the paper concludes.
The ultimate goal of Bittman’s lab is to advance understanding of the pathways involved in human biological clocks, which could lead to the prevention of or treatment for the effects of jet lag, shift work and circadian rhythm disorders. This latest research is the next step toward that goal.
Now the team will turn to “a big unanswered question,” Bittman says — “whether it’s the operation of circadian clocks in the hippocampus that is being directly regulated by shifts of the light:dark cycle, or whether neurogenesis is controlled by biological clocks running in cells elsewhere in the body.”
Another possibility, which Bittman thinks is more likely, is that the master pacemaker in the suprachiasmatic nucleus of the hypothalamus in the brain detects the light shift and then relays it to the stem cell population that has to divide and differentiate in the hippocampus.
Arrestin-dependent nuclear export of phosphodiesterase 4D promotes GPCR-induced nuclear cAMP signaling required for learning and memory
by Joseph M. Martinez, Ao Shen, Bing Xu, Aleksandra Jovanovic, Josephine de Chabot, Jin Zhang, Yang K. Xiang in Science Signaling
The process by which memories are formed in the hippocampus region of the brain is complex. It relies on a precise choreography of interactions between neurons, neurotransmitters, receptors and enzymes.
A new mouse study led by researchers at the UC Davis School of Medicine has identified an intricate molecular process involving gene expression in the neurons that appears to play a critical role in memory consolidation. The research was published in Science Signaling.
“This is an exciting mechanism. It shows that an enzyme like phosphodiesterase is key in controlling gene expression necessary for memory consolidation,” said Yang K. Xiang, a professor in the Department of Pharmacology and senior author of the paper.
Xiang’s research focuses on understanding how dysregulation or impairment of cellular and molecular mechanisms in the heart and brain can lead to diseases like heart failure and Alzheimer’s.
The new study focuses on the brain’s central adrenergic system, which controls the ability to pay attention, a capacity essential for learning and memory.
To understand the components critical for memory, the researchers looked at beta-2 adrenergic receptors. The receptors are present in different cell types throughout the body. They are also found on nerve cells in the hippocampal region of the brain.
The researchers show that when beta-2 adrenergic receptors are activated — through a series of molecular steps known as a signaling pathway — they stimulate the nucleus of the neuron to export an enzyme, phosphodiesterase 4D5 (PDE4D5).
Previous studies have identified PDE4D5 as having a role in promoting learning and memory.
A crucial step to stimulating this memory-related gene expression — the export of PDE4D5 — appears to be the attachment of a phosphate group (known as phosphorylation) to the receptor. This is accomplished by an enzyme known as a kinase.
The kinase involved in this case is a G protein-coupled receptor kinase (GRK).
The researchers used genetically altered mice to test whether phosphorylation of the beta-2 adrenergic receptors by GRK was necessary for the export of PDE4D5 and the memory-related gene expression that follows.
The mice lacked a phosphorylation site on their beta-2 adrenergic receptors, meaning their neurons could not follow the normal signaling pathway when the receptors were activated.
The researchers found that, as expected, these genetically altered mice exhibited poor memory related to space and location. This is the same memory pathway that is disrupted during the early stages of Alzheimer’s disease.
However, when they provided the memory-impaired mice with a drug known as a PDE4 inhibitor (which blocks the enzyme’s activity, mimicking the effect of the nuclear export that would normally occur), the mice’s ability to learn and retain memories improved.
“The gene expression forms the material foundation of the memory in your brain. If you don’t have gene expression, you won’t have memory,” Xiang explained.
The use of PDE inhibitors is being explored for Alzheimer’s disease. Studies of the PDE5 inhibitor sildenafil, known as Viagra, have had mixed results. A 2021 NIH study found Viagra was associated with a reduced risk of Alzheimer’s disease, but a later study found Viagra was not associated with lower Alzheimer’s risk.
“We need to understand what is causing impairment in diseases like Alzheimer’s so we can find interventions that allow patients to regain ability or slow down the disease progression,” said Xiang. “This study highlights the potential of PDE inhibitors in rescuing memory in Alzheimer’s patients.”
Neural activity induced by sensory stimulation can drive large-scale cerebrospinal fluid flow during wakefulness in humans
by Stephanie D. Williams, Beverly Setzer, Nina E. Fultz, Zenia Valdiviezo, Nicole Tacugue, Zachary Diamandis, Laura D. Lewis in PLOS Biology
Researchers at Boston University, USA, report that the flow of cerebrospinal fluid in the brain is linked to waking brain activity. Led by Stephanie Williams, and publishing in the open access journal PLOS Biology, the study demonstrates that manipulating blood flow in the brain with visual stimulation induces complementary fluid flow. The findings could impact treatment for conditions like Alzheimer’s disease, which have been associated with declines in cerebrospinal fluid flow.
Just as our kidneys help remove toxic waste from our bodies, cerebrospinal fluid helps remove toxins from the brain, particularly while we sleep. Reduced flow of cerebrospinal fluid is known to be related to declines in brain health, such as occur in Alzheimer’s disease. Based on evidence from sleep studies, the researchers hypothesized that brain activity while awake could also affect the flow of cerebrospinal fluid. They tested this hypothesis by simultaneously recording human brain activity via fMRI and the speed of cerebrospinal fluid flow while people were shown a checkered pattern that turned on and off.
The researchers first confirmed that the checkered pattern induced brain activity: blood oxygenation recorded by fMRI increased when the pattern was visible and decreased when it was turned off. Next, they found that the flow of cerebrospinal fluid negatively mirrored the blood signal, increasing when the checkered pattern was off. Further tests showed that changing how long the pattern was visible affected blood and fluid flow in a predictable way, and that the blood-cerebrospinal fluid link could not be accounted for by breathing or heart-rate rhythms alone.
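At its core, the reported negative mirroring is an anticorrelation between two time series. Here is a minimal sketch with synthetic signals; the block timing, noise levels and effect sizes below are invented rather than taken from the study:

```python
# Toy illustration: anticorrelation between a blood (BOLD) signal and CSF inflow.
# Both signals are synthetic; the study measured them with fMRI during stimulation.
import numpy as np

t = np.arange(0, 300, 1.0)               # 5 minutes at 1-s sampling
stim = np.sin(2 * np.pi * t / 60) > 0    # 30 s on / 30 s off stimulus blocks
rng = np.random.default_rng(2)

bold = np.where(stim, 1.0, -1.0) + rng.normal(scale=0.3, size=t.size)
csf = -0.8 * np.where(stim, 1.0, -1.0) + rng.normal(scale=0.3, size=t.size)

r = np.corrcoef(bold, csf)[0, 1]
print(f"correlation(blood, CSF) = {r:.2f}")  # strongly negative, around -0.9
```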
Although the study did not measure waste clearance from the brain, it establishes that simple exposure to a flashing pattern can increase the flow of cerebrospinal fluid, which could be a way to combat natural or unnatural declines in fluid flow that occur with age or disease.
Laura Lewis, senior author of the study, adds, “This study discovered that we can induce large changes in cerebrospinal fluid flow in the awake human brain, by showing images with specific patterns. This result identifies a noninvasive way to modulate fluid flow in humans.”
Artificial-intelligence-based molecular classification of diffuse gliomas using rapid, label-free optical imaging
by Todd Hollon, Cheng Jiang, Asadur Chowdury, Mustafa Nasir-Moin, Akhil Kondepudi, Alexander Aabedi, Arjun Adapa, Wajd Al-Holou, Jason Heth, Oren Sagher, Pedro Lowenstein, Maria Castro, Lisa Irina Wadiura, Georg Widhalm, Volker Neuschmelting, David Reinecke, Niklas von Spreckelsen, Mitchel S. Berger, Shawn L. Hervey-Jumper, John G. Golfinos, Matija Snuderl, Sandra Camelo-Piragua, Christian Freudiger, Honglak Lee, Daniel A. Orringer in Nature Medicine
Using artificial intelligence, researchers have discovered how to screen for genetic mutations in cancerous brain tumors in under 90 seconds — and possibly streamline the diagnosis and treatment of gliomas, a study suggests.
A team of neurosurgeons and engineers at Michigan Medicine, in collaboration with investigators from New York University, University of California, San Francisco and others, developed an AI-based diagnostic screening system called DeepGlioma that uses rapid imaging to analyze tumor specimens taken during an operation and detect genetic mutations more rapidly.
In a study of more than 150 patients with diffuse glioma, the most common and deadly primary brain tumor, the newly developed system identified mutations used by the World Health Organization to define molecular subgroups of the condition with an average accuracy of over 90%.
“This AI-based tool has the potential to improve the access and speed of diagnosis and care of patients with deadly brain tumors,” said lead author and creator of DeepGlioma Todd Hollon, M.D., a neurosurgeon at University of Michigan Health and assistant professor of neurosurgery at U-M Medical School.
Molecular classification is increasingly central to the diagnosis and treatment of gliomas, as the benefits and risks of surgery vary among brain tumor patients depending on their genetic makeup. In fact, patients with a specific type of diffuse glioma called astrocytomas can gain an average of five years with complete tumor removal compared to other diffuse glioma subtypes.
However, access to molecular testing for diffuse glioma is limited and not uniformly available at centers that treat patients with brain tumors. When it is available, Hollon says, the turnaround time for results can take days, even weeks.
“Barriers to molecular diagnosis can result in suboptimal care for patients with brain tumors, complicating surgical decision-making and selection of chemoradiation regimens,” Hollon said.
Prior to DeepGlioma, surgeons did not have a method to differentiate the molecular subtypes of diffuse glioma during surgery. First conceived in 2019, the system combines deep neural networks with stimulated Raman histology, an optical imaging method also developed at U-M, to image brain tumor tissue in real time.
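As a rough sketch of the underlying task only, and not the published DeepGlioma model, molecular screening of this kind amounts to multi-label classification of mutation status from image patches:

```python
# Rough sketch of the core task: multi-label mutation prediction from image patches.
# Not the published DeepGlioma architecture; layers, sizes and labels are illustrative.
import torch
import torch.nn as nn

class MutationClassifier(nn.Module):
    """Small CNN encoder + multi-label head for three mutation markers."""
    def __init__(self, n_labels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_labels)  # one logit per mutation

    def forward(self, x):
        return self.head(self.encoder(x))

model = MutationClassifier()
patch = torch.randn(1, 3, 300, 300)   # a tumor-image patch, 3-channel here for illustration
probs = torch.sigmoid(model(patch))   # independent per-mutation probabilities
loss_fn = nn.BCEWithLogitsLoss()      # the standard multi-label training objective
print(probs.shape)                    # torch.Size([1, 3])
```

A multi-label head lets each marker be predicted independently, which fits the fact that molecular alterations can co-occur within a single tumor.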
“DeepGlioma creates an avenue for accurate and more timely identification that would give providers a better chance to define treatments and predict patient prognosis,” Hollon said.
Even with optimal standard-of-care treatment, patients with diffuse glioma face limited treatment options. The median survival time for patients with malignant diffuse gliomas is only 18 months.
While the development of medications to treat the tumors is essential, fewer than 10% of patients with glioma are enrolled in clinical trials, which often limit participation by molecular subgroups. Researchers hope that DeepGlioma can be a catalyst for early trial enrollment.
“Progress in the treatment of the most deadly brain tumors has been limited in the past decades, in part because it has been hard to identify the patients who would benefit most from targeted therapies,” said senior author Daniel Orringer, M.D., an associate professor of neurosurgery and pathology at NYU Grossman School of Medicine, who developed stimulated Raman histology. “Rapid methods for molecular classification hold great promise for rethinking clinical trial design and bringing new therapies to patients.”
Orphan receptor GPR158 serves as a metabotropic glycine receptor: mGlyR
by Thibaut Laboute, Stefano Zucca, Matthew Holcomb, Dipak N. Patil, Chris Garza, Brittany A. Wheatley, Raktim N. Roy, Stefano Forli, Kirill A. Martemyanov in Science
A common amino acid, glycine, can deliver a “slow-down” signal to the brain, likely contributing to major depression, anxiety and other mood disorders in some people, scientists at the Wertheim UF Scripps Institute for Biomedical Innovation & Technology have found.
The discovery, outlined in the journal Science, improves understanding of the biological causes of major depression and could accelerate efforts to develop new, faster-acting medications for such hard-to-treat mood disorders, said neuroscientist Kirill Martemyanov, Ph.D., corresponding author of the study.
“Most medications for people with depression take weeks before they kick in, if they do at all. New and better options are really needed,” said Martemyanov, who chairs the neuroscience department at the institute in Jupiter.
Major depression is among the world’s most urgent health needs. Its numbers have surged in recent years, especially among young adults. As depression’s disability, suicide numbers and medical expenses have climbed, a study by the U.S. Centers for Disease Control and Prevention in 2021 put its economic burden at $326 billion annually in the United States.
Martemyanov said he and his team of students and postdoctoral researchers have spent many years working toward this discovery. They didn’t set out to find a cause, much less a possible treatment route for depression. Instead, they asked a basic question: How do sensors on brain cells receive and transmit signals into the cells? Therein lay the key to understanding vision, pain, memory, behavior and possibly much more, Martemyanov suspected.
“It’s amazing how basic science goes. Fifteen years ago we discovered a binding partner for proteins we were interested in, which led us to this new receptor,” Martemyanov said. “We’ve been unspooling this for all this time.”
In 2018 the Martemyanov team found the new receptor was involved in stress-induced depression. If mice lacked the gene for the receptor, called GPR158, they proved surprisingly resilient to chronic stress.
That offered strong evidence that GPR158 could be a therapeutic target, he said. But what sent the signal?
A breakthrough came in 2021, when his team solved the structure of GPR158. What they saw surprised them. The GPR158 receptor looked like a microscopic clamp with a compartment — akin to something they had seen in bacteria, not human cells.
“We were barking up the completely wrong tree before we saw the structure,” Martemyanov said. “We said, ‘Wow, that’s an amino acid receptor. There are only 20, so we screened them right away and only one fit perfectly. That was it. It was glycine.’”
That wasn’t the only odd thing. The signaling molecule was not an activator in the cells, but an inhibitor. The business end of GPR158 connected to a partnering molecule that hit the brakes rather than the accelerator when bound to glycine.
“Usually receptors like GPR158, known as G protein-coupled receptors, bind G proteins. This receptor was binding an RGS protein, which is a protein that has the opposite effect of activation,” said Thibaut Laboute, Ph.D., a postdoctoral researcher from Martemyanov’s group and first author of the study.
Scientists have been cataloging the role of cell receptors and their signaling partners for decades. Those that still don’t have known signalers, such as GPR158, have been dubbed “orphan receptors.”
The finding means that GPR158 is no longer an orphan receptor, Laboute said. Instead, the team renamed it mGlyR, short for “metabotropic glycine receptor.”
“An orphan receptor is a challenge. You want to figure out how it works,” Laboute said. “What makes me really excited about this discovery is that it may be important for people’s lives. That’s what gets me up in the morning.”
Laboute and Martemyanov are listed as inventors on a patent application describing methods to study GPR158 activity. Martemyanov is a cofounder of Blueshield Therapeutics, a startup company pursuing GPR158 as a drug target.
Glycine itself is sold as a nutritional supplement billed as improving mood. It is a basic building block of proteins and affects many different cell types, sometimes in complex ways. In some cells, it sends slow-down signals, while in other cell types, it sends excitatory signals. Some studies have linked glycine to the growth of invasive prostate cancer.
More research is needed to understand how the body maintains the right balance of mGlyR receptors and how brain cell activity is affected, Martemyanov said. He intends to keep at it.
“We are in desperate need of new depression treatments,” Martemyanov said. “If we can target this with something specific, it makes sense that it could help. We are working on it now.”
Female dominance hierarchies influence responses to psychosocial stressors
by Lydia Smith-Osborne, Anh Duong, Alexis Resendez, Rupert Palme, Jonathan P. Fadok in Current Biology
Can an individual’s social status have an impact on their level of stress? Researchers at Tulane University put that question to the test and believe that social rank, particularly in females, does indeed affect the stress response.
In a study published in Current Biology, Tulane psychology professor Jonathan Fadok, PhD, and postdoctoral researcher Lydia Smith-Osborne looked at two forms of psychosocial stress — social isolation and social instability — and how they manifest themselves based on social rank.
They conducted their research on adult female mice, putting them in pairs and allowing them to form a stable social relationship over several days. In each pair, one of the mice had high, or dominant, social status, while the other was considered the subordinate, with relatively low social status. After establishing a baseline, they monitored changes in behavior, stress hormones and neuronal activation in response to chronic social stress.
“We analyzed how these different forms of stress impact behavior and the stress hormone corticosterone (an analogue of the human hormone, cortisol) in individuals based on their social rank,” said Fadok, an assistant professor in the Tulane Department of Psychology and the Tulane Brain Institute. “We also looked throughout the brain to identify brain areas that are activated in response to psychosocial stress.”
“We found that not only does rank inform how an individual responds to chronic psychosocial stress, but that the type of stress also matters,” said Smith-Osborne, a DVM/PhD and the first author on the study.
She discovered that mice with lower social status were more susceptible to social instability, which is akin to ever-changing or inconsistent social groups. Those with higher rank were more susceptible to social isolation, or loneliness.
There were also differences in the parts of the brain that became activated by social encounters, based upon the social status of the animal responding to it and whether they had experienced psychosocial stress.
“Some areas of a dominant animal’s brain would react differently to social isolation than to social uncertainty, for example,” Smith-Osborne said. “And this was also true for subordinates. Rank gave the animals a unique neurobiological ‘fingerprint’ for how they responded to chronic stress.”
Do the researchers think the results can translate to people? Perhaps, Fadok said.
“Overall, these findings may have implications for understanding the impact that social status and social networks have on the prevalence of stress-related mental illnesses such as generalized anxiety disorder and major depression,” he said. “However, future studies that use more complex social situations are needed before these results can translate to humans.”
MISC
Subscribe to Paradigm!
Medium, Twitter, Telegram, Telegram Chat, LinkedIn, and Reddit.
Main sources
Research articles