NS/ Tracing the evolution of the ‘little brain’
Neuroscience biweekly vol. 99, 22nd November — 6th December
TL;DR
- The evolution of higher cognitive functions in humans has so far mostly been linked to the expansion of the neocortex. Researchers are increasingly realizing, however, that the ‘little brain’ or cerebellum also expanded during evolution and probably contributes to the capacities unique to humans. A research team has now generated comprehensive genetic maps of the development of cells in the cerebella of humans, mice and opossums. Comparisons of these maps reveal both ancestral and species-specific cellular and molecular characteristics of cerebellum development.
- The gene-editing technology CRISPR shows early promise as a therapeutic strategy for the aggressive and difficult-to-treat brain cancer known as primary glioblastoma, according to new findings.
- An international team has shown that the injection of a type of stem cell into the brains of patients living with progressive multiple sclerosis (MS) is safe, well tolerated and has a long-lasting effect that appears to protect the brain from further damage.
- Researchers have discovered that a part of the brain associated with working memory and multisensory integration may also play an important role in how the brain processes social cues. Previous research has shown that neurons in the ventrolateral prefrontal cortex (VLPFC) integrate faces and voices — but new research shows that neurons in the VLPFC play a role in processing both the identity of the ‘speaker’ and the expression conveyed by facial gestures and vocalizations.
- Using a specialized device that translates images into sound, neuroscientists showed that people who are blind recognized basic faces using the part of the brain known as the fusiform face area, a region that is crucial for the processing of faces in sighted people.
- Scientists can now pinpoint where someone is looking just by listening to their ears. Following a discovery that the ears emit subtle sounds when the eyes move, a new report finds that decoding the sounds reveals where your eyes are looking. These faint ear sounds may fine-tune perception and could be used to develop innovative hearing tests.
- A team of researchers found that caffeic-acid-based Carbon Quantum Dots (CACQDs), which can be derived from spent coffee grounds, have the potential to protect brain cells from the damage caused by several neurodegenerative diseases.
- Facemap uses a mouse’s facial movements to predict brain activity, bringing researchers one step closer to understanding brain-wide signals driven by spontaneous behaviors.
- What is the mechanism that allows our brains to incorporate new information about the world, and form memories? New work by a team of neuroscientists shows that learning occurs through the continuous formation of new connectivity patterns between specific engram cells in different regions of the brain.
Neuroscience market
The global neuroscience market size was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.
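For reference, the two figures above imply a fairly modest compound annual growth rate. A quick back-of-the-envelope check (assuming the quoted 2016 and 2027 endpoints):

```python
# Implied compound annual growth rate (CAGR) for the figures above:
# USD 28.4B in 2016 growing to USD 38.9B by 2027 (11 years).
start, end, years = 28.4, 38.9, 2027 - 2016

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # roughly 2.9% per year
```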
The latest news and research
Cellular development and evolution of the mammalian cerebellum
by Mari Sepp, Kevin Leiss, Florent Murat, Konstantin Okonechnikov, Piyush Joshi, Evgeny Leushkin, Lisa Spänig, Noe Mbengue, Céline Schneider, Julia Schmidt, Nils Trost, Maria Schauer, Philipp Khaitovich, Steven Lisgo, Miklós Palkovits, Peter Giere, Lena M. Kutscher, Simon Anders, Margarida Cardoso-Moreira, Ioannis Sarropoulos, Stefan M. Pfister, Henrik Kaessmann in Nature
The evolution of higher cognitive functions in humans has so far mostly been linked to the expansion of the neocortex. Researchers are increasingly realizing, however, that the “little brain” or cerebellum also expanded during evolution and probably contributes to the capacities unique to humans. A Heidelberg research team has now generated comprehensive genetic maps of the development of cells in the cerebella of human, mouse and opossum. Comparisons of these maps reveal both ancestral and species-specific cellular and molecular characteristics of cerebellum development.
“Although the cerebellum, a structure at the back of the skull, contains about 80 percent of all neurons in the whole human brain, this was long considered a brain region with a rather simple cellular architecture,” explains Prof. Kaessmann. In recent times, however, evidence of pronounced heterogeneity within this structure has been growing, says the molecular biologist.
The Heidelberg researchers have now systematically classified all cell types in the developing cerebellum of human, mouse and opossum.
To do so they first collected molecular profiles from almost 400,000 individual cells using single-cell sequencing technologies.
They also employed procedures enabling spatial mapping of the cell types.
On the basis of these data the scientists noted that in the human cerebellum the proportion of Purkinje cells — large, complex neurons with key functions in the cerebellum — is almost double that of mouse and opossum in the early stages of fetal development.
This increase is primarily driven by specific subtypes of Purkinje cells that are generated first during development and likely communicate with neocortical areas involved in cognitive functions in the mature brain.
“It stands to reason that the expansion of these specific types of Purkinje cells during human evolution supports higher cognitive functions in humans,” explains Dr Mari Sepp, a postdoctoral researcher in Prof. Kaessmann’s research group “Functional evolution of mammalian genomes.”
Using bioinformatic approaches, the researchers also compared the gene expression programmes in cerebellum cells of human, mouse and opossum.
These programmes are defined by the fine-tuned activities of a myriad of genes that determine the types into which cells differentiate in the course of development.
Genes with cell-type-specific activity profiles were identified that have been conserved across species for about 160 million years of evolution.
According to Henrik Kaessmann, this suggests that they are important for fundamental mechanisms that determine cell type identities in the mammalian cerebellum.
At the same time, the scientists identified over 1,000 genes with activity profiles differing between human, mouse and opossum.
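The kind of comparison involved can be illustrated with a toy example (hypothetical expression values and cell types, not the study's data or pipeline): a conserved gene keeps a similar cell-type activity profile in both species, while a gene that acquired a new activity profile does not.

```python
import numpy as np

# Toy illustration (not the authors' pipeline): compare a gene's activity
# profile across cell types in two species. A conserved gene keeps a
# similar profile; a gene with a new activity profile does not.
cell_types = ["progenitor", "Purkinje", "granule", "interneuron"]

# Hypothetical mean expression per cell type (arbitrary units)
human_geneA = np.array([0.1, 5.0, 0.2, 0.3])   # Purkinje-specific in human...
mouse_geneA = np.array([0.2, 4.5, 0.1, 0.4])   # ...and in mouse: conserved
mouse_geneB = np.array([4.8, 0.2, 0.3, 0.1])   # progenitor-specific in mouse
human_geneB = np.array([0.3, 0.2, 5.1, 0.2])   # granule-specific in human: diverged

def profile_similarity(a, b):
    """Pearson correlation between two cell-type activity profiles."""
    return float(np.corrcoef(a, b)[0, 1])

print(profile_similarity(human_geneA, mouse_geneA))  # close to 1: conserved
print(profile_similarity(human_geneB, mouse_geneB))  # low or negative: new profile
```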
“At the level of cell types, it happens fairly frequently that genes obtain new activity profiles. This means that ancestral genes, present in all mammals, become active in new cell types during evolution, potentially changing the properties of these cells,” says Dr Kevin Leiss, who — at the time of the studies — was a doctoral student in Prof. Kaessmann’s research group.
Among the genes showing activity profiles that differ between human and mouse — the most frequently used model organism in biomedical research — several are associated with neurodevelopmental disorders or childhood brain tumours, Prof. Pfister explains. He is a director at the Hopp Children’s Cancer Center Heidelberg, heads a research division at the German Cancer Research Center and is a consultant paediatric oncologist at Heidelberg University Hospital.
The results of the study could, as Prof. Pfister suggests, provide valuable guidance in the search for suitable model systems — beyond the mouse model — to further explore such diseases.
The research results were published in the journal Nature. Also participating in the studies — apart from the Heidelberg scientists — were researchers from Berlin as well as China, France, Hungary, and the United Kingdom. The European Research Council financed the research. The data are available in a public database.
Targeting the non-coding genome and temozolomide signature enables CRISPR-mediated glioma oncolysis
by I-Li Tan, Alexendar R. Perez, Rachel J. Lew, Xiaoyu Sun, Alisha Baldwin, Yong K. Zhu, Mihir M. Shah, Mitchel S. Berger, Jennifer A. Doudna, Christof Fellmann in Cell Reports
The gene-editing technology CRISPR shows early promise as a therapeutic strategy for the aggressive and difficult-to-treat brain cancer known as primary glioblastoma, according to findings of a new study from Gladstone Institutes.
Using a novel technique they’ve dubbed “cancer shredding,” the researchers programmed CRISPR to zero in on repeating DNA sequences present only in recurrent tumor cells — and then obliterate those cells by snipping away at them. Working with cell lines from a patient whose glioblastoma returned after prior treatments, the team used CRISPR to destroy the tumor cells while sparing healthy cells.
“Glioblastoma is the most common lethal brain cancer, and patients still don’t have any good treatment options,” says Christof Fellmann, PhD, who led the study at Gladstone. “Patients typically receive chemotherapy, radiation, and surgery, but most relapse in a matter of months. We wanted to find out if we could do something outside the box that could get around this problem of recurrence.”
Cancer treatments rarely kill all tumor cells. In glioblastoma, as with many other highly recurrent cancers, tumor cells that escape treatment develop multiple genetic adaptations, or mutations, that allow them to proliferate. Building from their earlier research, the Gladstone team surmised that these mutated cells have a unique genetic signature that could be targeted.
Using computational methods to analyze whole genomes of cancer cells, the team dove deep into the non-coding DNA to identify repetitive sequences that all the cells shared, even if they harbored different varieties of mutations. Then, armed with that data, they were able to guide CRISPR to the mutated cancerous cells and destroy them.
“We see CRISPR as a gateway to a new therapeutic approach that won’t be subject to the possibility of tumor cell escape,” Fellmann says. “Cancer shredding could hold potential not only for glioblastoma, but possibly for other hypermutated tumors.”
The findings, in Cell Reports, are available online. Much of the work was conducted in the lab of Gladstone Senior Investigator Jennifer Doudna, PhD, an author of the paper, who received the 2020 Nobel Prize in Chemistry for her co-discovery of the CRISPR-Cas9 gene editing technology. Also playing key roles in the study were Mitchel Berger, MD, a neurosurgeon and director of the Brain Tumor Center at UCSF, whose team helped secure patient-derived cell samples that bolstered the clinical relevance of the results, and Alexendar Perez, MD, PhD, a resident at UCSF who did much of the computational work.
Until very recently, CRISPR has been used mainly in the development of therapies or as a valued research tool, but not as a treatment modality in itself. That changed in mid-November when UK regulators approved the first CRISPR-based therapy, which is designed to cure sickle cell disease and beta thalassemia. In the US, the FDA is expected to issue a decision on the same therapeutic approach in early December.
The team behind the new Gladstone study say much work is needed to advance their promising findings into a therapy that’s ready to be tested in patients. Among the remaining challenges are determining how CRISPR should be delivered to patients with glioblastoma, and how to ensure no unintended off-target effects.
But despite the unanswered questions, first author I-Li Tan, PhD — who completed the study as a postdoctoral researcher in Doudna’s Gladstone lab and focused on brain cancer as a PhD student — says she feels hopeful about a disease that has vexed scientists for more than a decade.
“We understand so much today about glioblastoma and its biology, yet the treatment regimens haven’t improved,” Tan says. “Now we have a precise way to target the cells that are driving the cancer, and we hope this may one day lead to a cure.”
Phase I clinical trial of intracerebroventricular transplantation of allogeneic neural stem cells in people with progressive multiple sclerosis
by Claudia Ricciolini, Simonetta Sabatini, Giada Silveri, Cristina Spera, Daniel Stephenson, Giuseppe Stipa, Elettra Tinella, Michele Zarrelli, Chiara Zecca, Yendri Ventura, Angelo D’Alessandro, Luca Peruzzotti-Jametti, Stefano Pluchino, Angelo L. Vescovi in Cell Stem Cell
An international team has shown that the injection of a type of stem cell into the brains of patients living with progressive multiple sclerosis (MS) is safe, well tolerated and has a long-lasting effect that appears to protect the brain from further damage.
The study, led by scientists at the University of Cambridge, University of Milan Bicocca and Hospital Casa Sollievo della Sofferenza (Italy), is a step towards developing an advanced cell therapy treatment for progressive MS.
Over 2 million people live with MS worldwide, and while treatments exist that can reduce the severity and frequency of relapses, two-thirds of MS patients still transition into a debilitating secondary progressive phase of disease within 25–30 years of diagnosis, where disability grows steadily worse.
In MS, the body’s own immune system attacks and damages myelin, the protective sheath around nerve fibres, causing disruption to messages sent around the brain and spinal cord.
Key immune cells involved in this process are macrophages (literally ‘big eaters’), which ordinarily attack and rid the body of unwanted intruders. A particular type of macrophage known as a microglial cell is found throughout the brain and spinal cord. In progressive forms of MS, they attack the central nervous system (CNS), causing chronic inflammation and damage to nerve cells.
Recent advances have raised expectations that stem cell therapies might help ameliorate this damage. These involve the transplantation of stem cells, the body’s ‘master cells’, which can be programmed to develop into almost any type of cell within the body.
Previous work from the Cambridge team has shown in mice that skin cells re-programmed into brain stem cells, transplanted into the central nervous system, can help reduce inflammation and may be able to help repair damage caused by MS.
Now, in research published in Cell Stem Cell, scientists have completed a first-in-human, early-stage clinical trial that involved injecting neural stem cells directly into the brains of 15 patients with secondary MS recruited from two hospitals in Italy. The trial was conducted by teams at the University of Cambridge, Milan Bicocca and the Hospitals Casa Sollievo della Sofferenza and S. Maria Terni (IT) and Ente Ospedaliero Cantonale (Lugano, Switzerland) and the University of Colorado (USA).
The stem cells were derived from cells taken from brain tissue from a single, miscarried fetal donor. The Italian team had previously shown that it would be possible to produce a virtually limitless supply of these stem cells from a single donor — and in future it may be possible to derive these cells directly from the patient — helping to overcome practical problems associated with the use of allogeneic fetal tissue.
The team followed the patients over 12 months, during which time they observed no treatment-related deaths or serious adverse events. While some side-effects were observed, all were either temporary or reversible.
All the patients showed high levels of disability at the start of the trial — most required a wheelchair, for example — but during the 12-month follow-up period none showed any increase in disability or a worsening of symptoms. None of the patients reported symptoms suggesting a relapse, nor did their cognitive function worsen significantly during the study. Overall, say the researchers, this points to a substantial stability of the disease, without signs of progression, though the high levels of disability at the start of the trial make this difficult to confirm.
The researchers assessed a subgroup of patients for changes in the volume of brain tissue associated with disease progression. They found that the larger the dose of injected stem cells, the smaller the reduction in this brain volume over time. They speculate that this may be because the stem cell transplant dampened inflammation.
The team also looked for signs that the stem cells were having a neuroprotective effect — that is, protecting nerve cells from further damage. Their previous work showed how tweaking metabolism — how the body produces energy — can in turn reprogram microglia from ‘bad’ to ‘good’. In this new study, they looked at how the brain’s metabolism changes after the treatment. They measured changes in the fluid around the brain and in the blood over time and found certain signs that are linked to how the brain processes fatty acids. These signs were connected to how well the treatment works and how the disease develops. The higher the dose of stem cells, the greater the levels of fatty acids, which also persisted over the 12-month period.
Professor Stefano Pluchino from the University of Cambridge, who co-led the study, said: “We desperately need to develop new treatments for secondary progressive MS, and I am cautiously very excited about our findings, which are a step towards developing a cell therapy for treating MS.
“We recognise that our study has limitations — it was only a small study and there may have been confounding effects from the immunosuppressant drugs, for example — but the fact that our treatment was safe and that its effects lasted over the 12 months of the trial means that we can proceed to the next stage of clinical trials.”
Co-leader Professor Angelo Vescovi from the University of Milano-Bicocca said: “It has taken nearly three decades to translate the discovery of brain stem cells into this experimental therapeutic treatment. This study will add to the increasing excitement in this field and pave the way to broader efficacy studies, soon to come.”
Caitlin Astbury, Research Communications Manager at the MS Society, says: “This is a really exciting study which builds on previous research funded by us. These results show that special stem cells injected into the brain were safe and well-tolerated by people with secondary progressive MS. They also suggest this treatment approach might even stabilise disability progression. We’ve known for some time that this method has the potential to help protect the brain from progression in MS.
“This was a very small, early-stage study and we need further clinical trials to find out if this treatment has a beneficial effect on the condition. But this is an encouraging step towards a new way of treating some people with MS.”
Neuronal Population Encoding of Identity in Primate Prefrontal Cortex
by KK Sharma, MA Diltz, T Lincoln, ER Albuquerque, LM Romanski in The Journal of Neuroscience
Researchers have discovered that a part of the brain associated with working memory and multisensory integration may also play an important role in how the brain processes social cues. Previous research has shown that neurons in the ventrolateral prefrontal cortex (VLPFC) integrate faces and voices — but new research, in the Journal of Neuroscience, shows that neurons in the VLPFC play a role in processing both the identity of the “speaker” and the expression conveyed by facial gestures and vocalizations.
“We still don’t fully understand how facial and vocal information is combined and what information is processed by different brain regions,” said Lizabeth Romanski, PhD, associate professor of Neuroscience at the Del Monte Institute for Neuroscience at the University of Rochester and senior author of the study. “However, these findings confirm VLPFC as a critical node in the social communication network that processes facial expressions, vocalizations, and social cues.”
The VLPFC is an area of the brain that is enlarged in primates, including humans and macaques. In this study, the Romanski Lab showed rhesus macaques short videos of other macaques engaging in vocalizations/expressions that were friendly, aggressive, or neutral. They recorded the activity of more than 400 neurons in the VLPFC and found that individually, the cells did not exhibit strong categorical responses to the expressions or the identities of the macaques in the videos. However, when the researchers combined the neurons as a population, a machine learning model could be trained to decode the expression and identity in the videos based only on the patterns of neural activity, suggesting that the neurons were collectively responding to these variables. Overall, the activity of the population of VLPFC neurons was primarily dictated by the identity of the macaque in the video. These findings suggest that the VLPFC is a key brain region in the processing of social cues.
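The general idea behind population decoding can be sketched in a few lines (simulated firing rates and a simple nearest-centroid decoder; the study's actual data and machine learning model will differ): individual neurons carry only weak, mixed signals, yet the joint pattern across many neurons is readable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (not the study's analysis): simulate firing rates of 100
# neurons over 60 trials, 3 "identities", with weak per-neuron tuning
# buried in larger trial-to-trial noise.
n_neurons, n_trials = 100, 60
identities = np.repeat(np.arange(3), n_trials // 3)  # balanced labels
rng.shuffle(identities)
tuning = rng.normal(0, 0.3, (3, n_neurons))          # weak per-neuron tuning
rates = tuning[identities] + rng.normal(0, 1.0, (n_trials, n_neurons))

# Split into train/test trials
train, test = np.arange(0, 40), np.arange(40, n_trials)

# Nearest-centroid decoder: mean population pattern per identity
centroids = np.stack([rates[train][identities[train] == k].mean(axis=0)
                      for k in range(3)])
dists = np.linalg.norm(rates[test][:, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == identities[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}  (chance is about 0.33)")
```

Even though each neuron's tuning (std 0.3) is much weaker than its noise (std 1.0), pooling 100 neurons yields accuracy well above chance.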
“We used dynamic, information-rich stimuli in our study and the responses we saw from single neurons were very complex. Initially, it was difficult to make sense of the data,” said Keshov Sharma, PhD, lead author on the study. “It wasn’t until we studied how population activity correlated with the social information in our stimuli that we found a coherent structure. For us, it was like finally seeing a forest instead of a muddle of trees.” Sharma and Romanski hope their approach will encourage others to analyze population-level activity when studying how faces and voices are integrated in the brain.
Understanding how the prefrontal cortex processes auditory and visual information is a cornerstone of the Romanski lab. This process is necessary for recognizing objects by sight, as well as sound, and is required for effective communication. In previous research, the Romanski Lab identified the VLPFC as an area of the brain responsible for maintaining and integrating face and vocal information during working memory. This body of research points to the importance of this brain region within the larger circuit that underlies social communication.
“Knowing what features populations of neurons extract from face and vocal stimuli and how these features are typically integrated will help us to understand what may be altered in speech and communication disorders, including autism spectrum disorders, where multiple sensory stimuli may not combine optimally,” Romanski said.
Sound-encoded faces activate the left fusiform face area in the early blind
by Paula L. Plaza, Laurent Renier, Stephanie Rosemann, Anne G. De Volder, Josef P. Rauschecker in PLOS ONE
Using a specialized device that translates images into sound, Georgetown University Medical Center neuroscientists and colleagues showed that people who are blind recognized basic faces using the part of the brain known as the fusiform face area, a region that is crucial for the processing of faces in sighted people.
“It’s been known for some time that people who are blind can compensate for their loss of vision, to a certain extent, by using their other senses,” says Josef Rauschecker, Ph.D., D.Sc., professor in the Department of Neuroscience at Georgetown University and senior author of this study. “Our study tested the extent to which this plasticity, or compensation, between seeing and hearing exists by encoding basic visual patterns into auditory patterns with the aid of a technical device we refer to as a sensory substitution device. With the use of functional magnetic resonance imaging (fMRI), we can determine where in the brain this compensatory plasticity is taking place.”
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. Due to their importance for social behavior, many researchers believe that the neural mechanisms for face recognition are innate in primates or depend on early visual experience with faces.
“Our results from people who are blind imply that fusiform face area development does not depend on experience with actual visual faces but on exposure to the geometry of facial configurations, which can be conveyed by other sensory modalities,” Rauschecker adds.
Paula Plaza, Ph.D., one of the lead authors of the study, who is now at Universidad Andres Bello, Chile, says, “Our study demonstrates that the fusiform face area encodes the ‘concept’ of a face regardless of input channel, or the visual experience, which is an important discovery.”
Six people who are blind and 10 sighted people, who served as control subjects, went through three rounds of functional MRI scans to see what parts of the brain were being activated during the translations from image into sound. The scientists found that brain activation by sound in people who are blind was found primarily in the left fusiform face area while face processing in sighted people occurred mostly in the right fusiform face area.
“We believe the left/right difference between people who are and aren’t blind may have to do with how the left and right sides of the fusiform area process faces — either as connected patterns or as separate parts, which may be an important clue in helping us refine our sensory substitution device,” says Rauschecker, who is also co-director of the Center for Neuroengineering at Georgetown University.
Currently, with their device, people who are blind can recognize a basic ‘cartoon’ face (such as an emoji happy face) when it is transcribed into sound patterns. Recognizing faces via sounds was a time-intensive process that took many practice sessions. Each session started with getting people to recognize simple geometrical shapes, such as horizontal and vertical lines; complexity of the stimuli was then gradually increased, so the lines formed shapes, such as houses or faces, which then became even more complex (tall versus wide houses and happy faces versus sad faces).
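One common image-to-sound encoding scheme (used by devices such as “The vOICe”; the study's own device may differ in its details) scans the image left to right, mapping each column to a time slice and each row to a tone whose pitch rises toward the top of the image. A minimal sketch:

```python
import numpy as np

# Generic sensory-substitution sketch (one common scheme; not necessarily
# the study's device): scan a binary image left to right, mapping each
# column to a time slice and each row to a sine tone whose frequency
# rises toward the top of the image.
def image_to_soundscape(img, sr=8000, col_dur=0.05, f_lo=300.0, f_hi=3000.0):
    n_rows, n_cols = img.shape
    freqs = np.geomspace(f_hi, f_lo, n_rows)     # top row = highest pitch
    t = np.arange(int(sr * col_dur)) / sr        # samples per column
    slices = []
    for c in range(n_cols):
        # Sum one tone per active pixel in this column
        tone = sum(np.sin(2 * np.pi * freqs[r] * t)
                   for r in range(n_rows) if img[r, c])
        slices.append(np.asarray(tone) if np.ndim(tone) else np.zeros_like(t))
    return np.concatenate(slices)

# A tiny 'cartoon face': two eyes and a mouth on a 5x5 grid
face = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 0, 1, 0],
                 [0, 0, 0, 0, 0],
                 [1, 0, 0, 0, 1],
                 [0, 1, 1, 1, 0]])
sound = image_to_soundscape(face)
print(sound.shape)  # one short waveform segment per image column
```

Listeners then learn to hear the geometry: the two “eyes” arrive early in the sweep as a pair of high-pitched tones, the “mouth” later as lower ones.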
Ultimately, the scientists would like to use pictures of real faces and houses in combination with their device, but the researchers note that they would first have to greatly increase the resolution of the device.
“We would love to be able to find out whether it is possible for people who are blind to learn to recognize individuals from their pictures. This may need a lot more practice with our device but now that we’ve pinpointed the region of the brain where the translation is taking place, we may have a better handle on how to fine-tune our processes,” Rauschecker concludes.
Parametric information about eye movements is sent to the ears
by Stephanie N. Lovich, Cynthia D. King, David L. K. Murphy, Rachel E. Landrum, Christopher A. Shera, Jennifer M. Groh in Proceedings of the National Academy of Sciences
Scientists can now pinpoint where someone’s eyes are looking just by listening to their ears.
“You can actually estimate the movement of the eyes, the position of the target that the eyes are going to look at, just from recordings made with a microphone in the ear canal,” said Jennifer Groh, Ph.D., senior author of the new report, and a professor in the departments of psychology & neuroscience as well as neurobiology at Duke University.
In 2018, Groh’s team discovered that the ears make a subtle, imperceptible noise when the eyes move. In a new report, the Duke team now shows that these sounds can reveal where your eyes are looking.
It also works the other way around. Just by knowing where someone is looking, Groh and her team were able to predict what the waveform of the subtle ear sound would look like.
These sounds, Groh believes, may be caused when eye movements stimulate the brain to contract either middle ear muscles, which typically help dampen loud sounds, or the hair cells that help amplify quiet sounds.
The exact purpose of these ear squeaks is unclear, but Groh’s initial hunch is that it might help sharpen people’s perception.
“We think this is part of a system for allowing the brain to match up where sights and sounds are located, even though our eyes can move when our head and ears do not,” Groh said.
Understanding the relationship between subtle ear sounds and vision might lead to the development of new clinical tests for hearing.
“If each part of the ear contributes individual rules for the eardrum signal, then they could be used as a type of clinical tool to assess which part of the anatomy in the ear is malfunctioning,” said Stephanie Lovich, one of the lead authors of the paper and a graduate student in psychology & neuroscience at Duke.
Just as the eye’s pupils constrict or dilate like a camera’s aperture to adjust how much light gets in, the ears too have their own way to regulate hearing. Scientists long thought that these sound-regulating mechanisms only helped to amplify soft sounds or dampen loud ones. But in 2018, Groh and her team discovered that these same sound-regulating mechanisms were also activated by eye movements, suggesting that the brain informs the ears about the eye’s movements.
In their latest study, the research team followed up on their initial discovery and investigated whether the faint auditory signals contained detailed information about the eye movements.
To decode people’s ear sounds, Groh’s team at Duke and Professor Christopher Shera, Ph.D. from the University of Southern California, recruited 16 adults with unimpaired vision and hearing to Groh’s lab in Durham to take a fairly simple eye test.
Participants looked at a static green dot on a computer screen, then, without moving their heads, tracked the dot with their eyes as it disappeared and then reappeared either up, down, left, right, or diagonally from the starting point. This gave Groh’s team a wide range of auditory signals generated as the eyes moved horizontally, vertically, or diagonally.
An eye tracker recorded where participants’ pupils were darting to compare against the ear sounds, which were captured using a microphone-embedded pair of earbuds.
The research team analyzed the ear sounds and found unique signatures for different directions of movement. This enabled them to crack the ear sound’s code and calculate where people were looking just by scrutinizing a soundwave.
“Since a diagonal eye movement is just a horizontal component and vertical component, my labmate and co-author David Murphy realized you can take those two components and guess what they would be if you put them together,” Lovich said. “Then you can go in the opposite direction and look at an oscillation to predict that someone was looking 30 degrees to the left.”
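The superposition idea in the quote can be sketched numerically (the waveforms, frequencies, and gaze angles below are entirely hypothetical, not real ear-canal recordings): model the ear sound for a diagonal saccade as a weighted sum of a horizontal and a vertical template oscillation, then recover the two gaze components by least squares.

```python
import numpy as np

# Toy sketch of the decoding idea (hypothetical waveforms, not real data):
# the ear sound for a saccade is modeled as a weighted sum of a
# 'horizontal' and a 'vertical' basis oscillation; the weights are the
# gaze components in degrees.
t = np.linspace(0, 0.01, 200)                      # 10 ms window
h_basis = np.sin(2 * np.pi * 150 * t)              # horizontal template
v_basis = np.sin(2 * np.pi * 150 * t + np.pi / 2)  # vertical template (phase-shifted)

# Simulate a diagonal saccade: 30 deg left, 10 deg up, plus noise
rng = np.random.default_rng(1)
true_h, true_v = -30.0, 10.0
signal = true_h * h_basis + true_v * v_basis + rng.normal(0, 0.5, t.size)

# Least-squares decoding of the two components from the waveform
A = np.column_stack([h_basis, v_basis])
est_h, est_v = np.linalg.lstsq(A, signal, rcond=None)[0]
print(f"decoded gaze: {est_h:.1f} deg horizontal, {est_v:.1f} deg vertical")
```

Because the two templates are linearly independent, any diagonal movement decomposes uniquely into its horizontal and vertical parts, mirroring Lovich's description of going from components to the combined oscillation and back.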
Groh is now starting to examine whether these ear sounds play a role in perception.
One set of projects is focused on how eye-movement ear sounds may be different in people with hearing or vision loss.
Groh is also testing whether people who don’t have hearing or vision loss will generate ear signals that can predict how well they do on a sound localization task, like spotting where an ambulance is while driving, which relies on mapping auditory information onto a visual scene.
“Some folks have a really reproducible signal day-to-day, and you can measure it quickly,” Groh said. “You might expect those folks to be really good at a visual-auditory task compared to other folks, where it’s more variable.”
Caffeic acid recarbonization: A green chemistry, sustainable carbon nano material platform to intervene in neurodegeneration induced by emerging contaminants
by Jyotish Kumar, Sofia A. Delgado, Hemen Sarma, Mahesh Narayan in Environmental Research
Neurodegenerative disorders, including Alzheimer’s, Parkinson’s and Huntington’s, affect millions of people in the United States, and the cost of caring for people who live with these conditions adds up to hundreds of billions of dollars each year.
Now, researchers from The University of Texas at El Paso may have found a solution in used coffee grounds — a material that is discarded from homes and businesses around the world every day.
A team led by Jyotish Kumar, a doctoral student in the Department of Chemistry and Biochemistry, and overseen by Mahesh Narayan, Ph.D., a professor and Fellow of the Royal Society of Chemistry in the same department, found that caffeic-acid-based Carbon Quantum Dots (CACQDs), which can be derived from spent coffee grounds, have the potential to protect brain cells from the damage caused by several neurodegenerative diseases — if the condition is triggered by factors such as obesity, age and exposure to pesticides and other toxic environmental chemicals.
“Caffeic-acid based Carbon Quantum Dots have the potential to be transformative in the treatment of neurodegenerative disorders,” Kumar said. “This is because none of the current treatments resolve the diseases; they only help manage the symptoms. Our aim is to find a cure by addressing the atomic and molecular underpinnings that drive these conditions.”
Neurodegenerative diseases are primarily characterized by the loss of neurons or brain cells. They impair a person’s ability to perform basic functions such as movement and speech, as well as bladder and bowel control and cognitive abilities.
The disorders, when they are in their early stages and are caused by lifestyle or environmental factors, share several traits. These include elevated levels of free radicals — harmful molecules that are known to contribute to other diseases such as cancer, heart disease and vision loss — in the brain, and the aggregation of fragments of amyloid-forming proteins that can lead to plaques or fibrils in the brain.
Kumar and his colleagues found that CACQDs were neuroprotective across test tube experiments, cell lines and other models of Parkinson’s disease when the disorder was caused by a pesticide called paraquat. The CACQDs, the team observed, were able to remove free radicals or prevent them from causing damage and inhibited the aggregation of amyloid protein fragments without causing any significant side effects.
The team hypothesizes that in humans, in the very early stage of a condition such as Alzheimer’s or Parkinson’s, a treatment based on CACQDs can be effective in preventing full-on disease.
“It is critical to address these disorders before they reach the clinical stage,” Narayan said. “At that point, it is likely too late. Any current treatments that can address advanced symptoms of neurodegenerative disease are simply beyond the means of most people. Our aim is to come up with a solution that can prevent most cases of these conditions at a cost that is manageable for as many patients as possible.”
Caffeic acid belongs to a family of compounds called polyphenols, which are plant-based compounds known for their antioxidant, or free radical-scavenging properties. Caffeic acid is unique because it can penetrate the blood-brain barrier and is thus able to exert its effects upon the cells inside the brain, Narayan said.
The process the team uses to extract CACQDs from used coffee grounds is considered “green chemistry,” which means it is environmentally friendly. In their lab, the team “cooks” samples of coffee grounds at 200 degrees for four hours to reorient the caffeic acid’s carbon structure and form CACQDs. The sheer abundance of coffee grounds is what makes the process both economical and sustainable, Narayan said.
Facemap: a framework for modeling neural activity based on orofacial tracking
by Atika Syeda, Lin Zhong, Renee Tung, Will Long, Marius Pachitariu, Carsen Stringer in Nature Neuroscience
Mice are always in motion. Even if there’s no external motivation for their actions — like a cat lurking a few feet away — mice are constantly sweeping their whiskers back and forth, sniffing around their environment and grooming themselves.
These spontaneous actions light up neurons across many different regions of the brain, providing a neural representation of what the animal is doing moment-by-moment across the brain. But how the brain uses these persistent, widespread signals remains a mystery.
Now, scientists at HHMI’s Janelia Research Campus have developed a tool that could bring researchers one step closer to understanding these enigmatic brain-wide signals. The tool, known as Facemap, uses deep neural networks to relate information about a mouse’s eye, whisker, nose, and mouth movements to neural activity in the brain.
“The goal is: What are those behaviors that are being represented in those brain regions? And, if a lot of that information is in the facial movements, then how can we track that better?” says Atika Syeda, a graduate student in the Stringer Lab and lead author of a new paper describing the research.
The idea to create a better tool for understanding brain-wide signals grew out of previous research from Janelia Group Leaders Carsen Stringer and Marius Pachitariu. They found that activity in many different areas across a mouse’s brain — long thought to be background noise — is in fact driven by these spontaneous behaviors. Still unclear, however, was how the brain uses this information.
“The first step in really answering that question is understanding what are the movements that are driving this activity, and what exactly is represented in these brain areas,” Stringer says.
To do this, researchers need to be able to track and quantify movements and correlate them with brain activity. But the tools enabling scientists to do such experiments weren’t optimized for use in mice, so researchers haven’t been able to get the information they need.
“All of these different brain areas are driven by these movements, which is why we think it is really important to get a better handle on what these movements actually are because our previous techniques really couldn’t tell us what they were,” Stringer says.
To address this shortcoming, the team looked at 2,400 video frames and labeled distinct points on the mouse face corresponding to different facial movements associated with spontaneous behaviors. They homed in on 13 key points on the face that represent individual behaviors, like whisking, grooming, and licking.
The team first developed a neural network-based model that could identify these key points in videos of mouse faces collected in the lab under various experimental setups.
They then developed another deep neural network-based model to correlate this key facial point data representing mouse movement to neural activity, allowing them to see how a mouse’s spontaneous behaviors drive neural activity in a particular brain region.
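At its core, relating keypoint data to neural activity is a regression problem: given per-frame keypoint coordinates, predict per-frame activity across recorded neurons. Facemap's actual model is a deep network; the sketch below illustrates the idea with plain ridge regression on synthetic data, and all array shapes and names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1000 video frames, 13 tracked keypoints (x and y
# coordinates each), and 50 recorded neurons. Real inputs would come from
# Facemap's keypoint tracking aligned frame-by-frame with a recording.
n_frames, n_keypoints, n_neurons = 1000, 13, 50
keypoints = rng.standard_normal((n_frames, n_keypoints * 2))
true_weights = rng.standard_normal((n_keypoints * 2, n_neurons))
neural = keypoints @ true_weights + 0.1 * rng.standard_normal((n_frames, n_neurons))

# Ridge regression, closed form: W = (X^T X + lam * I)^-1 X^T Y
lam = 1.0
XtX = keypoints.T @ keypoints + lam * np.eye(n_keypoints * 2)
weights = np.linalg.solve(XtX, keypoints.T @ neural)

# Fraction of neural variance explained by the keypoint predictors
# (a real evaluation would use held-out frames, as in the paper).
pred = keypoints @ weights
r2 = 1 - ((neural - pred) ** 2).sum() / ((neural - neural.mean(0)) ** 2).sum()
print(f"variance explained: {r2:.2f}")
```

Comparisons like "twice as much neural activity predicted" come from exactly this kind of variance-explained metric, computed for each method on the same recordings.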
Facemap is more accurate and faster than previous methods used to track orofacial movements and behaviors in mice. The tool is also specifically designed to track mouse faces and has been pretrained to track many different mouse movements. These factors make Facemap a particularly effective tool: The model can predict twice as much neural activity in mice compared to prior methods.
In earlier work, the team found that spontaneous behaviors activated neurons in the visual cortex, the brain region that processes visual information from the eye. Using Facemap, they discovered that these neuronal activity clusters were more spread out across this region of the brain than previously thought.
Facemap is freely available and easy to use. Hundreds of researchers around the world have already downloaded the tool since it was released last year.
“This is something that if anyone wanted to get started, they could download Facemap, run their videos, and get their results on the same day,” Syeda says. “It just makes research, in general, much easier.”
Engram cell connectivity as a mechanism for information encoding and memory function
by Clara Ortega-de San Luis, Maurizio Pezzoli, Esteban Urrieta, Tomás J. Ryan in Current Biology
What is the mechanism that allows our brains to incorporate new information about the world, and form memories? New work by a team of neuroscientists led by Dr Tomás Ryan from Trinity College Dublin shows that learning occurs through the continuous formation of new connectivity patterns between specific engram cells in different regions of the brain.
Whether on purpose, incidentally, or simply by accident, we are constantly learning, and so our brains are constantly changing. When we navigate the world, interact with each other, or consume media content, our brain is taking in information and creating new memories.
The next time we walk down the street, meet our friends, or come across something that reminds us of the last podcast we listened to, we will quickly re-engage that memory information somewhere in our brain. But how do these experiences modify our neurons to allow us to form these new memories?
Our brains are organs composed of dynamic networks of cells, always in a state of flux due to growing up, aging, degeneration, regeneration, everyday noise, and learning. The challenge for scientists is to identify the “difference that makes a difference” for forming a memory — the change in a brain that stores a memory is referred to as an ‘engram’, which retains information for later use.
This newly published study aimed to understand how information may be stored as engrams in the brain.
Dr Clara Ortega-de San Luis, Postdoctoral Research Fellow in the Ryan Lab and lead author of the article published in Current Biology, said:
“Memory engram cells are groups of brain cells that, activated by specific experiences, change themselves to incorporate and thereby hold information in our brain. Reactivation of these ‘building blocks’ of memories triggers the recall of the specific experiences associated with them. The question is, how do engrams store meaningful information about the world?”
To identify and study the changes that engrams undergo that allow us to encode a memory, the team of researchers studied a form of learning in which two experiences that are similar to each other become linked by the nature of their content.
The researchers used a paradigm in which animals learned to identify different contexts and form associations between them. Crucially, using genetic techniques, the team labelled two different populations of engram cells in the brain for two discrete memories, and then monitored how learning manifested in the formation of new connections between those engram cells.
Then, using optogenetics, which allows brain cell activity to be controlled with light, they further demonstrated that these newly formed connections were required for the learning to occur. In doing so, they identified a molecular mechanism, mediated by a specific protein located in the synapse, that is involved in regulating the connectivity between engram cells.
This study provides direct evidence that changes in synaptic wiring between engram cells are a likely mechanism for memory storage in the brain.
Commenting on the study, Dr Ryan, Associate Professor in Trinity’s School of Biochemistry and Immunology, Trinity Biomedical Sciences Institute, and the Trinity College Institute of Neuroscience, said:
“Understanding the cellular mechanisms that allow learning to occur helps us to comprehend not only how we form new memories or modify pre-existing ones, but also advances our knowledge of how the brain works and of the mechanisms it needs to process thoughts and information.
“In 21st century neuroscience, many of us like to think memories are being stored in engram cells, or their sub-components. This study argues that rather than looking for information within or at cells, we should search for information between cells, and that learning may work by altering the wiring diagram of the brain — less like a computer and more like a developing sculpture.
“In other words, the engram is not in the cell; the cell is in the engram.”
Lateral hypothalamic proenkephalin neurons drive threat-induced overeating associated with a negative emotional state
by In-Jee You, Yeeun Bae, Alec R. Beck, Sora Shin in Nature Communications
If you’ve had a near miss accident in your car or suffered the intimidation of a menacing person, you’ve probably felt it — a psychological reaction to a threat called a fight or flight response. Your heart rate climbs, anxiety washes over you, you might shake or sweat.
But hours after that stress passes, you may feel another response — a powerful desire for comfort food, that highly processed, high-fat stuff you know isn’t good for you.
It can relieve stress and tension and provide a sense of control.
Emotional eating following a stress-triggering interaction is familiar to many of us, and to scientists as well.
But how a threat signals your brain to want comfort food has been unknown.
Now, a Virginia Tech scientist has pinpointed a molecule found in a region of the brain called the hypothalamus that is connected to changes in the brain that lead to emotional overeating.
Sora Shin, assistant professor at the Fralin Biomedical Research Institute at VTC, and her research team described the discovery in a paper published in Nature Communications.
“We don’t always eat because we are hungry and we have certain physical needs,” said Shin, who is also an assistant professor in the Department of Human Nutrition, Foods, and Exercise in Virginia Tech’s College of Agriculture and Life Sciences.
“Whenever we get stressed or feel some threat, then it can also trigger our eating motivation. We think this molecule is the culprit.”
Shin and her research team began their study by investigating a small molecule, Proenkephalin.
This molecule is common in multiple parts of the brain, but little research had examined its role in the hypothalamus.
Shin suspected it played a role in stress and eating because the hypothalamus is a center for regulating eating behavior.
The lab exposed mice to the odor of cat feces. The odor of a natural predator triggered a threat response in the mice, and 24 hours later the mice exhibited a negative emotional state and overeating behavior, and neurons in their brains showed heightened sensitivity to the consumption of high-fat foods.
To confirm the role of the molecule in stress-induced eating, the researchers artificially activated the same neurons without the predator scent, using light to stimulate a genetically encoded light-sensitive molecule expressed in the neurons’ membranes, and saw a similar response.
In addition, when they exposed the mice to the cat odor and quieted the reaction of the neurons expressing that molecule with the same technique, the mice showed no negative emotional state and didn’t overeat.
“So something about this molecule itself is very critical to inducing overconsumption after the threat,” Shin said.
The discovery points toward a possible target for therapy to alleviate emotionally triggered eating.
“We have much more to learn about this molecule,” Shin said, “but we found its location and it could be a good starting point.”
Subscribe to Paradigm!
Medium, Twitter, Telegram, Telegram Chat, LinkedIn, and Reddit.
Main sources
Research articles