NT/ Changing the connection between the hemispheres affects speech perception

Paradigm
Published in Paradigm · Feb 19, 2021

Neuroscience biweekly vol. 26, 8th February — 19th February

TL;DR

Neuroscience market

The global neuroscience market was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.

Latest news and research

Selective modulation of interhemispheric connectivity by transcranial alternating current stimulation influences binaural integration

by Basil C. Preisig, Lars Riecke, Matthias J. Sjerps, Anne Kösem, Benjamin R. Kop, Bob Bramson, Peter Hagoort, Alexis Hervais-Adelman in Proceedings of the National Academy of Sciences

When we listen to speech sounds, our brain needs to combine information from both hemispheres. How does the brain integrate acoustic information from remote areas? In a neuroimaging study, a team of researchers led by the Max Planck Institute of Psycholinguistics, the Donders Institute and the University of Zurich applied electrical stimulation to participants’ brains during a listening task. The stimulation affected the connection between the two hemispheres, which in turn changed participants’ listening behaviour.

When we listen to speech sounds, the information that enters our left and right ear is not exactly the same. This may be because acoustic information reaches one ear before the other, or because the sound is perceived as louder by one of the ears. Information about speech sounds also reaches different parts of our brain, and the two hemispheres are specialised in processing different types of acoustic information. But how does the brain integrate auditory information from different areas?

To investigate this question, lead researcher Basil Preisig from the University of Zurich collaborated with an international team of scientists. In an earlier study, the team discovered that the brain integrates information about speech sounds by ‘balancing’ the rhythm of gamma waves across the hemispheres — a process called ‘oscillatory synchronisation’. Preisig and his colleagues also found that they could influence the integration of speech sounds by changing the balancing process between the hemispheres. However, it was still unclear where in the brain this process occurred.

Did you hear ‘ga’ or ‘da’?

The researchers decided to apply electric brain stimulation (high-density transcranial alternating current stimulation, or HD-TACS) to 28 healthy volunteers while their brains were being scanned (with fMRI) at the Donders Centre for Cognitive Neuroimaging in Nijmegen. They created a syllable that was somewhere in between ‘ga’ and ‘da’, and played this ambiguous syllable to the right ear of the participants. At the same time, the disambiguating information was played to the left ear. Participants were asked to indicate whether they heard ‘ga’ or ‘da’ by pressing a button. Would changing the connection between the two hemispheres also change the way the participants integrated information played to the left and right ear?

The scientists disrupted the ‘balance’ of gamma waves between the two hemispheres, which in turn affected what the participants reported to hear (‘ga’ or ‘da’).

(Top) Stimulation electrodes were centered over CP6 (right hemisphere) and CP5 (left hemisphere). (Middle) The interhemispheric phase synchrony was manipulated using 40 Hz TACS with an interhemispheric phase lag of 0° (TACS 0°) or 180° (dotted line, TACS 180°). The colors represent the polarity (positive = red; negative = blue) of the current for the time stamp highlighted by the dotted line. (Bottom) Simulation of the electric field strength induced by bihemispheric TACS in a template brain. LH: Left hemisphere; RH: Right hemisphere.
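The stimulation paradigm in the caption above can be pictured with a few lines of NumPy. This is a minimal sketch of the phase relationship only, not the study's stimulation code: two 40 Hz currents with an interhemispheric phase lag of 0° (in-phase) or 180° (anti-phase).

```python
import numpy as np

def tacs_waveforms(freq_hz=40.0, phase_lag_deg=180.0,
                   duration_s=0.1, fs=10_000):
    """Generate left/right-hemisphere TACS currents (arbitrary units).

    A 0-degree lag means the two sites oscillate in phase; a 180-degree
    lag means one site's current peaks while the other's troughs.
    """
    t = np.arange(0, duration_s, 1.0 / fs)
    left = np.sin(2 * np.pi * freq_hz * t)
    right = np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_lag_deg))
    return t, left, right

t, left, right = tacs_waveforms(phase_lag_deg=180.0)
# With a 180-degree lag the two currents are mirror images: left + right ≈ 0.
print(np.allclose(left + right, 0, atol=1e-6))  # True
```

With `phase_lag_deg=0.0` the two waveforms are identical, which is the study's in-phase (TACS 0°) condition.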

Phantom perception

“This is the first demonstration in the auditory domain that interhemispheric connectivity is important for the integration of speech sound information,” says Preisig. “This work paves the way for investigating other sensory modalities and more complex auditory stimulation.” “These results give us valuable insights into how the brain’s hemispheres are coordinated, and how we may use experimental techniques to manipulate this,” adds senior author Alexis Hervais-Adelman.

The findings, published in PNAS, may also have clinical implications. “We know that disturbances of interhemispheric connectivity occur in auditory ‘phantom’ perceptions, such as tinnitus and auditory verbal hallucinations,” Preisig explains. “Therefore, stimulating the two hemispheres with (HD-)TACS may offer therapeutic benefits. I will follow up on this research by applying TACS in patients with hearing loss and tinnitus, to improve our understanding of neural attention control and to enhance speech comprehension for this group.”

(A) Whole-brain analysis of task-evoked activity showing BOLD signal changes associated with auditory speech perception, either with or without TACS stimulation. (B) BOLD modulation induced by TACS 180° (Δ TACS — sham) was significantly larger than the modulation induced by TACS 0° (Δ TACS — sham). (C) The influence of TACS on local brain activity. In the center, overview of the ROIs from all participants that were used for the ROI analysis and later connectivity analyses. ROIs comprised the left and the right HG (in yellow) and the left and right pSTS (in black). In the periphery, participants’ mean activation within the ROI is shown for each stimulation condition (TACS 0°, TACS 180°) relative to sham. Dots represent the data points of single participants. Bars and error bars represent mean ± SEM across participants. Our results indicate that the BOLD signal modulation induced by TACS 180° (Δ TACS — sham) is larger than the modulation induced by TACS 0° (Δ TACS — sham).

Morality is in the eye of the beholder: the neurocognitive basis of the “anomalous‐is‐bad” stereotype

by Clifford I. Workman, Stacey Humphries, Franziska Hartung, Geoffrey K. Aguirre, Joseph W. Kable, Anjan Chatterjee in Annals of the New York Academy of Sciences

The “scarred villain” is one of the oldest tropes in film and literature, from Scar in “The Lion King” to Star Wars’ Darth Vader and the Joker in “The Dark Knight.” The trope is likely rooted in a long-evolved human bias against facial anomalies — atypical features such as growths, swelling, facial paralysis, and scars. A new brain-and-behavior study from researchers in the Perelman School of Medicine at the University of Pennsylvania illuminates this bias on multiple levels.

The researchers used surveys, social simulations, and functional MRI (fMRI) studies to examine hundreds of participants’ responses and attitudes towards attractive, average, and anomalous faces. The findings clarify how the “anomalous-is-bad” stereotype manifests, and implicate a brain region called the amygdala as one of the likely mediators of this stereotype.

“Understanding the psychology of the ‘anomalous-is-bad’ stereotype can help, for example, in the design of interventions to educate the public about the social burdens shouldered by people who look different,” said lead author Clifford Workman, PhD, a postdoctoral researcher in the Penn Center for Neuroaesthetics. The center is led by Anjan Chatterjee, MD, a professor of Neurology at Penn Medicine, who was senior author of the study.

Bias against people with facial disfigurements has been demonstrated in various prior studies. Researchers broadly assume that this bias reflects ancient adaptive traits that evolved to promote healthy mate selection, for example, and to steer us clear of people with potentially communicable diseases. Regardless of the cause, for many people, their facial anomalies make them unjust targets of discrimination.

In their study, Workman and colleagues investigated how this bias manifests at different levels, from expressed attitudes towards faces, to actual behavior during simulated social interactions, and even down to brain responses when viewing faces.

In one part of the study, the researchers showed a set of faces that were either average-looking, attractive, or anomalous to 403 participants from an online panel, and asked them to rate the depicted people on various measures. The researchers found that, compared to more attractive faces, participants considered anomalous faces less trustworthy, less content, and more anxious, on average. The anomalous faces also made the participants feel less happy. Participants also acknowledged harboring “explicit bias” reflected in negative expectations about people with anomalous faces as a group.

In the other part of the study, Workman and colleagues examined moral attitudes and dispositions, behavior during simulated social interactions, and fMRI-measured brain responses for 27 participants who viewed similar sets of faces.

Here again there was some evidence of the anomalous-is-bad habit of thinking, though it was not clear that this translated into mistreatment of people with anomalous faces. For example, in a simulated donation game measuring pro-sociality — the willingness to be positive and helpful towards another — the participants were not significantly less pro-social towards anomalous-looking people. However, participants in the highest tier of socioeconomic status, compared to the others, were significantly less pro-social towards anomalous-looking people.

On fMRI scans, brain regions called the amygdala and the fusiform gyri showed significant neural responses specifically to anomalous faces. Activity in a portion of the left amygdala, which correlated with less pro-sociality towards anomalous faces, also seemed related to participants’ beliefs about justice in the world and their degree of empathic concern.

“We hypothesize that the left amygdala integrates face perception with moral emotions and social values to guide behavior, such that weaker emotional empathy, and a stronger belief that the world is just, both facilitate dehumanizing people with facial anomalies,” Chatterjee said.

Analyzing such responses is inherently challenging, because they involve a mix of subjective perceptions, such as the “visual salience,” or relative importance, of a face, and the “emotional arousal” elicited by seeing the face. To inform future research, as part of the study, the team used the fMRI data to clarify which brain regions are associated with these distinct aspects of the experience of seeing faces.

A visual overview of the research design. This overview delineates between the two studies reported here; the levels of organization they investigated; the measures that were examined, including fMRI contrasts; and where the corresponding results — figures and tables, specifically — are located. The inset panel contains an overview of the fMRI masking procedure. Red boxes distinguish fMRI analyses used for exclusive masking, whereas green boxes signify inclusive masking.

Brain health across the entire glycaemic spectrum: the UK Biobank

by Victoria Garfield, Aliki‐Eleni Farmaki, Sophie V. Eastwood, Rohini Mathur, Christopher T. Rentsch, Krishnan Bhaskaran, Liam Smeeth, Nish Chaturvedi in Diabetes, Obesity and Metabolism

For the study, researchers analysed data from the UK Biobank of 500,000 people aged 58 years on average, and found that people with higher than normal blood sugar levels were 42% more likely to experience cognitive decline over an average of four years, and were 54% more likely to develop vascular dementia over an average of eight years (although absolute rates of both cognitive decline and dementia were low).

The associations remained true after other influential factors had been taken into account — including age, deprivation, smoking, BMI and whether or not participants had cardiovascular disease.

People with prediabetes have blood sugar levels that are higher than usual, but not high enough to be diagnosed with type 2 diabetes. It means they are more at risk of developing diabetes. There are an estimated five to seven million people with prediabetes in the UK.

Lead author Dr Victoria Garfield (UCL Institute of Cardiovascular Science and the UCL MRC Unit for Lifelong Health & Ageing) said: “Our research shows a possible link between higher blood sugar levels — a state often described as ‘prediabetes’ — and higher risks of cognitive decline and vascular dementia. As an observational study, it cannot prove higher blood sugar levels cause worsening brain health. However, we believe there is a potential connection that needs to be investigated further.

“Previous research has found a link between poorer cognitive outcomes and diabetes but our study is the first to investigate how having blood sugar levels that are relatively high — but do not yet constitute diabetes — may affect our brain health.”

In the study, researchers investigated how different blood sugar levels, or glycaemic states, were associated with performance in cognitive tests over time, dementia diagnoses, and brain structure measured by MRI scans of the brain. Each of these measures was limited to smaller subsets of the Biobank sample (for instance, only 18,809 participants had follow-up cognitive tests).

At recruitment all of the UK Biobank participants underwent an HbA1c test, which determines average blood sugar levels over the past two to three months. Participants were divided into five groups on the basis of the results — “low-normal” level of blood sugar, normoglycaemia (having a normal concentration of sugar in the blood), prediabetes, undiagnosed diabetes and diabetes. A result between 42–48 mmol/mol (6.0–6.5%) was classified as prediabetes.
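The grouping described above can be sketched as a simple classification function. Note that only the prediabetes band (42–48 mmol/mol) is stated in the article; the other cut-offs used here (35 and 48 mmol/mol) are assumed standard clinical values, not taken from the study.

```python
def classify_hba1c(hba1c_mmol_mol, diagnosed_diabetes=False):
    """Sketch of the five glycaemic groups described in the article.

    Only the prediabetes band (42-48 mmol/mol, 6.0-6.5%) comes from
    the text; the 35 and 48 mmol/mol cut-offs are assumptions based
    on common clinical thresholds.
    """
    if diagnosed_diabetes:
        return "diabetes"
    if hba1c_mmol_mol < 35:
        return "low-normal"
    if hba1c_mmol_mol < 42:
        return "normoglycaemia"
    if hba1c_mmol_mol < 48:
        return "prediabetes"
    return "undiagnosed diabetes"

print(classify_hba1c(45))  # prediabetes
```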

The researchers used data from repeated assessments of visual memory to determine whether participants had cognitive decline or not. Though absolute rates of cognitive decline were low, people with prediabetes and diabetes had a similarly higher likelihood of cognitive decline — 42% and 39% respectively.

Looking at dementia diagnoses, researchers found that prediabetes was associated with a higher likelihood of vascular dementia, a common form of dementia caused by reduced blood flow to the brain, but not Alzheimer’s disease. People with diabetes, meanwhile, were three times more likely to develop vascular dementia than people whose blood sugar levels were classified as normal, and more likely to develop Alzheimer’s disease.

Senior author Professor Nishi Chaturvedi (UCL MRC Unit for Lifelong Health & Ageing) said: “In this relatively young age group, the risks of cognitive decline and of dementia are very low; the excess risks we observe in relation to elevated blood sugar only modestly increase the absolute rates of ill health. Seeing whether these effects persist as people get older, and where absolute rates of disease get higher, will be important.

“Our findings also need to be replicated using other datasets. If they are confirmed, they open up questions about the potential benefits of screening for diabetes in the general population and whether we should be intervening earlier.”

Among 35,418 participants of the UK Biobank study who underwent MRI brain scans, researchers found that prediabetes was weakly associated with a smaller hippocampus and more strongly associated with lesions on the brain (white matter hyperintensities, WMHs) — both markers of age-related cognitive impairment.

The researchers said that some of these differences could be explained by elevated blood pressure, as those participants taking antihypertensive medication were likely to have more WMHs and smaller hippocampal volume. Rather than the treatment having an adverse effect on the brain, the researchers said use of such medication might be an indicator of earlier untreated high blood pressure.

People with prediabetes can reduce their risk of developing type 2 diabetes by eating a healthy, balanced diet, being more active, and staying at a healthy weight.

Food-seeking behavior is mediated by Fos-expressing neuronal ensembles formed at first learning in rats

by Richard Quintana-Feliciano, Christina Gobin, Louisa Kane, Bo Sortman, Samantha Rakela, Ariana Genovese, Brendan Tunstall, Daniele Caprioli, Sergio Iniguez, Brandon L Warren in eneuro

Science is a step closer to a new response to obesity, thanks in part to a study conducted by a team that included Sergio Iñiguez, Ph.D., associate professor of psychology at The University of Texas at El Paso.

The 10-member team led by Brandon Warren, Ph.D., assistant professor of pharmacodynamics at the University of Florida, made discoveries about a specific area of the brain tied to recollection and the desire to seek and consume food. It could lead to a way to inhibit the desire to overeat.

Iñiguez, who directs UTEP’s Iñiguez Behavioral Neuroscience Lab and helped design novel experimental techniques for the research, said that people tend to overeat when exposed to cues or environments that remind them of treats, which is one reason why people opt for dessert even after a filling meal. The study showed that neurons in a specific part of the brain control the link between the cue (seeing the dessert) and the action (ordering the dessert). Iñiguez and team found that animal subjects consumed fewer treats when they regulated that region of the animal’s brain.

The techniques and the data eventually could help overcome some issues linked to obesity such as stroke, Type 2 diabetes, high blood pressure, high levels of bad cholesterol, and coronary heart disease.

“This is a big discovery because we now have experimental tools that allow us to turn off neurons while the subjects engage in a specific behavior,” Iñiguez said. “This research shows that a specific part of the prefrontal cortex of the brain is important for the initial stages of learning to seek food.”

Posture, Gait, Quality of Life, and Hearing with a Vestibular Implant

by Margaret R. Chow, Andrianna I. Ayiotis, Desi P. Schoo, Yoav Gimmon, Kelly E. Lane, Brian J. Morris, Mehdi A. Rahman, Nicolas S. Valentin, Peter J. Boutros, Stephen P. Bowditch, Bryan K. Ward, Daniel Q. Sun, Carolina Treviño Guajardo, Michael C. Schubert, John P. Carey, Charles C. Della Santina in New England Journal of Medicine

Getting around without the need to concentrate on every step is something most of us can take for granted because our inner ears drive reflexes that make maintaining balance automatic. However, for about 1.8 million adults worldwide with bilateral vestibular hypofunction (BVH) — loss of the inner ears’ sense of balance — walking requires constant attention to avoid a fall. Now, Johns Hopkins Medicine researchers have shown that they can facilitate walking, relieve dizziness and improve quality of life in patients with BVH by surgically implanting a stimulator that electrically bypasses malfunctioning areas of the inner ear and partially restores the sensation of balance.

To maintain balance while moving through the world around us, our brains receive and process data from multiple sensory systems, including vision, proprioception (muscles and joints) and vestibular sensation from the inner ears. People with BVH have difficulty keeping their eyes, head and body steady. Head movements make their vision jump and blur, and walking requires conscious effort. Forced to deal with this mental distraction, individuals with BVH suffer a more than thirtyfold increase in fall risk and the social stigma of appearing to walk like someone who’s intoxicated.

Current therapy for BVH is limited to vestibular rehabilitation exercises. Doctors advise their patients with BVH to avoid medications that damage the inner ear (ototoxic drugs) or suppress brain function (sedatives), and caution them to steer clear of activities that might endanger them or others, such as driving, swimming and walking in poorly lit areas.

“Although about 20 individuals had been implanted elsewhere with devices used to stimulate the vestibular nerve in a laboratory setting, participants in this trial are true pioneers — the first to use a vestibular implant as a long-term, 24-hour-per-day sensory restoration treatment,” says senior study author Charley Della Santina, M.D., Ph.D., professor of otolaryngology-head and neck surgery and biomedical engineering at the Johns Hopkins University School of Medicine and director of the Johns Hopkins Vestibular NeuroEngineering Laboratory, which conducted the study.

To achieve this milestone, Della Santina and his colleagues used basic research and engineering technology to modify a cochlear implant — a device that improves hearing loss by electrically stimulating the inner ear’s cochlear nerve — to instead activate the nearby vestibular nerve in response to signals from a motion sensor on the patient’s head. Electrical pulse strength and timing convey information about the speed and direction of the patient’s head motion which, in turn, drives head and eye reflexes that help maintain clearer vision during head movement and reduce the need to exert conscious effort to avoid falls.
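The encoding principle described above — head motion modulating pulse rate around a resting baseline — can be sketched as a simple sigmoid mapping. All names and parameter values here are hypothetical illustrations, not the device's actual encoding.

```python
import math

def pulse_rate_hz(head_velocity_dps, min_hz=20.0, max_hz=300.0, slope=0.01):
    """Hypothetical sigmoid mapping from head angular velocity
    (degrees/s) to vestibular-nerve pulse rate (pulses/s).

    Illustrates the encoding idea only: rotation in one direction
    pushes the rate above its resting midpoint, rotation in the
    other direction pushes it below. Parameter values are made up.
    """
    s = 1.0 / (1.0 + math.exp(-slope * head_velocity_dps))
    return min_hz + (max_hz - min_hz) * s

print(pulse_rate_hz(0.0))  # 160.0 — resting midpoint, no head motion
```

Faster rotation in one direction drives the rate toward `max_hz`; rotation the other way drives it toward `min_hz`, which is how both speed and direction can be conveyed on a single channel.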

In their study, the Johns Hopkins Medicine researchers evaluated eight patients with BVH who received the vestibular implant, assessing changes in postural stability, walking, hearing and patient-reported outcomes, including dizziness and quality of life. Assessments were conducted before implantation surgery (the baseline measure) and at six months and one year afterward. Median scores improved for the group on four of the five posture and gait metrics, and on three of the four patient-reported outcomes.

All eight patients experienced some hearing loss in the implanted ear. Five maintained hearing in the implanted ear sufficient to use a telephone without a hearing aid, and three experienced greater hearing loss.

“Improvement in performance on standardized clinical tests of balance and walking has been remarkable,” says Margaret Chow, study lead author and biomedical engineering doctoral candidate at The Johns Hopkins University. “Even more gratifying is that our patients have been able to return to activities that enrich their daily lives, such as exercising, riding a bike, gardening or dancing at a daughter’s wedding.”

Overall, the improvement in quality of life and relief from the misery of BVH has been life altering, says A’ndrea Messer, Ph.D., one of the patients chronicled in the Johns Hopkins Medicine study and a senior science and research information officer at Penn State University.

“The multichannel vestibular implant is incredible,” says Messer. “Before receiving it, I couldn’t walk in the dark, on uneven ground or without a cane. Now, I can do all of those things and am living a fairly normal life.”

Components and Mechanism of a Vestibular Implant.

Investigation of Risperidone Treatment Associated With Enhanced Brain Activity in Patients Who Stutter

by Gerald A. Maguire, Bo Ram Yoo, Shahriar SheikhBahaei in Frontiers in Neuroscience

Astrocytes — star-shaped cells in the brain that are actively involved in brain function — may play an important role in stuttering, a study led by a University of California, Riverside, expert on stuttering has found.

“Our study suggests that treatment with the medication risperidone leads to increased activity of the striatum in persons who stutter,” said Dr. Gerald A. Maguire, professor and chair of the Department of Psychiatry and Neuroscience at the UCR School of Medicine, who led the study. “The mechanism of risperidone’s action in stuttering, in part, appears to involve increased metabolism — or activity — of astrocytes in the striatum.”

Findings from the study were borne from a collaboration between Maguire and Shahriar SheikhBahaei, an independent research scholar at the National Institutes of Health’s National Institute of Neurological Disorders and Stroke.

The striatum is a key component of the basal ganglia, a group of nuclei best known for facilitating voluntary movement. Present in the forebrain, the striatum contains neuronal activity related to cognition, reward, and coordinated movements.

Stuttering, a childhood onset fluency disorder that leads to speech impairment, is associated with high levels of the neurotransmitter dopamine. Risperidone works by blocking the receptors in the brain that dopamine acts on, thus preventing excessive dopamine activity. Risperidone is available by prescription almost anywhere in the world. In clinical use for nearly 30 years, it is generally prescribed for schizophrenia and bipolar disorder.

Maguire and SheikhBahaei have now found evidence that astrocytes in the striatum may be crucially involved in how risperidone is able to reduce stuttering.

“We do not know the exact mechanism for how risperidone activates astrocytes in the striatum,” said coauthor SheikhBahaei, an expert on astrocytes, and a person who stutters. “What we know is that it activates astrocytes. The astrocytes then release a signaling molecule that affects neurons in the striatum by blocking their dopamine receptors. In our future work, we would like to find this signaling molecule and better understand the exact role astrocytes play in stuttering, which, in turn, could help us design drugs that target astrocytes.”

Maguire and his team conducted a randomized, double-blinded, placebo-controlled clinical trial with 10 adult subjects to observe risperidone’s effects on brain metabolism. At the start of the study and after six weeks of taking risperidone (0.5–2.0 mg/day) or a placebo pill, the 10 participants performed a solo reading-aloud task and then each underwent a positron emission tomography, or PET, scan. After unblinding, five subjects were found to have received risperidone and five the placebo. Those in the risperidone treatment group showed higher glucose uptake — that is, higher metabolism — in specific regions of the brain on scans taken after active treatment.

“Naturally, and abnormally, glucose uptake is low in stuttering — a feature common to many neurodevelopmental conditions,” said Maguire, who also is a person who stutters. “But risperidone seems to compensate for the deficit by increasing the metabolism, specifically, in the left striatum. More research is needed to understand this better. Neuroimaging techniques we used to visualize changes in the brains of those who stutter can provide valuable insights into the pathophysiology of the disorder and guide the development of future interventions.”

Next, Maguire and SheikhBahaei aim to further understand what causes stuttering, what the different types of stuttering are and what their etiologies may be, and to develop targeted, personalized treatments for those who stutter.

“The general goal of our research collaboration is to combine basic research in my lab with Dr. Maguire’s clinical studies,” SheikhBahaei said. “My lab is generating new animal models to study stuttering which will help us understand what causes different types of stuttering. Researchers have proposed other components are involved in stuttering’s etiology. Our data, which suggests astrocytes in the striatum may be playing an important role in the development of stuttering, helps unify some of the findings the scientific literature has seen recently on astrocytes and could help connect the dots.”

Personally speaking

The UCR School of Medicine has signed a research collaborative agreement with the National Institute of Neurological Disorders and Stroke to work together on research related to stuttering.

“I have been active in the stuttering community for decades,” Maguire said. “This is a community that needs support, opportunities, and role models. Dr. SheikhBahaei and I encourage people who stutter to be more engaged in the scientific community. We both stutter and that has not stopped us from achieving our professional and personal goals. Young people who stutter and are thinking about careers in science and medicine should not let this speech disorder hold them back.”

For SheikhBahaei, working with Maguire is an ideal collaboration to “bring bench to the bedside.”

“We are working to reveal circuits in the brain that control the complex behavior of speaking,” he said. “These circuits will shed more light on the mechanism involved in stuttering. Speaking may be the most complex human behavior. Consider that more than 100 muscles in the body must act in synchrony for us to speak.”

Within-group analysis of FDG PET scans from five adult subjects who stutter, scanned before treatment (off risperidone) and after 6 weeks of treatment (on risperidone), taken after a 30-min solo reading-aloud task. Subjects on risperidone received 0.5–2.0 mg based on tolerability. FDG acquisition during the reading-aloud task lasted 30 min, and the scan acquisition time was 90 min. Images were anatomically normalized using the coordinate system of the Talairach atlas (Friston et al., 1991b). Areas of increased (red) or decreased (blue) metabolism were identified using an overlay of Brodmann-defined regions from MRIs (Wu et al., 1995). All regions of the brain were examined, with threshold differences of p < 0.05 identified for the caudate, putamen, and Broca’s area (the mg/100 g/min scale represents the uptake rate of the isotope).

Geometric models reveal behavioural and neural signatures of transforming experiences into memories

by Andrew C. Heusser, Paxton C. Fitzpatrick, Jeremy R. Manning in Nature Human Behaviour

Your brain is constantly evaluating which aspects of your experiences to remember for later, ignore, or forget. Dartmouth researchers have developed a new approach for studying these aspects of memory, by creating a computer program that turns sequences of events from a video into unique geometric shapes. These shapes can then be compared to the shapes of how people recounted the events. The study provides new insight into how experiences are committed to memory and recounted to others. The results were based on how people remembered the experience of watching an episode of Sherlock, a BBC television show.

“When we represent experiences and memories as shapes, we can use the tools provided by the field of geometry to explore how we remember our experiences, and to test theories of how we think, learn, remember, and communicate,” explained senior author Jeremy R. Manning, an assistant professor of psychological and brain sciences, and director of the Contextual Dynamics Lab at Dartmouth. “When you experience something, its shape is like a fingerprint that reflects its unique meaning, and how you remember or conceptualize that experience can be turned into another shape. We can think of our memories like distorted versions of our original experiences. Through our research, we wanted to find out when and where those distortions happen (i.e. what do people get right and what do people get wrong), and examine how accurate our memories of experiences are,” he added.

The Dartmouth research team examined a public dataset containing brain recordings from 17 people who had watched the Sherlock episode and then described what had happened in their own words. The dataset also contained detailed scene-by-scene annotations of the episode. The team ran those annotations through their computer program to identify 32 unique topics or themes present in each moment of the episode. Through computer modeling, the researchers then created a “topic model” of the episode, comprising 32 dimensions, one per thematic topic. Different moments of the episode that reflected similar themes were assigned to nearby locations in the 32-dimensional space. When these results are visualized in 2D, a connect-the-dots-like representation of successive events emerges. The shape of that representation reflects how the thematic content of the episode changes over time, and how different moments are related. The researchers used an analogous process to obtain the shapes of how each of the 17 participants recounted the events of the episode.
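The pipeline of annotations → topic model → low-dimensional trajectory can be sketched as follows. This is a toy version under stated assumptions, not the authors' code: the annotations are invented stand-ins, and the toy data uses 2 topics rather than the study's 32.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA

# Hypothetical per-scene annotations standing in for the Sherlock dataset.
annotations = [
    "john meets sherlock at the lab",
    "sherlock examines the crime scene",
    "a cab driver offers a deadly game",
    "sherlock deduces the killer's method",
]

# Fit a topic model over the scene annotations (the study used 32 topics;
# 2 here to suit the tiny toy corpus).
counts = CountVectorizer().fit_transform(annotations)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_traj = lda.fit_transform(counts)  # one topic-mixture vector per scene

# Project the topic trajectory to 2D; connecting successive points in
# order gives the connect-the-dots "shape" of the episode.
shape_2d = PCA(n_components=2).fit_transform(topic_traj)
print(shape_2d.shape)  # (4, 2)
```

A participant's verbal recounting, segmented and pushed through the same vectorizer and topic model, would yield a second trajectory whose shape can then be compared with the episode's.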

When the geometric shapes representing the Sherlock episode were compared to the shapes representing a participant’s recounting of it, the researchers were able to identify which aspects of the episode people tended to remember accurately, forget, or distort. The coarse spatial structure of the episode’s shape reflects the major plot points and acts like a building’s scaffolding. The shape of every participant’s recounting reproduced this coarse-scale scaffolding, indicating that every participant accurately remembered the major plot points. The episode’s shape also comprises finer-scale structure, analogous to architectural embellishments and decorations, that reflects specific low-level conceptual details. Some participants accurately recounted many of those low-level details, whereas others recounted only the high-level plot points.
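
One simple way to quantify how well a recounting preserves an experience's shape, offered here as an illustrative sketch rather than the paper's actual metric, is to correlate the internal distance structure of the two trajectories: a high correlation means the coarse geometry (the "scaffolding") survives even when individual points are distorted.

```python
# Illustrative shape comparison (an assumption, not the study's exact
# analysis): correlate the pairwise-distance structure of an original
# trajectory and a noisy "recalled" copy of it.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
episode = np.cumsum(rng.normal(size=(50, 3)), axis=0)        # original trajectory
recall = episode + rng.normal(scale=0.5, size=episode.shape)  # distorted memory

# Condensed pairwise-distance vectors capture each shape's internal
# geometry independent of its absolute position or orientation.
r, _ = pearsonr(pdist(episode), pdist(recall))
print(f"shape similarity r = {r:.2f}")  # high r: coarse structure preserved
```

Because the comparison is between distance structures rather than raw coordinates, it tolerates the kinds of global shifts a retelling introduces while still exposing fine-scale distortions as a drop in correlation.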

“One of our most intriguing findings was that, as people were watching the episode, we could use their brain activity patterns to predict the distorted shapes that their memories would take on when they recounted it later,” explained Manning. “This suggests that some of the details about our ongoing experiences get distorted in our brains from the moment they are stored as new memories. Even when two people experience the same physical event, their subjective experiences of that event start to diverge from the moment their brains start to make sense of what happened and distill that event into memories.”

The research team plans to apply their approach to other domains, including health and education. Their methods of modeling the shapes of memories could provide a more nuanced way of assessing whether a patient will understand or remember what their doctor is telling them, or whether a student understands specific concepts in a course lecture.

Histone lysine methyltransferase Pr‐set7/SETD8 promotes neural stem cell reactivation

by Jiawen Huang, Mahekta R Gujar, Qiannan Deng, Sook Y Chia, Song Li, Patrick Tan, Wing‐Kin Sung, Hongyan Wang in EMBO reports

Researchers studying an enzyme in fruit fly larvae have found that it plays an important role in waking up brain stem cells from their dormant ‘quiescent’ state, enabling them to proliferate and generate new neurons. Published in the journal EMBO Reports, the study by Duke-NUS Medical School, Singapore, could help clarify how some neurodevelopmental disorders such as autism and microcephaly occur.

Quiescent neural stem cells in the fruit fly larval brain

Pr-set7 is an enzyme involved in maintaining genome stability, DNA repair and cell cycle regulation, as well as turning various genes on or off. This protein, which goes by a few different names, has remained largely unchanged as species have evolved. Professor Wang Hongyan, a professor and deputy director at Duke-NUS’ Neuroscience and Behavioural Disorders Programme, and her colleagues set out to understand the protein’s function during brain development.

“Genetic variants of the human version of Pr-set7 are associated with neurodevelopmental disorders, with typical symptoms including intellectual disability, seizures and developmental delay,” explained Professor Wang. “Our study is the first to show that Pr-set7 promotes neural stem cell reactivation and, therefore, plays an important role in brain development.”

Neural stem cells normally oscillate between states of quiescence and proliferation, and maintaining an equilibrium between the two is very important. Most neural stem cells in adult mammalian brains are quiescent; they are reactivated to generate new neurons in response to stimuli such as injury, the presence of nutrients or exercise. However, neural stem cells gradually lose their capacity to proliferate with age and in response to stress and anxiety.

Professor Wang and her colleagues studied what happened when the gene coding for Pr-set7 was turned off in larval fruit fly brains. They found that this caused a delay in the reactivation of neural stem cells from their quiescent state. To reactivate neural stem cells, Pr-set7 needs to turn on at least two genes: cyclin-dependent kinase 1 (cdk1) and earthbound 1 (Ebd1). The scientists found that overexpressing the proteins coded by these genes led to the reactivation of neural stem cells even when the Pr-set7 gene was turned off. These findings show that Pr-set7 binds to the cdk1 and Ebd1 genes to activate a signalling pathway that reactivates neural stem cells from their quiescent state.

“Since Pr-set7 is conserved across species, our findings could contribute to the understanding of the roles of its mammalian counterpart in neural stem cell proliferation and its associated neurodevelopmental disorders,” said Prof Wang.

Professor Patrick Casey, Senior Vice-Dean for Research at Duke-NUS, commented: “With this latest study, Professor Wang’s fundamental research in neuroscience has yielded valuable insights into several neurodevelopmental disorders; insights that have the potential to improve the way we care for people with such disorders.”

The scientists are now extending this work to understand the roles of the mammalian and human forms of Pr-set7, called SETD8 and KMT5A respectively, in brain development.

Histone monomethyl transferase Pr‐set7 is required for Drosophila neural stem cells to exit from quiescence. Pr‐set7 promotes stem cell reactivation by upregulating the cell cycle regulator Cdk1 and the Wnt pathway transcriptional co‐activator Ebd1.

  • Histone monomethyl transferase Pr‐set7 promotes cell cycle re‐entry of Drosophila neural stem cells from quiescence.
  • Pr‐set7 binds to the promoter region of Cdk1 and the Wnt pathway transcriptional co‐activator Ebd1 in neural stem cells.
  • Pr‐set7 functions upstream of cdk1 and ebd1 to promote neural stem cell reactivation.

Early peripheral activity alters nascent subplate circuits in the auditory cortex

by Xiangying Meng, Didhiti Mukherjee, Joseph P. Y. Kao, Patrick O. Kanold in Science Advances

Scientists have yet to answer the age-old question of whether or how sound shapes the minds of fetuses in the womb, and expectant mothers often wonder about the benefits of such activities as playing music during pregnancy. Now, in experiments in newborn mice, scientists at Johns Hopkins report that sounds appear to change “wiring” patterns in areas of the brain that process sound earlier than scientists assumed and even before the ear canal opens.

The current experiments involve newborn mice, which have ear canals that open 11 days after birth. In human fetuses, the ear canal opens prenatally, at about 20 weeks gestation.

The findings may eventually help scientists identify ways to detect and intervene in abnormal wiring in the brain that may cause hearing or other sensory problems.

“As scientists, we are looking for answers to basic questions about how we become who we are,” says Patrick Kanold, Ph.D., professor of biomedical engineering at The Johns Hopkins University and School of Medicine. “Specifically, I am looking at how our sensory environment shapes us and how early in fetal development this starts happening.”

Kanold started his career in electrical engineering, working with microprocessors, a natural conduit for his shift to science and studying the circuitry of the brain.

His research focus is the outermost part of the brain, the cortex, which is responsible for many functions, including sensory perception. Below the cortex is the brain’s white matter, which in adults contains connections between neurons.

In development, the white matter also contains so-called subplate neurons, some of the first to develop in the brain — at about 12 weeks gestation for humans and the second embryonic week in mice. Anatomist Mark Molliver of Johns Hopkins is credited with describing some of the first connections between neurons formed in white matter, and he coined the term subplate neurons in 1973.

These primordial subplate neurons eventually die off during development in mammals, including mice. In humans, this happens shortly before birth through the first few months of life. But before they die off, they make connections between a key gateway in the brain for all sensory information, the thalamus, and the middle layers of the cortex.

“The thalamus is the intermediary of information from the eyes, ears and skin into the cortex,” says Kanold. “When things go wrong in the thalamus or its connections with the cortex, neurodevelopmental problems occur.” In adults, the neurons in the thalamus stretch out and project long, armlike structures called axons to the middle layers of the cortex, but in fetal development, subplate neurons sit between the thalamus and cortex, acting as a bridge. At the tips of the axons are synapses, the junctions through which neurons communicate. Working in ferrets and mice, Kanold previously mapped the circuitry of subplate neurons. He also previously found that subplate neurons receive sound-related electrical signals before any other cortical neurons do.

The current research, which Kanold began at his previous position at the University of Maryland, addresses two questions, he says: When sound signals get to the subplate neurons, does anything happen, and can a change in sound signals change the brain circuits at these young ages?

First, the scientists used genetically engineered mice that lack a protein on hair cells in the inner ear. The protein is integral for transforming sound into an electric pulse that goes to the brain; from there it is translated into our perception of sound. Without the protein, the brain does not get the signal.

In the deaf, 1-week-old mice, the researchers saw about 25%–30% more connections among subplate neurons and other cortex neurons, compared with 1-week-old mice with normal hearing and raised in a normal environment. This suggests that sounds can change brain circuits at a very young age, says Kanold.

In addition, say the researchers, these changes in neural connections were happening about a week earlier than typically seen. Scientists had previously assumed that sensory experience can only alter cortical circuits after neurons in the thalamus reach out to and activate the middle layers of the cortex, which in mice is around the time when their ear canals open (at around 11 days).

“When neurons are deprived of input, such as sound, the neurons reach out to find other neurons, possibly to compensate for the lack of sound,” says Kanold. “This is happening a week earlier than we thought it would, and tells us that the lack of sound likely reorganizes connections in the immature cortex.”

In the same way that lack of sound influences brain connections, the scientists thought it was possible that extra sounds could influence early neuron connections in normal hearing mice, as well.

To test this, the scientists put normal hearing, 2-day-old mouse pups either in a quiet enclosure with a speaker that emitted beeps or in a quiet enclosure without a speaker. The scientists found that the mouse pups in the quiet enclosure without the beeping sound had stronger connections between subplate and cortical neurons than those in the enclosure with the beeping sound. However, the difference between the mice housed in the beeping and quiet enclosures was not as large as that between the deaf mice and ones raised in a normal sound environment.

The mice raised with the beeping sound also had more diversity among the types of neural circuits that developed between the subplate and cortical neurons, compared with normal hearing mouse pups raised in a quiet enclosure with no sound. The normal hearing mice raised in the quiet enclosure also had neuron connectivity in the subplate and cortex regions similar to that of the genetically engineered deaf mice.

“In these mice we see that the difference in early sound experience leaves a trace in the brain, and this exposure to sound may be important for neurodevelopment,” says Kanold.

The research team is planning additional studies to determine how early exposure to sound impacts the brain later in development. Ultimately, they hope to understand how sound exposure in the womb may be important in human development and how to account for these circuit changes when fitting cochlear implants in children born deaf. They also plan to study brain signatures of premature infants and develop biomarkers for problems involving miswiring of subplate neurons.

Mouse ACtx shows spontaneous and sound-evoked activity before ear opening.

(A) Timeline of cortical development in various species. Mouse ears open at ~P10. (B) Experimental setup and fluorescence image of activity. (C) Left: Filled areas indicate active ROIs that showed responses to sound. Colors indicate mean ΔF/F of active ROIs in a 2-s window after tone onset. Right: Exemplar fluorescence time courses of two single active ROIs during 8-kHz tone presentation. (D) Left: Bar graphs showing numbers of active ROIs at both ages (means ± SD; P8 to P9: 13.67 ± 7.5, P13 to P15: 24.2 ± 5.4; P > 0.05). Right: CDFs (cumulative distribution functions) showing that mean ΔF/F of active ROIs in a 2-s window after tone onset is higher at P8 to P9 (dashed, median: 10.2%) than that at P13 to P15 (solid, median: 8.5%; P < 0.001). (E) Spontaneous activity shows low (L)– and high (H)–synchronization events. (F) Inter-event interval and peak amplitudes of L- and H-events of spontaneous and sound-evoked activity over all the repeats and sound stimuli for all ROIs at P8 to P9 (dashed) and P13 to P15 (solid). Inter-event interval of H-events is longer and that of L-events is shorter at P13 to P15 than those at P8 to P9, respectively (H-event medians, P8 to P9: 18.7 s, P13 to P15: 44.5 s; L-event medians, P8 to P9: 20.6 s, P13 to P15: 8.9 s; P < 0.001 for both). The peak amplitudes of both H and L spontaneous events are higher at P8 to P9 (H-event medians, P8 to P9: 92.4%, P13 to P15: 56%; L-event medians, P8 to P9: 32.7%, P13 to P15: 31.2%; P < 0.001 for both), and those after sound onset are also higher at P8 to P9 (H-event medians, P8 to P9: 72.8%, P13 to P15: 57%; L-event medians, P8 to P9: 27.5%, P13 to P15: 21.4%; P < 0.001 for both).

Beneficial effects of choir singing on cognition and well-being of older adults: Evidence from a cross-sectional study

by Emmi Pentikäinen, Anni Pitkäniemi, Sini-Tuuli Siponkoski, Maarit Jansson, Jukka Louhivuori, Julene K. Johnson, Teemu Paajanen, Teppo Särkämö in PLOS ONE

Alongside the effects of lifestyle, including physical exercise and diet, on ageing, research has increasingly turned its attention to the potential cognitive benefits of musical hobbies. However, such research has mainly concentrated on hobbies involving musical instruments.

The cognitive benefits of playing an instrument are already fairly well known: such activity can improve cognitive flexibility, or the ability to regulate and switch focus between different thought processes. However, the cognitive benefits of choir singing have so far been investigated very little.

Now, a study provides evidence according to which choir singing may engender benefits similar to playing an instrument.

The results show that elderly choir singers had better verbal flexibility, an indicator of cognitive flexibility, than those in the control group, who did not have choir singing as a hobby.

“This supports findings previously gained on the effects of playing an instrument on the cognitive functioning of elderly people and gives some indications that choir singing too may potentially have similar beneficial effects. These findings increase our understanding of how different activities can shape cognition later in life, too,” says doctoral student Emmi Pentikäinen.

Those with a longer history of singing in a choir experience a greater feeling of togetherness

The study also looked into the potential benefits of choir singing for the emotional and social wellbeing of the elderly. Questionnaires used in the study demonstrated that those who had sung in a choir for a longer period, more than 10 years, felt greater social togetherness than those with less or no experience of choir singing.

Furthermore, study subjects who had started choir singing less than 10 years ago were happier with their overall health than those with longer singing experience and those who did not sing in a choir.

“It’s possible that the people who have joined a choir later in life have thus found the motivation to maintain their health by adhering to an active and healthy lifestyle. Then again, the relationships and social networks provided by being in a choir among those who have done it for longer may have become established as an integral part of their lives, therefore appearing as a greater feeling of social togetherness,” Pentikäinen assesses.

Choir singing requires versatile information processing

Ageing brings with it changes to the cognitive functioning as well as the physical and social environment of individuals, all of which have an impact on their wellbeing.

As the population ages, it is becoming increasingly important to identify ways of improving the wellbeing and quality of life of older adults.

According to Pentikäinen, choir singing provides a good opportunity to support the wellbeing of the elderly, as it requires flexible executive function and the regulation of attention.

“Choir singing is easy to do in practice, with little cost. It’s an activity that requires versatile information processing, as it combines the processing of diverse sensory stimuli, motor function related to voice production and control, linguistic output, learning and memorising melodies and lyrics, as well as emotions roused by the pieces sung,” she notes.

The coronavirus pandemic too has demonstrated the significance of music and singing to people’s lives.

“People have been singing together on balconies and from open windows to lift their mood.”

Real-time dialogue between experimenters and dreamers during REM sleep

by Konkoly et al. in Current Biology

Dreams take us to what feels like a different reality. They also happen while we’re fast asleep. So, you might not expect that a person in the midst of a vivid dream would be able to perceive questions and provide answers to them. But a new study shows that, in fact, they can.

“We found that individuals in REM sleep can interact with an experimenter and engage in real-time communication,” said senior author Ken Paller of Northwestern University. “We also showed that dreamers are capable of comprehending questions, engaging in working-memory operations, and producing answers.

“Most people might predict that this would not be possible — that people would either wake up when asked a question or fail to answer, and certainly not comprehend a question without misconstruing it.”

While dreams are a common experience, scientists still haven’t adequately explained them. Relying on a person’s recounting of dreams is also fraught with distortions and forgotten details. So, Paller and colleagues decided to attempt communication with people during lucid dreams.

“Our experimental goal is akin to finding a way to talk with an astronaut who is on another world, but in this case the world is entirely fabricated on the basis of memories stored in the brain,” the researchers write. They realized finding a means to communicate could open the door in future investigations to learn more about dreams, memory, and how memory storage depends on sleep, the researchers say.

The researchers studied 36 people who aimed to have a lucid dream, in which a person is aware they’re dreaming. The paper is unusual in that it includes four independently conducted experiments using different approaches to achieve a similar goal. In addition to the group at Northwestern University in the U.S., one group conducted studies at Sorbonne University in France, one at Osnabruck University in Germany, and one at Radboud University Medical Center in the Netherlands.

“We put the results together because we felt that the combination of results from four different labs using different approaches most convincingly attests to the reality of this phenomenon of two-way communication,” said Karen Konkoly, a PhD student at Northwestern University and first author of the paper. “In this way, we see that different means can be used to communicate.”

One of the individuals who readily succeeded with two-way communication had narcolepsy and frequent lucid dreams. Among the others, some had lots of experience in lucid dreaming and others did not. Overall, the researchers found that it was possible for people while dreaming to follow instructions, do simple math, answer yes-or-no questions, or tell the difference between different sensory stimuli. They could respond using eye movements or by contracting facial muscles. The researchers refer to it as “interactive dreaming.”
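
The response channel can be illustrated with a toy decoder. The encoding below, one distinct eye sweep per unit of the answer, detected as threshold crossings in an electrooculogram-like trace, is a hypothetical simplification; the four labs each used their own signaling conventions and EOG analysis criteria, which are not reproduced here.

```python
# Toy sketch of decoding a dreamer's answer from eye movements
# (hypothetical encoding, not the labs' actual pipeline): the dreamer
# answers a math question with that many left-right eye sweeps, which
# appear as large deflections in a simulated EOG trace.
def count_eye_sweeps(eog, threshold=100.0):
    """Count sweeps as upward threshold crossings in a toy EOG trace."""
    sweeps = 0
    above = False
    for sample in eog:
        if sample > threshold and not above:
            sweeps += 1       # rising edge: a new sweep begins
            above = True
        elif sample < threshold:
            above = False     # trace returned to baseline
    return sweeps

# Simulated trace with four distinct deflections: the answer to "2 + 2".
trace = [0, 150, 0, 160, 0, 155, 0, 170, 0]
print(count_eye_sweeps(trace))  # 4
```

Tracking the rising edge rather than every suprathreshold sample is what keeps a single sustained sweep from being counted more than once.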

Konkoly says that future studies of dreaming could use these same methods to assess cognitive abilities during dreams versus wake. They also could help verify the accuracy of post-awakening dream reports. Outside of the laboratory, the methods could be used to help people in various ways, such as solving problems during sleep or offering nightmare sufferers novel ways to cope. Follow-up experiments run by members of the four research teams aim to learn more about connections between sleep and memory processing, and about how dreams may shed light on this memory processing.

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Nature Neuroscience

Science Daily

Technology Networks

Frontiers

Cell
