NT/ Tiny implantable tool for light-sheet imaging of brain activity

Published in Paradigm · May 1, 2021

Neuroscience biweekly vol. 31, 16th April — 1st May

TL;DR

Neuroscience market

The global neuroscience market was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.

Latest news and research

Implantable photonic neural probes for light-sheet fluorescence brain imaging

by Wesley D. Sacher, Fu-Der Chen, Homeira Moradi-Chameh, Prajay Shah, Ilan Felts Almog, Youngho Jun, Ting Hu, Junho Jeong, Andres M. Lozano, Taufik A. Valiante, Laurent C. Moreaux, Joyce K. Poon, Michael L. Roukes, Xianshu Luo, Anton Fomenko, Thomas Lordello, Xinyu Liu, John N. Straguzzi, Trevor M. Fowler, Patrick Lo in Neurophotonics

Tools that allow neuroscientists to record and quantify functional activity within the living brain are in great demand. Traditionally, researchers have used techniques such as functional magnetic resonance imaging, but this method cannot record neural activity with high spatial resolution or in moving subjects. In recent years, a technology called optogenetics has shown considerable success in recording neural activity from animals in real time with single neuron resolution. Optogenetic tools use light to control neurons and record signals in tissues that are genetically modified to express light-sensitive and fluorescent proteins. However, existing technologies for imaging light signals from the brain have drawbacks in their size, imaging speed, or contrast that limit their applications in experimental neuroscience.

A technology called light-sheet fluorescence imaging shows promise for imaging brain activity in 3D with high speed and contrast (overcoming multiple limitations of other imaging technologies). In this technique, a thin sheet of laser light (light-sheet) is directed through a brain tissue region of interest, and fluorescent activity reporters within the brain tissues respond by emitting fluorescence signals that microscopes can detect. Scanning a light sheet in the tissue enables high-speed, high-contrast, volumetric imaging of the brain activity.

Currently, using light-sheet fluorescence brain imaging with nontransparent organisms (like a mouse) is difficult because of the size of the necessary apparatus. To make experiments with nontransparent animals and, in the future, freely moving animals feasible, researchers will first need to miniaturize many of the components.

A key component for the miniaturization is the light-sheet generator itself, which needs to be inserted into the brain and thus must be as small as possible to avoid displacing too much brain tissue. In a new study reported in Neurophotonics, an international team of researchers from the California Institute of Technology (USA), University of Toronto (Canada), University Health Network (Canada), the Max Planck Institute of Microstructure Physics (Germany), and Advanced Micro Foundry (Singapore) developed a miniature light-sheet generator, or a photonic neural probe, that can be implanted into a living animal’s brain.

The researchers used nanophotonic technology to create ultrathin silicon-based photonic neural probes that emit multiple addressable thin sheets of light with thicknesses <16 micrometers over propagation distances of 300 micrometers in free space. When tested in brain tissues from mice that were genetically engineered to express fluorescent proteins in their brains, the probes permitted the researchers to image areas as large as 240 μm × 490 μm. Moreover, the level of image contrast was superior to that of an alternative imaging method called epifluorescence microscopy.

Describing the significance of his team’s work, the study’s lead author, Wesley Sacher, says, “This new implantable photonic neural probe technology for generating light sheets within the brain circumvents many of the constraints that have limited the use of light-sheet fluorescence imaging in experimental neuroscience. We predict that this technology will lead to new variants of light-sheet microscopy for deep brain imaging and behavior experiments with freely moving animals.”

Such variants would be a boon to neuroscientists seeking to understand the workings of the brain.

Optical addressing method and proposal for deep-brain photonic-probe-enabled LSFM. (a) Schematic of the optical addressing method (not to scale). The scanning system addresses on-chip edge couplers via spatial addressing of the cores of an image fiber bundle. Bottom inset: micrographs of the distal facet of a fiber bundle connected to the scanning system with different cores addressed. Top inset: annotated photograph of a packaged light-sheet neural probe inserted into an agarose block. (b) Illustration of the proposed use of the light-sheet neural probe with a GRIN lens endoscope for deep brain LSFM (not to scale). In this first investigation of the probe functionality, the configuration in (b) has not been demonstrated, and instead, the results here focus on a simpler imaging configuration where light-sheet probe illuminated samples are directly imaged with a fluorescence microscope without a GRIN lens

Noninvasive neuromagnetic single-trial analysis of human neocortical population spikes

by Gunnar Waterstraat, Rainer Körber, Jan-Hendrik Storm, Gabriel Curio in Proceedings of the National Academy of Sciences

The brain processes information using both slow and fast currents. Until now, researchers had to use electrodes placed inside the brain in order to measure the latter. For the first time, researchers from Charité — Universitätsmedizin Berlin and the Physikalisch-Technische Bundesanstalt (PTB), successfully visualized these fast brain signals from the outside — and found a surprising degree of variability. According to their article, the researchers used a particularly sensitive magnetoencephalography device to accomplish this feat.

The processing of information inside the brain is one of the body’s most complex processes. Disruption of this processing often leads to severe neurological disorders. The study of signal transmission inside the brain is therefore key to understanding a myriad of diseases. From a methodological point of view, however, it creates major challenges for researchers. The desire to observe the brain’s nerve cells operating ‘at the speed of thought’, but without the need to place electrodes inside the brain, has led to the emergence of two techniques featuring high temporal resolution: electroencephalography (EEG) and magnetoencephalography (MEG). Both methods enable the visualization of brain activity from outside the skull. However, while results for slow currents are reliable, those for fast currents are not.

Slow currents — known as postsynaptic potentials — occur when signals created by one nerve cell are received by another. The subsequent firing of impulses (which transmit information to downstream neurons or muscles) produces fast currents which last for just a millisecond. These are known as action potentials. “Until now, we have only been able to observe nerve cells as they receive information, not as they transmit information in response to a single sensory stimulus,” explains Dr. Gunnar Waterstraat of Charité’s Department of Neurology with Experimental Neurology on Campus Benjamin Franklin. “One could say that we were effectively blind in one eye.” Working under the leadership of Dr. Waterstraat and Dr. Rainer Körber from the PTB, a team of researchers has now laid the foundations which are needed to change this. The interdisciplinary research group succeeded in rendering the MEG technology so sensitive as to enable it to detect even fast brain oscillations produced in response to a single sensory stimulus.

They did this by significantly reducing the system noise produced by the MEG device itself. “The magnetic field sensors inside the MEG device are submerged in liquid helium to cool them to −269°C (4.2 K),” explains Dr. Körber. He adds: “To do this, the cooling system requires complex thermal insulation. This superinsulation consists of aluminum-coated foils which produce magnetic noise and will therefore mask small magnetic fields such as those associated with nerve cells. We have now changed the design of the superinsulation in such a way as to ensure this noise is no longer measurable. By doing this, we managed to increase the MEG technology’s sensitivity by a factor of ten.”

The researchers used the example of stimulating a nerve in the arm to demonstrate that the new device is indeed capable of recording fast brain waves. As part of their study on four healthy subjects, the researchers applied electrical stimulation to a specific nerve at the wrist whilst at the same time positioning the MEG sensor immediately above the area of the brain which is responsible for processing sensory stimuli applied to the hand. To eliminate outside sources of interference such as electric networks and electronic components, the measurements were conducted in one of the PTB’s shielded recording rooms. The researchers found that, by doing so, they were able to measure the action potentials produced by a small group of simultaneously activated neurons in the brain’s cortex in response to individual stimuli. “For the first time, a noninvasive approach enabled us to observe nerve cells in the brain sending information in response to a single sensory stimulus,” says Dr. Waterstraat. He continues: “One interesting observation was the fact that these fast brain oscillations are not uniform in nature but change with each stimulus. These changes also occurred independently of the slow brain signals. There is enormous variability in how the brain processes information about the touch of a hand, despite all of the stimuli applied being identical.”

The fact that the researchers are now able to compare individual responses to stimuli opens the way for neurology researchers to investigate questions which previously remained unanswered: To what extent do factors such as alertness and tiredness influence the processing of information in the brain? What about additional stimuli which are received at the same time? The highly sensitive MEG system could also help scientists to develop a deeper understanding of, and better treatments for, neurological disorders. Epilepsy and Parkinson’s disease are examples of disorders which are linked to disruptions in fast brain signaling. “Thanks to this optimized MEG technology, our neuroscience toolbox has gained a crucial new tool which enables us to address all of these questions noninvasively,” says Dr. Waterstraat.

Averaged somatosensory evoked responses (A and B), average phase-locked and phase-insensitive time–frequency (tf) representations of MEG responses (C and D), and analysis of excess variance in single-trial responses (E); exemplary data of subject S1. Wideband data (A) show the well-known rise to the first cortically evoked (low-frequency) postsynaptic component peaking at around 20 ms (N20m). Both the ascending and descending slopes of the N20m display humps and notches owing to superposition by low-amplitude high-frequency responses. This high-frequency somatosensory evoked response (hfSER) can be isolated as a wavelet burst by phase-preserving bandpass filtering (B); dashed ancillary lines link original wideband humps with synchronous bandpassed wavelet peaks. The tf-resolved phase-locked MEG response (C) was calculated as the tf transformation of the wideband data after averaging over trials. Thus, response components with variable phases between trials have been diminished by the averaging process. The phase-insensitive MEG response (D) was calculated as the average of amplitudes of all tf-transformed single-trial responses. Amplitude variance (E) was obtained as the variance of single-trial response amplitudes after tf transformation. For visualization, tf data were normalized independently in each frequency bin as the signal-plus-noise-to-noise ratio (SNNR) (color-coded) by dividing the value in each tf tile by the mean prestimulus value at the respective frequency bin. Significant tf tiles are white-rimmed; P values were FWER-corrected.
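
For readers who want the mechanics, here is a minimal sketch of the tf-analysis logic described in the caption, in Python with NumPy/SciPy. The data, sampling rate, window parameters, and prestimulus interval are all invented placeholders; the published pipeline differs in its filters and statistics.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
trials = rng.standard_normal((200, 1024))    # placeholder single-trial MEG epochs

# Phase-locked response (panel C): average over trials first, then tf-transform.
# Components whose phase varies from trial to trial cancel in the average.
f, t, S_avg = stft(trials.mean(axis=0), fs=fs, nperseg=128, noverlap=120)
phase_locked = np.abs(S_avg)

# Phase-insensitive response (panel D): tf-transform each trial, then average
# the amplitudes, so phase-variable components survive.
_, _, S_trials = stft(trials, fs=fs, nperseg=128, noverlap=120, axis=-1)
amps = np.abs(S_trials)                      # (n_trials, n_freqs, n_times)
phase_insensitive = amps.mean(axis=0)
amp_variance = amps.var(axis=0)              # single-trial amplitude variance (panel E)

# SNNR normalization: divide each tf tile by the mean prestimulus value of its
# frequency bin (here the first 20 time bins stand in for the prestimulus period).
baseline = phase_insensitive[:, :20].mean(axis=1, keepdims=True)
snnr = phase_insensitive / baseline
```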

Latent Dynamical Variables Produce Signatures of Spatiotemporal Criticality in Large Biological Systems

by Mia C. Morrell, Audrey J. Sederberg, Ilya Nemenman in Physical Review Letters

The dynamics of the neural activity of a mouse brain behave in a peculiar, unexpected way that can be theoretically modeled without any fine tuning, suggests a new paper by physicists at Emory University. Physical Review Letters published the research, which adds to the evidence that theoretical physics frameworks may aid in the understanding of large-scale brain activity.

“Our theoretical model agrees with previous experimental work on the brains of mice to a few percent accuracy — a degree which is highly unusual for living systems,” says Ilya Nemenman, Emory professor of physics and biology and senior author of the paper.

The first author is Mia Morrell, who did the research for her honors thesis as an Emory senior majoring in physics. She graduated from Emory last year and is now in a post-baccalaureate physics program at Los Alamos National Laboratory in New Mexico.

“One of the wonderful things about our model is that it’s simple,” says Morrell, who will start a Ph.D. program in physics at New York University in the fall. “A brain is really complex. So to distill neural activity to a simple model and find that the model can make predictions that so closely match experimental data is exciting.”

The new model may have applications for studying and predicting a range of dynamical systems that have many components and have varying inputs over time, from the neural activity of a brain to the trading activity of a stock market. Co-author of the paper is Audrey Sederberg, a former post-doctoral fellow in Nemenman’s group, who is now on the faculty at the University of Minnesota. The work is based on a physics concept known as critical phenomena, used to explain phase transitions in physical systems, such as water changing from liquid to a gas.

In liquid form, water molecules are strongly correlated with one another. In a solid, they are locked into a predictable pattern of identical crystals. In the gas phase, however, every molecule moves about on its own.

“At what is known as a critical point for a liquid, you cannot distinguish whether the material is liquid or vapor,” Nemenman explains. “The material is neither perfectly ordered nor disordered. It’s neither totally predictable nor totally unpredictable. A system at this ‘just right’ Goldilocks spot is said to be ‘critical.’”

Very high temperature and pressure generate this critical point for water. And the structure of critical points is the same in many seemingly unrelated systems. For example, water transitioning into a gas and a magnet losing its magnetism as it is heated up are described by the same critical point, so the properties of these two transitions are similar.

In order to actually observe a material at a critical point to study its structure, physicists must tightly control experiments, adjusting the parameters to within an extraordinarily precise range, a process known as fine-tuning.

In recent decades, some scientists began thinking about the human brain as a critical system. Experiments suggest that brain activity lies in a Goldilocks spot — right at a critical transition point between perfect order and disorder.

“The neurons of the brain don’t function just as one big unit, like an army marching together, but they are also not behaving like a crowd of people running in all different directions,” Nemenman says. “The hypothesis is that, as you increase the effective distance between neurons, the correlations between their activity are going to fall, but they will not fall to zero. The entire brain is coupled, acting like a big, interdependent machine, even while individual neurons vary in their activity.”

Researchers began searching for actual signals of critical phenomena within brains. They explored a key question: What fine-tunes the brain to reach criticality?

In 2019, a team at Princeton University recorded neurons in the brain of a mouse as it was running in a virtual maze. They applied theoretical physics tools developed for non-living systems to the neural activity data from the mouse brain. Their results suggested that the neural activity exhibits critical correlations, allowing predictions about how different parts of the brain will correlate with one another over time and over effective distances within the brain.

For the current paper, the Emory researchers wanted to test whether fine-tuning of particular parameters was necessary to observe criticality in the mouse brain experiments, or whether the critical correlations in the brain could arise simply from the process of receiving external stimuli. The idea came from previous work that Nemenman’s group collaborated on, explaining how biological systems can exhibit Zipf’s law — a unique pattern of activity found in disparate systems.

“We previously created a model that showed Zipf’s law in a biological system, and that model did not require fine tuning,” Nemenman says. “Zipf’s law is a particular form of criticality. For this paper, we wanted to make that model a bit more complicated, to see if it could predict the specific critical correlations observed in the mouse experiments.”

The model’s key ingredient is a set of a few hidden variables that modulate how likely individual neurons are to be active.

Morrell wrote the computer code to run simulations and test the model on her home desktop computer. “The biggest challenge was to write the code in a way that would allow it to run fast when simulating a large system with limited computer memory, without a huge server,” she says. The model was able to closely reproduce the experimental results in the simulations. It does not require careful tuning of parameters, generating activity that appears critical by any measure over a wide range of parameter choices.
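
As a rough illustration of what such a model looks like, here is a minimal Python sketch in the same spirit: a handful of slow hidden variables modulate the firing probability of many otherwise independent binary neurons. All names and parameter values are invented for illustration and are not taken from the published code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_latents, n_steps = 1024, 5, 10_000
dt, tau = 0.01, 1.0                      # integration step and latent timescale

coupling = rng.standard_normal((n_neurons, n_latents))  # fixed random couplings
bias = -2.0                              # sets a low baseline firing probability

h = np.zeros(n_latents)                  # the hidden (latent) variables
spikes = np.empty((n_steps, n_neurons), dtype=np.uint8)
for step in range(n_steps):
    # Ornstein-Uhlenbeck update: the latents wander slowly around zero.
    h += -h * dt / tau + np.sqrt(2 * dt / tau) * rng.standard_normal(n_latents)
    # Given the current latent state, each neuron fires independently.
    p = 1.0 / (1.0 + np.exp(-(coupling @ h + bias)))
    spikes[step] = rng.random(n_neurons) < p
```

Because every neuron couples to the same slowly drifting inputs, correlations appear across scales without any parameter being tuned to a special value, which is the intuition behind the paper's result.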

“Our findings suggest that, if you do not view a brain as existing on its own, but you view it as a system receiving stimuli from the external world, then you can have critical behavior with no need for fine tuning,” Nemenman says. “It raises the question of whether something similar could apply to non-living physical systems. It makes us re-think the very notion of criticality, which is a fundamental concept in physics.”

The computer code for the model is now available online, so that anyone with a laptop computer can access it and run the code to simulate a dynamic system with varying inputs over time.

“The model we developed may apply beyond neuroscience, to any system in which widespread coupling to hidden variables is extant,” Nemenman says. “Data from many biological or social systems are likely to appear critical via the same mechanism, without fine-tuning.”

Distribution of coarse-grained variables for k=N/16, N/32, N/64, N/128 modes retained under momentum-space coarse graining, with a Gaussian distribution (gray dashed line) shown for comparison. Note that the momentum-space coarse-grained variables may take negative values. The distribution of coarse-grained variables approaches a non-Gaussian limit as k decreases. Error bars are standard deviations over randomly selected contiguous quarters of the simulation.
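
To make the caption’s procedure concrete, the sketch below applies a simplified momentum-space coarse graining to placeholder data: project the activity onto its top-k covariance eigenmodes and inspect the pooled distribution of the coarse-grained variables. With the Gaussian placeholder used here the distribution stays Gaussian; the point of the figure is that real (or simulated critical) activity instead approaches a non-Gaussian limit as k decreases.

```python
import numpy as np

rng = np.random.default_rng(2)
activity = rng.standard_normal((5000, 512))  # placeholder activity (time x neurons)
x = activity - activity.mean(axis=0)

cov = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenmodes in ascending order

n = x.shape[1]
for k in (n // 16, n // 32, n // 64, n // 128):
    top = eigvecs[:, -k:]                    # keep only the top-k "momentum" modes
    coarse = x @ top                         # coarse-grained variables
    z = (coarse / coarse.std()).ravel()      # pool and normalize
    hist, _ = np.histogram(z, bins=10, range=(-5.0, 5.0), density=True)
    print(f"k={k}:", np.round(hist, 3))
```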

Mouse prefrontal cortex represents learned rules for categorization

by Sandra Reinert, Mark Hübener, Tobias Bonhoeffer, Pieter M. Goltstein in Nature

Categorization is the brain’s tool to organize nearly everything we encounter in our daily lives. Grouping information into categories simplifies our complex world and helps us to react quickly and effectively to new experiences. Scientists at the Max Planck Institute of Neurobiology have now shown that mice, too, categorize surprisingly well. The researchers identified neurons encoding learned categories and thereby demonstrated how abstract information is represented at the neuronal level.

A toddler is looking at a new picture book. Suddenly it points to an illustration and shouts ‘chair’. The kid made the right call, but that does not seem particularly noteworthy to us. We recognize all kinds of chairs as ‘chair’ without any difficulty. For a toddler, however, this is an enormous learning process. It must associate the chair pictured in the book with the chairs it already knows — even though they may have different shapes or colors. How does the child do that?

The answer is categorization, a fundamental element of our thinking. Sandra Reinert, first author of the study, explains: “Every time a child encounters a chair, it stores the experience. Based on similarities between the chairs, the child’s brain will abstract the properties and functions of chairs by forming the category ‘chair’. This allows the child to later quickly link new chairs to the category and the knowledge it contains.”

Our brain categorizes continuously: not only chairs during childhood, but any information at any given age. What advantage does that give us? Pieter Goltstein, senior author of the study, says: “Our brain is trying to find a way to simplify and organize our world. Without categorization, we would not be able to interact with our environment as efficiently as we do.” In other words: we would have to learn, for every new chair we encounter, that we can sit on it. Categorizing sensory input is therefore essential for us, but the underlying processes in the brain are largely unknown.

Mice categorize surprisingly well

Sandra Reinert and Pieter Goltstein, together with Mark Hübener and Tobias Bonhoeffer, group leader and director at the Max Planck Institute of Neurobiology, studied how the brain stores abstract information like learned categories. Since this is difficult to investigate in humans, the scientists tested whether mice categorize in a way similar to us. To do so, they showed mice different pictures of stripe patterns and gave them a sorting rule. One animal group had to sort the pictures into two categories based on the thickness of the stripes, the other group based on their orientation. The mice were able to learn the respective rule and reliably sorted the patterns into the correct category. After this initial training phase, they even assigned patterns of stripes they had not seen before into the correct categories — just like the child with the new book.

And not only that: when the researchers switched the sorting rules, the mice ignored what they had learned before and re-sorted the pictures according to the new rule — something we humans do all the time while learning new things. Therefore, the study demonstrates for the first time to what extent and with which precision mice categorize and thereby approach our capacity for abstraction.

Neurons gradually develop a category representation

With this insight, the researchers were now able to investigate the basis of categorization in the mouse brain. They focused on the prefrontal cortex, a brain region which in humans is involved in complex thought processes. The investigations revealed that certain neurons in this area become active when the animals sort the striped patterns into categories. Interestingly, different groups of neurons reacted selectively to individual categories.

Tobias Bonhoeffer explains: “The discovery of category-selective neurons in the mouse brain was a key point. It allowed us for the first time to observe the activity of such neurons from the beginning to the end of category learning. This showed that the neurons don’t acquire their selectivity immediately, but only gradually develop it during the learning process.”

Category-selective neurons are part of long-term memory

The scientists argue that the category-selective neurons in prefrontal cortex only play a role once the acquired knowledge has been shifted from short-term to long-term memory. There, the cells store the categories as part of semantic memory — the collection of all factual knowledge. In this context, we should keep in mind that the categories we learn are the brain’s way to make our world simpler. However, that also means that those categories are not necessarily ‘right’ or correctly reflect reality.

By investigating category learning in the mouse, the study adds important details to the neuronal basis of abstract thinking and reminds us that complex thoughts are not only reserved for us humans.

a, Schematic of behavioural training setup. b, Schematic of trial structure in the Go/NoGo task. ITI, inter-trial interval; Stim./resp., stimulus presentation/response window. c, Performance (d′) of 11 mice in each training session. Individual traces aligned to criterion (66% of correct trials). The dashed line indicates chance level (d′ = 0). Crosses denote sessions with two-photon imaging (T1–T8). The spread in performance after T2 is due to day-to-day variability rather than mouse-to-mouse variability. TP, time point. d, Fraction of Go choices per stimulus of an example mouse at each time point (of two-photon imaging) until the presentation of all 36 stimuli of rule 1 (generalization; T5). e, Performance (d′) for rule 1 (T5), for experienced (Exp.) compared to novel (Nov.) stimuli. P = 0.50, two-tailed paired-samples t-test (n = 11 mice). Grey lines denote individual mice. Data are mean ± s.e.m. f, Number of training sessions until criterion (66% correct, exemplar stimuli). Bars indicate mean across mice, dots are individual mice (green denotes the orientation rule; orange denotes the spatial frequency rule). Rule 2 is learned significantly faster than rule 1. P = 9.77 × 10⁻⁴, two-tailed Wilcoxon matched-pairs signed-rank (WMPSR) test (n = 11 mice). g, As in d, for rule 2 of the same mouse. h, As in e, for rule 2 (T8). d′ did not differ significantly between novel stimuli and stimuli experienced with rule 2. P = 0.09, two-tailed paired-samples t-test (n = 10 mice). i, Schematics specifying the distance of stimuli to the boundary. j, Psychometric curves showing the fraction of Go choices along the relevant (black) and irrelevant (blue) dimension of rule 1 at T1, T5 and T8. Left: P(relevant, T1) = 0.36, P(irrelevant, T1) = 0.77; middle: ***P(relevant, T5) = 1.73 × 10⁻⁶, P(irrelevant, T5) = 0.09; right: P(relevant, T5) = 0.73, ***P(irrelevant T5; relevant at T8) = 1.73 × 10⁻⁶; two-tailed WMPSR test, Bonferroni-corrected for two comparisons (n = 10 mice). Categorization performance was not affected by the order in which mice were trained on the orientation and spatial frequency rules.
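
The d′ measure used throughout this caption is the standard signal-detection separation between hits (Go responses to Go stimuli) and false alarms (Go responses to NoGo stimuli), expressed in z-score units. A quick sketch:

```python
from scipy.stats import norm

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(dprime(0.8, 0.2))  # ~1.68; guessing (hit rate = false-alarm rate) gives d' = 0
```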

Neural responses to heartbeats detect residual signs of consciousness during resting state in post-comatose patients

by Diego Candia-Rivera, Jitka Annen, Olivia Gosseries, Charlotte Martial, Aurore Thibaut, Steven Laureys, Catherine Tallon-Baudry in The Journal of Neuroscience

A new study conducted jointly by the University of Liège (Belgium) and the École normale supérieure-PSL (France) shows that heart–brain interactions, measured using electroencephalography (EEG), provide a novel diagnostic avenue for patients with disorders of consciousness.

Catherine Tallon-Baudry (ENS, CNRS) introduces: “The scientific community already knew that in healthy participants, the brain’s response to heartbeats is related to perceptual, bodily and self-consciousness. We now show that we can obtain clinically meaningful information if we probe this interaction in patients with disorders of consciousness.” Several important improvements in the diagnosis of these patients have been made in the past decades; yet it remains a major challenge to measure self-consciousness in patients who cannot communicate.

For their study, the researchers included 68 patients with a disorder of consciousness: 55 patients in the minimally conscious state, who showed fluctuating but consistent signs of consciousness yet were unable to communicate, and 13 patients in the unresponsive wakefulness state (previously called the vegetative state), who showed no behavioural signs of awareness. These patients were diagnosed using the Coma Recovery Scale-Revised, a standardized clinical test to assess conscious behaviour.

“As these patients suffered from severe brain injury, they might be unable to show behavioural signs of awareness. Therefore, we also based our diagnosis on the brain’s metabolism as a probe for consciousness. This is a state-of-the-art neuroimaging technique that helps to improve the diagnosis of patients with disorders of consciousness. Although these scans are very informative, they can only be acquired in specialized centers,” says Jitka Annen (GIGA Consciousness, ULiège).

The researchers recorded brain activity during resting state (i.e., without any specific task or stimulation). They selected EEG segments right after a heartbeat and EEG segments at random timepoints (i.e., not time-locked to a heartbeat). They then used machine learning algorithms to classify (or diagnose) patients into the two diagnostic groups.
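
A minimal sketch of that two-step logic (time-locked epoch extraction, then a cross-validated classifier) might look as follows in Python. Everything here is a toy stand-in: the signal is random noise, the heartbeat times are assumed to come from an ECG R-peak detector, and the study’s actual features, labels (minimally conscious vs. unresponsive patients), and algorithms differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 250                                    # EEG sampling rate in Hz (assumed)
win = int(0.5 * fs)                         # 500-ms window after each event

def epochs(signal, events, win):
    """Stack fixed-length segments starting at each event sample."""
    return np.stack([signal[e:e + win] for e in events if e + win <= len(signal)])

rng = np.random.default_rng(3)
eeg = rng.standard_normal(60 * fs)          # placeholder one-minute recording
r_peaks = np.arange(fs, len(eeg) - win, fs) # ~1 Hz heartbeat times (assumed given)
random_t = rng.integers(0, len(eeg) - win, size=len(r_peaks))  # surrogate timepoints

# Features: heartbeat-locked epochs vs. epochs at random timepoints.
X = np.vstack([epochs(eeg, r_peaks, win), epochs(eeg, random_t, win)])
y = np.r_[np.ones(len(r_peaks)), np.zeros(len(random_t))]

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on noise, by construction
```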

Diego Candia-Rivera (ENS) further comments: “EEG segments not locked to heartbeats were informative to predict if a patient was conscious or not, but EEG segments locked to heartbeats were more accurate in doing so. Our results indicate that the heartbeat evoked potential can give us supplementary evidence for the presence of consciousness.”

It is important to note that the heartbeat-evoked responses were more in accordance with the diagnosis based on brain metabolism than with the diagnosis based on behavioural assessment. The heartbeat-evoked response may therefore capture an aspect of self-consciousness that behavioural tools fail to assess.

“The next challenge is to translate our findings to clinical applications so that all patients with disorders of consciousness can benefit from better diagnosis using widely available bedside assessment technologies,” concludes Steven Laureys, head of the GIGA Consciousness research unit and Centre du Cerveau (ULiège, CHU Liège).

Neural encoding of voice pitch and formant structure at birth as revealed by frequency-following responses

by Sonia Arenillas-Alcón, Jordi Costa-Faidella, Teresa Ribas-Prats, María Dolores Gómez-Roig, Carles Escera in Scientific Reports

People’s ability to perceive speech sounds has been studied in depth, especially during the first year of life, but what happens during the first hours after birth? Are babies born with innate abilities to perceive speech sounds, or do neural encoding processes need time to mature?

Researchers from the Institute of Neurosciences of the University of Barcelona (UBNeuro) and the Sant Joan de Déu Research Institute (IRSJD) have created a new methodology to try to answer this basic question on human development.

The results confirm that the neural encoding of voice pitch in newborns is comparable to the abilities of adults after three years of exposure to language. However, there are differences in the perception of the spectral and temporal fine structure of sounds, which consists of the ability to distinguish between vowel sounds such as /o/ and /a/. According to the authors, the neural encoding of this aspect of sound, recorded for the first time in this study, is not yet mature at birth; it needs a certain exposure to language, as well as stimulation and time, to develop.

According to the researchers, knowing the typical level of development of these neural encoding processes at birth will enable “early detection of language impairments, which would provide an early intervention or stimulus to reduce future negative consequences.”

Decoding the spectral and temporal fine structure of sound

In order to distinguish the neural response to speech stimuli in newborns, one of the main challenges was to record, using the baby’s electroencephalogram, a specific brain response: the frequency-following response (FFR). The FFR provides information on the neural encoding of two specific features of sound: fundamental frequency, responsible for the perception of voice pitch (high or low), and the spectral and temporal fine structure. The precise encoding of both features is, according to the study, “fundamental for the proper perception of speech, a requirement in future language acquisition.”

To date, the available tools for studying this neural encoding enabled researchers to determine whether a newborn baby was able to encode inflections in voice pitch, but not whether it encoded the spectral and temporal fine structure. “Inflections in the voice pitch contour are very important, especially in tonal languages like Mandarin, and for perceiving the prosody of speech, which conveys the emotional content of what is said. However, the spectral and temporal fine structure of sound is the most relevant aspect for language acquisition in non-tonal languages like ours, and the few existing studies on the issue do not tell us how precisely a newborn’s brain encodes it,” note the authors.

The main cause of this lack of studies is a technical limitation of the sounds used in these tests. The authors therefore developed a new stimulus (/oa/) whose internal structure (a rising voice pitch and two different vowels) allows them to evaluate the precision of the neural encoding of both features of the sound simultaneously using FFR analysis.

A test adapted to the limitations of the hospital environment

One notable aspect of the study is that the stimulus and the methodology are compatible with the typical limitations of the hospital environment in which the tests are carried out. “Time is essential in FFR research with newborns. On the one hand, recording-time limitations determine which stimuli can be used. On the other, the actual conditions of newborns in hospitals require frequent and continuous access to the baby and the mother, so that they receive the required care and undergo the evaluations and routine tests that rule out health problems,” the authors add. Considering these restrictions, the responses of the 34 newborns that took part in the study were recorded in sessions lasting between twenty and thirty minutes, almost half the time used in typical studies on speech sound discrimination.

A potential biomarker of learning problems

After this study, the objective of the researchers is to characterize the development of neural encoding of the spectral and temporal fine structure of speech sounds over time. To do so, they are currently recording the frequency-following response in the babies that took part in the present study, who are now 21 months old. “Given that the first two years of life are a critical period of stimulation for language acquisition, this longitudinal evaluation will enable us to build a global view of how these encoding skills mature over the first months of life,” note the researchers.

The aim is to confirm whether alterations observed at birth in the neural encoding of sounds are followed by observable deficits in infant language development. If that happens, “that neural response could certainly be considered a useful biomarker for the early detection of future literacy difficulties, since alterations detected in newborns could predict the appearance of delays in language development. This is the objective of the ONA project, funded by the Spanish Ministry of Science and Innovation,” they conclude.

Temporal representation of the stimulus (a), FFR_ENV (b), and FFR_TFS (c). (a) Time waveform (top) and spectrogram of the /oa/ stimulus with a schematic overlay of the formant structure trajectory (targeted F0 and F1 in solid lines; non-analyzed F2 depicted in a dotted line). (b) Grand-averaged time-domain waveform of the FFR_ENV from newborns (top, red) and adults (bottom, blue), obtained by averaging the neural responses to the two stimulus polarities. (c) Grand-averaged time-domain waveform of the FFR_TFS from newborns (top, red) and adults (bottom, blue), obtained by subtracting the neural responses to the two stimulus polarities.
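
The add/subtract trick in panels (b) and (c) is simple enough to show directly. In this sketch, resp_pos and resp_neg stand for the averaged neural responses to the original and polarity-inverted stimulus; the arrays are placeholders and the sampling rate is an assumption.

```python
import numpy as np

fs = 16000                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(4)
resp_pos = rng.standard_normal(2048)    # averaged response, original polarity
resp_neg = rng.standard_normal(2048)    # averaged response, inverted polarity

ffr_env = (resp_pos + resp_neg) / 2.0   # envelope-following component (FFR_ENV)
ffr_tfs = (resp_pos - resp_neg) / 2.0   # temporal-fine-structure component (FFR_TFS)

# Voice pitch (F0) is typically read off as a peak in the FFR_ENV spectrum.
freqs = np.fft.rfftfreq(ffr_env.size, d=1.0 / fs)
env_spectrum = np.abs(np.fft.rfft(ffr_env))
```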

Evolution of genetic networks for human creativity

by I. Zwir, C. Del-Val, M. Hintsanen, K. M. Cloninger, R. Romero-Zaliz, A. Mesa, J. Arnedo, R. Salas, G. F. Poblete, E. Raitoharju, O. Raitakari, L. Keltikangas-Järvinen, G. A. de Erausquin, I. Tattersall, T. Lehtimäki, C. R. Cloninger in Molecular Psychiatry

A new study is the first ever to identify the genes for creativity in Homo sapiens that distinguish modern humans from chimpanzees and Neanderthals. The research identified 267 genes that are found only in modern humans and likely play an important role in the evolution of the behavioral characteristics that set apart Homo sapiens, including creativity, self-awareness, cooperativeness, and healthy longevity.

“One of the most fundamental questions about human nature is what sparked the explosive emergence of creativity in modern humans in the period just before and after their widespread dispersal from Africa and the related extinction of Neanderthals and other human relatives,” said study co-author Ian Tattersall, curator emeritus in the American Museum of Natural History’s Division of Anthropology. “Major controversies persist about the basis for human creativity in art and science, as well as about potential differences in cognition, language, and personality that distinguish modern humans from extinct hominids. This new study is the result of a truly pathbreaking use of genomic methodologies to enlighten us about the mechanisms underpinning our uniqueness.”

Modern humans demonstrate remarkable creativity compared to their closest living relatives, the great apes (chimpanzees, gorillas, and orangutans and their immediate ancestors), including innovativeness, flexibility, depth of planning, and related cognitive abilities for symbolism and self-awareness that also enable spontaneous generation of narrative art and language. But the genetic basis for the emergence of creativity in modern humans remains a mystery, even after the recovery of full-genome data for both chimpanzees and our extinct close relatives the Neanderthals.

“It has been difficult to identify the genes that led to the emergence of human creativity before now because of the large number of changes in the human genome after it diverged from the common ancestor of humans and chimpanzees around 10 million years ago, as well as uncertainty about the functions of those changes,” said Robert Cloninger, a psychiatrist and geneticist at Washington University in St. Louis, and the lead author of the study. “Therefore, we began our research by first identifying the way the genes that influence modern human personality are organized into coordinated systems of learning that have allowed us to adapt flexibly and creatively to changing life conditions.”

The team led by Cloninger had previously identified 972 genes that regulate gene expression for human personality, which comprises three nearly separate networks for learning and memory. One, for regulating emotional reactivity — emotional drives, habit learning, social attachment, conflict resolution — emerged in monkeys and apes about 40 million years ago. The second, which regulates intentional self-control — self-directedness and cooperation for mutual benefit — emerged a little less than 2 million years ago. A third one, for creative self-awareness, emerged about 100,000 years ago.

In the latest study, the researchers discovered that 267 genes from this larger group are found only in modern humans and not in chimpanzees or Neanderthals. These uniquely human genes code for the self-awareness brain network and also regulate processes that allow Homo sapiens to be creative in narrative art and science, to be more prosocial, and to live longer lives through greater resistance to aging, injury, and illness than the now-extinct hominids they replaced.

Genes regulating emotional reactivity were nearly the same in humans, Neanderthals, and chimps. And Neanderthals were about midway between chimps and Homo sapiens in their genes for self-control and self-awareness.

“We found that the adaptability and well-being of Neanderthals was about 60 to 70 percent of that of Homo sapiens, which means that the difference in fitness between them was large,” Cloninger said. “After the more creative, sociable, and physically resilient Homo sapiens migrated out of Africa between 65,000 and 55,000 years ago, they displaced Neanderthals and other hominids, who all became extinct soon after 40,000 years ago.”

The genes that distinguish modern humans from Neanderthals and chimpanzees are nearly all regulatory genes made of RNA, not protein-coding genes made of DNA.

“The protein-coding genes of Homo sapiens, Neanderthals, and chimps are nearly all the same, and what distinguishes these species is the regulation of the expression of their protein-coding genes by the genes found only in humans,” said co-author Igor Zwir, a computer scientist at Washington University School of Medicine and the University of Granada. “We found that the regulatory genes unique to modern humans were constituents of clusters together with particular protein-coding genes that are overexpressed in the human brain network for self-awareness. The self-awareness network is essential to the physical, mental, and social well-being of humans because it provides the insight to regulate our habits in accord with our goals and values.”

The researchers determined that the genes unique to modern humans were selected because of advantages tied to greater creativity, prosocial behavior, and healthy longevity. Living longer, healthier lives and being more prosocial and altruistic allowed Homo sapiens to support their children, grandchildren, and others in their communities throughout their lives in diverse and sometimes harsh conditions. And being more innovative than other hominids allowed humans to adapt more flexibly to unpredictable climatic fluctuations.

“In the bigger picture, this study helps us understand how we can effectively respond to the challenges that modern humans currently face,” Tattersall said. “Our behavior is not fixed or determined by our genes. Indeed, human creativity, prosociality, and healthy longevity emerged in the context of the need to adjust rapidly to harsh and diverse conditions and to communicate in large social groups.”

Added co-author Coral del Val of the University of Granada, “Now, we face similar challenges to which we must also respond creatively, as we did originally. Unfortunately, when we are exposed to conditions of fear, conflict, inequity, abuse or neglect, our self-awareness is impaired, which diminishes our ability to use our potential for creativity and to achieve well-being. Learning more about the regulatory genes unique to modern humans may help us to promote human well-being as we face these new environmental and social challenges.”

Comparative analysis of the distinct types of genes belonging to the Emotional reactivity, Self-control, and Self-awareness gene networks present in (A) chimpanzees (Pan troglodytes), (B) Neanderthals (Homo neanderthalensis), and (C) modern humans (Homo sapiens).

Conserved genetic signatures parcellate cardinal spinal neuron classes into local and projection subsets

by Peter J. Osseward, Neal D. Amin, Jeffrey D. Moore, Benjamin A. Temple, Bianca K. Barriga, Lukas C. Bachmann, Fernando Beltran, Miriam Gullo, Robert C. Clark, Shawn P. Driscoll, Samuel L. Pfaff, Marito Hayashi in Science

Spinal cord nerve cells branching through the body resemble trees with limbs fanning out in every direction. But this image can also be used to tell the story of how these neurons, their jobs becoming more specialized over time, arose through developmental and evolutionary history. Salk researchers have, for the first time, traced the development of spinal cord neurons using genetic signatures and revealed how different subtypes of the cells may have evolved and ultimately function to regulate our body movements.

The findings offer researchers new ways of classifying and tagging subsets of spinal cord cells for further study, using genetic markers that differentiate branches of the cells’ family tree.

“A study like this provides the first molecular handles for scientists to go in and study the function of spinal cord neurons in a much more precise way than they ever have before,” says senior author of the study Samuel Pfaff, Salk Professor and the Benjamin H. Lewis Chair. “This also has implications for treating spinal cord injuries.”

Spinal neurons are responsible for transmitting messages between the spinal cord and the rest of the body. Researchers studying spinal neurons have typically classified the cells into “cardinal classes,” which describe where in the spinal cord each type of neuron first appears during fetal development. But, in an adult, neurons within any one cardinal class have varied functions and molecular characteristics. Studying small subsets of these cells to tease apart their diversity has been difficult. However, understanding these subset distinctions is crucial to helping researchers understand how spinal cord neurons control movements and what goes awry in neurodegenerative diseases or spinal cord injury.

“It’s been known for a long time that the cardinal classes, as useful as they are, are incomplete in describing the diversity of neurons in the spinal cord,” says Peter Osseward, a graduate student in the Pfaff lab and co-first author of the new paper, along with former graduate student Marito Hayashi, now a postdoctoral fellow at Harvard University.

Pfaff, Osseward and Hayashi turned to single-cell RNA sequencing technologies to analyze differences in what genes were being activated in almost 7,000 different spinal neurons from mice. They used this data to group cells into closely related clusters in the same way that scientists might group related organisms into a family tree.
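
As a loose illustration of that grouping step, the sketch below clusters a placeholder cells-by-genes count matrix hierarchically, yielding the kind of “family tree” the authors describe. A real single-cell pipeline involves normalization, feature selection, and dimensionality reduction (e.g., with Scanpy); none of the numbers or choices here come from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(5)
counts = rng.poisson(1.0, size=(300, 2000))  # placeholder cells x genes matrix
logx = np.log1p(counts)                      # variance-stabilizing transform

tree = linkage(logx, method="ward")          # hierarchical "family tree" of cells
clusters = fcluster(tree, t=10, criterion="maxclust")  # cut into 10 groups
print(np.bincount(clusters)[1:])             # cells per cluster
```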

The first major gene expression pattern they saw divided spinal neurons into two branches: sensory-related neurons (which carry information about the environment through the spinal cord) and motor-related neurons (which carry motor commands through the spinal cord). This suggests that, in an ancient organism, one of the first steps in spinal cord evolution may have been a division of labor of spinal neurons into motor versus sensory roles, Pfaff says.

When the team analyzed the next branches in the family tree, they found that the sensory-related neurons then split into excitatory and inhibitory neurons — a division that describes how the neuron sends information. But when the researchers looked at motor-related neurons, they found a more surprising division: the cells clumped into two distinct groups based on a new genetic marker. When the team stained cells belonging to each group in the spinal cord, it became clear that the markers differentiated neurons based on whether they had long-range or short-range connections in the body. Further experiments revealed that the genetic patterns specific to long-range and short-range properties were common across all the cardinal classes tested.

“The assumption in the field was that the genetic rules of specifying long-range versus short-range neurons would be specific to each cardinal class,” say Osseward and Hayashi. “So it was really interesting to see that it actually transcended cardinal class.”

The observation was more than just interesting — it turned out to be useful as well. Previously, it might have taken many different genetic tags to narrow in on one particular neuron type that a researcher wanted to study. Using this many markers is technically challenging and largely prevented researchers from studying just one subtype of spinal cord neuron at a time.

With the new rules, just two tags — a previously known marker for cardinal class and the newly discovered genetic marker for long-range or short-range properties — can be used to flag very specific populations of neurons. This is useful, for instance, in studying which groups of neurons are affected by a spinal cord injury or neurodegenerative disease and, eventually, how to regrow those particular cells.

The evolutionary origin of the spinal neuron family tree studied in the new paper is likely very ancient because the genetic markers they discovered are conserved across many species, the researchers say. So, although they didn’t study spinal neurons from animals other than mice, they predict that the same genetic patterns would be seen in most living animals with spinal cords.

“This is primordial stuff, relevant for everything from amphibians to humans,” says Pfaff. “And in the context of evolution, these genetic patterns tell us what kind of neurons might have been found in some of the very earliest organisms.”

Chaperone-mediated autophagy prevents collapse of the neuronal metastable proteome

by Mathieu Bourdenx, Adrián Martín-Segura, Aurora Scrivo, Jose A. Rodriguez-Navarro, Susmita Kaushik, Inmaculada Tasset, Antonio Diaz, Nadia J. Storm, Qisheng Xin, Yves R. Juste, Erica Stevenson, Enrique Luengo, Cristina C. Clement, Se Joon Choi, Nevan J. Krogan, Eugene V. Mosharov, Laura Santambrogio, Fiona Grueninger, Ludovic Collin, Danielle L. Swaney, David Sulzer, Evripidis Gavathiotis, Ana Maria Cuervo in Cell

Researchers at Albert Einstein College of Medicine have designed an experimental drug that reversed key symptoms of Alzheimer’s disease in mice. The drug works by reinvigorating a cellular cleaning mechanism that gets rid of unwanted proteins by digesting and recycling them.

“Discoveries in mice don’t always translate to humans, especially in Alzheimer’s disease,” said co-study leader Ana Maria Cuervo, M.D., Ph.D., the Robert and Renée Belfer Chair for the Study of Neurodegenerative Diseases, professor of developmental and molecular biology, and co-director of the Institute for Aging Research at Einstein. “But we were encouraged to find in our study that the drop-off in cellular cleaning that contributes to Alzheimer’s in mice also occurs in people with the disease, suggesting that our drug may also work in humans.” In the 1990s, Dr. Cuervo discovered the existence of this cell-cleaning process, known as chaperone-mediated autophagy (CMA) and has published 200 papers on its role in health and disease.

CMA becomes less efficient as people age, increasing the risk that unwanted proteins will accumulate into insoluble clumps that damage cells. In fact, Alzheimer’s and all other neurodegenerative diseases are characterized by the presence of toxic protein aggregates in patients’ brains. The Cell paper reveals a dynamic interplay between CMA and Alzheimer’s disease, with loss of CMA in neurons contributing to Alzheimer’s and vice versa. The findings suggest that drugs for revving up CMA may offer hope for treating neurodegenerative diseases.

Establishing CMA’s Link to Alzheimer’s

Dr. Cuervo’s team first looked at whether impaired CMA contributes to Alzheimer’s. To do so, they genetically engineered a mouse to have excitatory brain neurons that lacked CMA. The absence of CMA in one type of brain cell was enough to cause short-term memory loss, impaired walking, and other problems often found in rodent models of Alzheimer’s disease. In addition, the absence of CMA profoundly disrupted proteostasis — the cells’ ability to regulate the proteins they contain. Normally soluble proteins had shifted to being insoluble and at risk for clumping into toxic aggregates.

Dr. Cuervo suspected the converse was also true: that early Alzheimer’s impairs CMA. So she and her colleagues studied a mouse model of early Alzheimer’s in which brain neurons were made to produce defective copies of the protein tau. Evidence indicates that abnormal copies of tau clump together to form neurofibrillary tangles that contribute to Alzheimer’s. The research team focused on CMA activity within neurons of the hippocampus — the brain region crucial for memory and learning. They found that CMA activity in those neurons was significantly reduced compared to control animals.

What about early Alzheimer’s in people — does it block CMA too? To find out, the researchers looked at single-cell RNA-sequencing data from neurons obtained postmortem from the brains of Alzheimer’s patients and from a comparison group of healthy individuals. The sequencing data revealed CMA’s activity level in patients’ brain tissue. Sure enough, CMA activity was somewhat inhibited in people who had been in the early stages of Alzheimer’s, followed by much greater CMA inhibition in the brains of people with advanced Alzheimer’s.

“By the time people reach the age of 70 or 80, CMA activity has usually decreased by about 30% compared to when they were younger,” said Dr. Cuervo. “Most peoples’ brains can compensate for this decline. But if you add neurodegenerative disease to the mix, the effect on the normal protein makeup of brain neurons can be devastating. Our study shows that CMA deficiency interacts synergistically with Alzheimer’s pathology to greatly accelerate disease progression.”

A New Drug Cleans Neurons and Reverses Symptoms

In an encouraging finding, Dr. Cuervo and her team developed a novel drug that shows potential for treating Alzheimer’s. “We know that CMA is capable of digesting defective tau and other proteins,” said Dr. Cuervo. “But the sheer amount of defective protein in Alzheimer’s and other neurodegenerative diseases overwhelms CMA and essentially cripples it. Our drug revitalizes CMA efficiency by boosting levels of a key CMA component.”

In CMA, proteins called chaperones bind to damaged or defective proteins in cells of the body. The chaperones ferry their cargo to the cells’ lysosomes — membrane-bound organelles filled with enzymes, which digest and recycle waste material. To successfully get their cargo into lysosomes, however, chaperones must first “dock” the material onto a protein receptor called LAMP2A that sprouts from the membranes of lysosomes. The more LAMP2A receptors on lysosomes, the greater the level of CMA activity possible. The new drug, called CA, works by increasing the number of those LAMP2A receptors.

“You produce the same amount of LAMP2A receptors throughout life,” said Dr. Cuervo. “But those receptors deteriorate more quickly as you age, so older people tend to have less of them available for delivering unwanted proteins into lysosomes. CA restores LAMP2A to youthful levels, enabling CMA to get rid of tau and other defective proteins so they can’t form those toxic protein clumps.” (Also this month, Dr. Cuervo’s team reported in Nature Communications that, for the first time, they had isolated lysosomes from the brains of Alzheimer’s disease patients and observed that reduction in the number of LAMP2 receptors causes loss of CMA in humans, just as it does in animal models of Alzheimer’s.)

The researchers tested CA in two different mouse models of Alzheimer’s disease. In both disease mouse models, oral doses of CA administered over 4 to 6 months led to improvements in memory, depression, and anxiety that made the treated animals resemble or closely resemble healthy, control mice. Walking ability significantly improved in the animal model in which it was a problem. And in brain neurons of both animal models, the drug significantly reduced levels of tau protein and protein clumps compared with untreated animals.

“Importantly, animals in both models were already showing symptoms of disease, and their neurons were clogged with toxic proteins before the drugs were administered,” said Dr. Cuervo. “This means that the drug may help preserve neuron function even in the later stages of disease. We were also very excited that the drug significantly reduced gliosis — the inflammation and scarring of cells surrounding brain neurons. Gliosis is associated with toxic proteins and is known to play a major role in perpetuating and worsening neurodegenerative diseases.”

Capturing the Effects of Domestication on Vocal Learning Complexity

by Thomas O’Rourke, Pedro Tiago Martins, Rie Asano, Ryosuke O. Tachibana, Kazuo Okanoya, Cedric Boeckx in Trends in Cognitive Sciences

Language is one of the most notable abilities humans have. It allows us to express complex meanings and transmit knowledge from generation to generation. How this ability came to develop is an important question in human biology, and researchers from the universities of Barcelona, Cologne and Tokyo address it in a recent article.

The article includes contributions from Thomas O’Rourke and Pedro Tiago Martins, experts at the Institute of Complex Systems of the UB (UBICS), led by Cedric Boeckx, ICREA research professor at the Faculty of Philology and Communication. According to the new study, the evolution of language may be related to another notable feature of Homo sapiens: tolerance and human cooperation.

The study is based on evidence from fields as diverse as archaeology, evolutionary genomics, neurobiology, animal behaviour, and clinical research on neuropsychiatric disorders. Taken together, this evidence suggests that the reduction in reactive aggression resulting from the evolutionary self-domestication of our species could have led to an increase in the complexity of speech. According to the authors, this development would be caused by a lower impact of stress hormones, neurotransmitters that are activated in aggressive situations, on the brain networks crucial for learning to speak. To demonstrate this interaction, the researchers analysed the genomic, neurobiological and song differences between the domesticated Bengalese finch and its closest wild relative.

Looking for keys to the evolution of human language in birdsong

A central aspect of the authors’ approach to the evolution of language is that the features that make it special can be elucidated by comparing it to other animals’ communication systems. “For instance, see how kids learn to talk and how birds learn to sing: unlike most animal communication systems, young birds’ song and children’s language develop properly only in the presence of adult tutors. Without vocal input from adults, the great range of sounds available to humans and songbirds does not develop properly,” the researchers note.

Moreover, although speech and birdsong evolved independently, the authors suggest both communication systems are associated with similar patterns of brain connectivity and are negatively affected by stress: “Birds that are regularly under stress during their development sing a more stereotyped song as adults, while children with chronic stress problems are more susceptible to developing repetitive tics, including vocalizations in the case of Tourette syndrome.”

In this context, Kazuo Okanoya, one of the authors of the article, has been studying the Bengalese finch (Lonchura striata domestica) for years. This domesticated songbird sings a more varied and complex song than its wild ancestor. The study shows the same pattern in other domesticated species: the Bengalese finch has a weakened stress response and is less aggressive than its wild relative. In fact, according to the authors, there is growing evidence that multiple domesticated species have altered vocal repertoires compared to their wild counterparts.

The impact of domestication on stress and aggression

For the researchers, these differences between domestic and wild animals are “the central pieces in the puzzle of the evolution of human language,” since our species shares with other domestic animals particular physical changes relative to its closest wild relatives. Modern humans have a flat face, a round skull, and reduced tooth size compared with our extinct archaic relatives, the Neanderthals. Domestic animals show comparable changes in facial and cranial bone structure, often accompanied by other traits such as skin depigmentation, floppy ears, and curly tails. Finally, modern humans show marked reductions in measures of the stress response and of reactive aggression compared with other living apes. These similarities are not only physical: according to the researchers, the genomes of modern humans and of multiple domesticated species show changes concentrated in the same genes.

In particular, a disproportionate number of these genes negatively regulate the activity of the glutamate neurotransmitter system, which drives the brain’s response to stressful experiences. The authors note that glutamate, the brain’s main excitatory neurotransmitter, interacts with dopamine in the learning of birdsong, in aggressive behaviour, and in the repetitive vocal tics of Tourette syndrome.

Alterations in the stress hormone balance in the striatum

In the study, the authors show how glutamate activity tends to promote the release of dopamine in the striatum, an evolutionarily old brain structure important for reward-based learning and motor activity. “In adult songbirds, increased dopamine release in this striatal area is correlated with the learning of a more restricted song, which replaces the experimental vocalizations typical of young birds,” the authors note. “In humans and other mammals, dopamine release in the dorsal striatum promotes restrictive and repetitive motor activities, such as vocalizations, while more experimental and exploratory behaviours are supported by dopaminergic activity in the ventral striatum.” According to the study, many of the genes involved in glutamatergic activation that changed in recent human evolution encode receptors whose signalling reduces the excitation of the dorsal striatum, that is, they reduce dopamine release in this area. Meanwhile, these receptors tend not to reduce, and may even promote, dopamine release in ventral striatal regions.

The authors argue that these alterations in the balance of stress hormones in the striatum were an important step in the evolution of vocal speech in the lineage of modern humans. “These results suggest the glutamate system and its interactions with dopamine are involved in the process by which humans acquired their varied and flexible ability to speak. The natural selection against reactive aggression that took place in our species would therefore have altered the interaction of these neurotransmitters, promoting the communicative skills of our species. These findings open new avenues for comparative biological research on the human capacity for speech,” the researchers conclude.

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Nature Neuroscience

Science Daily

Technology Networks

Frontiers

Cell
