NS/ ‘Neuroprosthesis’ restores words to man with paralysis

Neuroscience biweekly vol. 37, 7th July — 21st July


Neuroscience market

The global neuroscience market was valued at USD 28.4 billion in 2016 and is expected to reach USD 38.9 billion by 2027.

Latest news and research

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria

by David A. Moses, Sean L. Metzger, Jessie R. Liu, Gopala K. Anumanchipalli, Joseph G. Makin, Pengfei F. Sun, Josh Chartier, Maximilian E. Dougherty, Patricia M. Liu, Gary M. Abrams, Adelyn Tu-Chan, Karunesh Ganguly, Edward F. Chang in New England Journal of Medicine

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own.

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Work in the field of communication neuroprosthetics has previously focused on restoring communication through spelling-based approaches that type out letters one by one. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and one of the lead authors of the new study, developed new methods for real-time decoding of those patterns and statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary — which includes words such as “water,” “family,” and “good” — was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions and several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

To translate the patterns of recorded neural activity into specific intended words, the other two lead authors of the study, Sean Metzger, MS, and Jessie Liu, BS, both bioengineering doctoral students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very good,” and “No, I am not thirsty.”

The team found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to the one used by consumer texting and speech recognition software.
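The article does not spell out the decoding pipeline, but the idea of combining a neural classifier's word probabilities with a language-model "auto-correct" can be sketched with a standard Viterbi search over a word sequence. Everything below is hypothetical for illustration: a toy 4-word vocabulary (the study used 50 words), made-up classifier outputs, and a made-up bigram language model.

```python
import math

# Hypothetical 4-word vocabulary (the study used 50 words).
vocab = ["i", "am", "very", "good"]

# Hypothetical per-attempt classifier outputs: P(word | neural activity).
# The second attempt is deliberately ambiguous between "am" and "very".
emissions = [
    {"i": 0.6, "am": 0.2, "very": 0.1, "good": 0.1},
    {"i": 0.1, "am": 0.4, "very": 0.4, "good": 0.1},
    {"i": 0.1, "am": 0.1, "very": 0.6, "good": 0.2},
    {"i": 0.1, "am": 0.1, "very": 0.2, "good": 0.6},
]

# Hypothetical bigram language model: P(next word | previous word).
bigram = {
    ("<s>", "i"): 0.9,
    ("i", "am"): 0.8, ("i", "very"): 0.05,
    ("am", "very"): 0.7, ("am", "good"): 0.1,
    ("very", "good"): 0.8,
}

def viterbi(emissions, vocab, bigram, floor=1e-6):
    """Most likely word sequence under classifier x language-model scores."""
    # score[w] = best log-probability of any path ending in word w
    score = {w: math.log(bigram.get(("<s>", w), floor)) +
                math.log(emissions[0][w]) for w in vocab}
    back = [{}]
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for w in vocab:
            prev, s = max(
                ((p, score[p] + math.log(bigram.get((p, w), floor)))
                 for p in vocab), key=lambda t: t[1])
            new_score[w] = s + math.log(em[w])
            ptr[w] = prev
        score, back = new_score, back + [ptr]
    # Trace the best path backwards through the stored pointers.
    w = max(score, key=score.get)
    path = [w]
    for ptr in reversed(back[1:]):
        w = ptr[w]
        path.append(w)
    return path[::-1]

print(viterbi(emissions, vocab, bigram))  # → ['i', 'am', 'very', 'good']
```

Note how the language model resolves the ambiguous second attempt: "am" and "very" are equally likely under the classifier alone, but "i am" is far more probable than "i very", which is exactly the auto-correct behavior described above.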

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

Adaptive and multifunctional hydrogel hybrid probes for long-term sensing and modulation of neural activity

by Seongjun Park, Hyunwoo Yuk, Ruike Zhao, Yeong Shin Yim, Eyob W. Woldeghebriel, Jeewoo Kang, Andres Canales, Yoel Fink, Gloria B. Choi, Xuanhe Zhao, Polina Anikeeva in Nature Communications

A KAIST research team and collaborators revealed a newly developed hydrogel-based flexible brain-machine interface. To study the structure of the brain or to identify and treat neurological diseases, it is crucial to develop an interface that can stimulate the brain and detect its signals in real-time. However, existing neural interfaces are mechanically and chemically different from real brain tissue. This causes a foreign body response and forms an insulating layer (glial scar) around the interface, which shortens its lifespan.

To solve this problem, the research team of Professor Seongjun Park developed a ‘brain-mimicking interface’ by inserting a custom-made multifunctional fiber bundle into the hydrogel body. The device is composed not only of an optical fiber that controls specific nerve cells with light in order to perform optogenetic procedures, but it also has an electrode bundle to read brain signals and a microfluidic channel to deliver drugs to the brain.

The interface is easy to insert into the body when dry, since the hydrogel becomes rigid. Once in the body, however, the hydrogel quickly absorbs body fluids and takes on the properties of the surrounding tissue, thereby minimizing the foreign body response.

The research team applied the device on animal models, and showed that it was possible to detect neural signals for up to six months, which is far beyond what had been previously recorded. It was also possible to conduct long-term optogenetic and behavioral experiments on freely moving mice with a significant reduction in foreign body responses such as glial and immunological activation compared to existing devices.

“This research is significant in that it was the first to utilize a hydrogel as part of a multifunctional neural interface probe, which increased its lifespan dramatically,” said Professor Park. “With our discovery, we look forward to advancements in research on neurological disorders like Alzheimer’s or Parkinson’s disease that require long-term observation.”

a A conceptual illustration of the hydrogel hybrid probe design and its application to minimize impact on brain tissue. b, c Fabrication of the hydrogel hybrid probe including thermal drawing of the functional fiber units (b), and one-step direct polymerization of the hydrogel matrix within the fiber assembly (c). Scale bars: 50 µm. d A photograph of the optical waveguide, micro-electrode array, and microfluidic channel fibers. Scale bar: 5 cm. e A photograph of a hydrogel hybrid probe after integration of the hydrogel matrix with a multifunctional fiber assembly. Hydrogel is dyed with Rhodamine B for visual clarity. Scale bar: 1 cm. f, g Microscope images of the hydrogel hybrid probe with a fully swollen (f) and a dehydrated (g) hydrogel matrix. Scale bars: 100 µm. h Optical transmission losses of the PC/COC waveguides within the hydrogel hybrid probes at 0°, 90°, and 180° bending deformation. i Tip impedance of the electrodes within the fiber arrays in the hydrogel hybrid probes at 0° and 90° bending (paired two-sided Student’s t-test: p = 0.5232). j Return rate of the microfluidic channel fibers within the hydrogel hybrid probes at 0° and 90° bending deformation. Values in h–j represent the mean and the standard deviation (n = 6).

Restoring Tactile Sensation Using a Triboelectric Nanogenerator

by Iftach Shlomy, Shay Divald, Keshet Tadmor, Yael Leichtmann-Bardoogo, Amir Arami, Ben M. Maoz in ACS Nano

Tel Aviv University’s new and groundbreaking technology inspires hope among people who have lost their sense of touch in the nerves of a limb following amputation or injury. The technology involves a tiny sensor that is implanted in the nerve of the injured limb, for example in the finger, and is connected directly to a healthy nerve. Each time the limb touches an object, the sensor is activated and conducts an electric current to the functioning nerve, which recreates the feeling of touch. The researchers emphasize that this is a tested, safe technology suited to the human body that could be implanted anywhere inside it once clinical trials are completed.

The researchers say that this unique project began with a meeting between the two Tel Aviv University colleagues — biomedical engineer Dr. Maoz and surgeon Dr. Arami. “We were talking about the challenges we face in our work,” says Dr. Maoz, “and Dr. Arami shared with me the difficulty he experiences in treating people who have lost tactile sensation in one organ or another as a result of injury. It should be understood that this loss of sensation can result from a very wide range of injuries, from minor wounds — like someone chopping a salad and accidentally cutting himself with the knife — to very serious injuries. Even if the wound can be healed and the injured nerve can be sutured, in many cases the sense of touch remains damaged. We decided to tackle this challenge together, and find a solution that will restore tactile sensation to those who have lost it.”

In recent years, the field of neural prostheses has made promising developments to improve the lives of those who have lost sensation in their limbs by implanting sensors in place of the damaged nerves. But the existing technology has a number of significant drawbacks, such as complex manufacturing and use, as well as the need for an external power source, such as a battery. Now, the researchers at Tel Aviv University have used state-of-the-art technology called a triboelectric nanogenerator (TENG) to engineer and test on animal models a tiny sensor that restores tactile sensation via an electric current that comes directly from a healthy nerve and doesn’t require a complex implantation process or charging.

The researchers developed a sensor that can be implanted on a damaged nerve under the tip of the finger; the sensor connects to another nerve that functions properly and restores some of the tactile sensations to the finger. This unique development does not require an external power source such as electricity or batteries. The researchers explain that the sensor actually works on frictional force: whenever the device senses friction, it charges itself.

The device consists of two tiny plates, each less than half a centimeter by half a centimeter in size. When these plates come into contact with each other, they release an electric charge that is transmitted to the undamaged nerve. When the injured finger touches something, the device produces a voltage corresponding to the pressure applied: a weak voltage for a light touch and a strong voltage for a firm one, just like in a normal sense of touch.

The researchers explain that the device can be implanted anywhere in the body where tactile sensation needs to be restored, and that it actually bypasses the damaged sensory organs. Moreover, the device is made from biocompatible material that is safe for use in the human body, it does not require maintenance, the implantation is simple, and the device itself is not externally visible.

According to Dr. Maoz, after testing the new sensor in the lab (with more than half a million finger taps using the device), the researchers implanted it in the feet of the animal models. The animals walked normally, without having experienced any damage to their motor nerves, and the tests showed that the sensor allowed them to respond to sensory stimuli. “We tested our device on animal models, and the results were very encouraging,” concludes Dr. Maoz. “Next, we want to test the implant on larger models, and at a later stage implant our sensors in the fingers of people who have lost the ability to sense touch. Restoring this ability can significantly improve people’s functioning and quality of life, and more importantly, protect them from danger. People lacking tactile sensation cannot feel if their finger is being crushed, burned or frozen.”

Reward biases spontaneous neural reactivation during sleep

by Virginie Sterpenich, Mojca K. M. van Schie, Maximilien Catsiyannis, Avinash Ramyead, Stephen Perrig, Hee-Deok Yang, Dimitri Van De Ville, Sophie Schwartz in Nature Communications

We sleep on average one-third of our time. But what does the brain do during these long hours? Using an artificial intelligence approach capable of decoding brain activity during sleep, scientists at the University of Geneva (UNIGE), Switzerland, were able to glimpse what we think about when we are asleep. By combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), the Geneva team provides unprecedented evidence that the work of sorting out the thousands of pieces of information processed during the day takes place during deep sleep. Indeed, at this time, the brain, which no longer receives external stimuli, can evaluate all of these memories in order to retain only the most useful ones. To do so, it establishes an internal dialogue between its different regions. Moreover, associating a reward with specific information encourages the brain to memorise it in the long term. These results open, for the first time, a window onto the human mind in sleep.

In the absence of tools capable of translating brain activity, the content of our sleeping thoughts remains inaccessible. We however do know that sleep plays a major role in memory consolidation and emotional management: when we sleep, our brain reactivates the memory trace built during the day and helps us to regulate our emotions. “To find out which brain regions are activated during sleep, and to decipher how these regions allow us to consolidate our memory, we developed a decoder capable of deciphering the activity of the brain in deep sleep and what it corresponds to,” explains Virginie Sterpenich, a researcher in the laboratory of Professor Sophie Schwartz in the Department of Basic Neurosciences at UNIGE Faculty of Medicine, and the principal investigator of this study. “In particular, we wanted to see to what extent positive emotions play a role in this process.”

During deep sleep, the hippocampus — a structure of the temporal lobe which stores temporary traces of recent events — sends back to the cerebral cortex the information it has stored during the day. A dialogue is established that allows the consolidation of memory by replaying the events of the day, thereby reinforcing the links between neurons.

To conduct their experiment, the scientists placed volunteers in an MRI in the early evening and had them play two video games — a face-recognition game similar to ‘Guess Who?’ and a 3D maze from which the exit must be found. These games were chosen because they activate very different brain regions and are therefore easier to distinguish in the MRI images. In addition, the games were rigged without the volunteers’ knowledge so that only one of the two games could be won (half of the volunteers won one and the other half won the second), so that the brain would associate the game won with a positive emotion.

The volunteers then slept in the MRI for one or two hours — the length of a sleep cycle — and their brain activity was recorded again. “We combined EEG, which measures sleep states, and functional MRI, which takes a picture of brain activity every two seconds, and then used a ‘neuronal decoder’ to determine whether the brain activity observed during the play period reappeared spontaneously during sleep,” Sophie Schwartz explains.
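The article does not detail the decoder itself, but the general approach it describes (training a classifier on brain-activity patterns recorded while awake, then applying it to scans recorded during sleep) can be sketched with a simple nearest-centroid classifier on simulated data. All patterns, dimensions, and noise levels below are hypothetical, chosen only to illustrate the logic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100  # hypothetical number of fMRI features per scan

# Hypothetical mean activity patterns for the two task states.
face_pattern = rng.normal(0, 1, n_voxels)
maze_pattern = rng.normal(0, 1, n_voxels)

# "Waking" training scans: noisy samples of each pattern (50 scans each).
train = {
    "face": face_pattern + rng.normal(0, 0.5, (50, n_voxels)),
    "maze": maze_pattern + rng.normal(0, 0.5, (50, n_voxels)),
}

# Nearest-centroid decoder: label each scan by its closest class mean.
centroids = {label: scans.mean(axis=0) for label, scans in train.items()}

def decode(scan):
    return min(centroids, key=lambda lbl: np.linalg.norm(scan - centroids[lbl]))

# Apply the decoder to "sleep" scans: here, noisier spontaneous
# reactivations of the face-game pattern.
sleep_scans = face_pattern + rng.normal(0, 1.0, (20, n_voxels))
labels = [decode(s) for s in sleep_scans]
print(f"{labels.count('face')} of {len(labels)} sleep scans decoded as 'face'")
```

The real study used a more sophisticated classifier over five brain states and aligned its output with EEG-scored sleep stages, but the core move is the same: the decoder is never trained on sleep data, so any structure it finds there reflects reactivation of waking patterns.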

a In the face game (top), participants had to discover a target face based on a series of clues. For illustrative purposes, all faces are visible here, while in the actual game they were hidden, except for a small disk corresponding to the “torch”, which participants moved using a trackball. In the maze game (bottom), participants had to find the exit of the maze, aided by arrows. b During the game session, participants played blocks of the face (in red) and maze (in blue) games, separated by blocks of rest (in black). During the last block, each participant won one of the games (here the face game is the rewarded game). After a 15-min break, participants underwent the sleep session. EEG-fMRI was recorded during both sessions. Each game recruited specific brain regions (blue frame: maze vs. face game; red frame: face vs. maze game) and the classifier was trained on these data to differentiate between different brain states (5 states in total, see Supplementary Information). Sleep EEG was scored and the classifier was applied to the fMRI from the sleep session to determine the likelihood for each brain state to occur at each fMRI scan during different sleep stages (example for the Reward and No-Reward states shown on the bottom right). N3 indicates N3 sleep stage.

By comparing MRI scans of the waking and sleeping phases, the scientists observed that during deep sleep, the brain activation patterns were very similar to those recorded during the gaming phase. “And, very clearly, the brain relived the game won and not the game lost by reactivating the regions used during wakefulness. As soon as you go to sleep, the brain activity changes. Gradually, our volunteers started to ‘think’ about both games again, and then almost exclusively about the game they won when they went into deep sleep,” says Virginie Sterpenich.

Two days later, the volunteers performed a memory test: recognising all the faces in the game, on the one hand, and finding the starting point of the maze, on the other. Here again, the more the game-related brain regions were activated during sleep, the better the memory performance. Thus, memory associated with reward is stronger when it is spontaneously reactivated during sleep. With this work, the Geneva team opens a new perspective in the study of the sleeping brain and the incredible work it does every night.

Molecular motor protein KIF5C mediates structural plasticity and long-term memory by constraining local translation

by Supriya Swarnkar, Yosef Avchalumov, Isabel Espadas, Eddie Grinman, Xin-an Liu, Bindu L. Raveendra, Aya Zucca, Sonia Mediouni, Abhishek Sadhu, Susana Valente, Damon Page, Kyle Miller, Sathyanarayanan V. Puthanveettil in Cell Reports

The brain is wired for learning. With each experience, our neurons branch out to make new connections, laying down the circuitry of our long-term memories. Scientists call this trait plasticity, referring to an ability to adapt and change with experience.

For plasticity to happen, our neurons’ synapses, or connection points, must constantly remodel and adapt, too. The mechanics underlying neurons’ synaptic plasticity have become clearer, thanks to new research from the lab of Scripps Research neuroscientist Sathya Puthanveettil, PhD.

Scientists have learned that synaptic plasticity requires a complex relay from the neuron’s cell body to its dendrite arms and its synapse junctions. Like a 24-hour port and highway network, an internal transportation system of microtubule roads and robot-like couriers shuttle the cell’s vital cargo to its farthest reaches. The transported cargo allows ribosome organelles to assemble, read various RNA instructions, and build new proteins as needed in the dendrites.

In a study, Puthanveettil’s team reports that among the transport network’s courier molecules are two members of the kinesin family, KIF5C and KIF3A. If KIF5C is knocked out, the team found, the neurons’ ability to branch out dendrites and form input-receiving spines suffers. A gain of function in KIF5C improves these traits.

The study’s first author, Supriya Swarnkar, PhD, a research associate in the Puthanveettil lab, says discerning the details of these processes points to possible causes of neurological disorders, and offers new directions for treatment. Kifs play an important role, she says.

“The ability to form memories depends on the proper functioning of the neuron’s long-distance transport system from cell body to synapse,” Swarnkar says. “And many studies have reported links between mutations in Kifs and neurological disorders, including intellectual disability, autism and ALS.”

Structurally, many of the kinesin family proteins resemble a walking robot, like something out of science fiction. They have a platform for carrying cargo and two leg-like appendages that move back and forth, in a forward walking motion, along microtubules. In fact, they are referred to as molecular machines. These remarkable walking robots move along with their cargo on their backs until they reach their synapse destination and deposit their packages.

There are 46 different kinds of these molecular machines, specialized to carry different types of cargo, Puthanveettil says. Scientists are beginning to learn which Kifs carry which cargo.

Puthanveettil’s team anticipated that KIF5C’s cargo might include various RNAs. Cousins of DNA, which encodes genes and resides in the nucleus, RNAs are transcribed from DNA, carry its genetic instructions out to the cell’s cytoplasm, build the proteins encoded by the genes, and help regulate cell activities. Each different RNA has a different job.

By isolating complexes of KIF5C and their cargo, and then sequencing the RNA, they documented around 650 different RNAs that rely upon the KIF5C courier.

Significantly, this included an RNA that provides the code to initiate protein building, called EIF3G. If it doesn’t show up when and where needed, compounds required for synapse plasticity aren’t made. The ability to remodel the synapse with experience and to learn is impaired, Puthanveettil says.

To better understand the role of the Kifs in long-term memory storage and recall, the team carried out both loss- and gain-of-function studies both in cells and in mice, focusing on the dorsal hippocampal CA1 neurons that are involved in multiple forms of learning.

The mouse studies showed that loss of KIF5C diminishes spatial and fear-associated memory. If KIF5C is boosted in the dorsal hippocampus, on the other hand, memory is enhanced and amplified. The cells showed enhancement of synaptic transmission, arborization of dendrite arms, the neurons’ arm-like extensions, and eruption of signal-receiving mushroom spines. Mushroom spine density is correlated with memory and synaptic plasticity.

Taken together, the research offers new ideas for addressing a wide variety of neuropsychiatric disorders. Intellectual disability, depression, epilepsy, Alzheimer’s disease — anything that could benefit from greater or lesser expression of key proteins in neurons’ dendrites might respond to boosting or diminishing these molecular couriers, Puthanveettil says.

Modularity and robustness of frontal cortical networks

by Guang Chen, Byungwoo Kang, Jack Lindsey, Shaul Druckmann, Nuo Li in Cell

Recall a phone number or directions just recited and your brain will be actively communicating across many regions. It is thought that working memory relies on interactions between these regions, but how these brain areas interact and properly represent memory has remained a mystery.

At Baylor College of Medicine, Dr. Nuo Li, assistant professor of neuroscience and a McNair Scholar, and his colleagues investigated the nature of the communication between brain regions involved in working memory and found evidence that a modular network organization is critical for persistent neural activity.

Li and his colleagues were able to see that each hemisphere of the brain has a separate representation of a memory. However, the hemispheres are tightly coordinated on a moment-to-moment basis, resulting in highly coherent information across them during working memory.

In their study, the researchers engaged mice in a simple behavior that would require them to store specific information. They were trained to delay an instructed action for a few seconds. This time delay gave researchers the chance to look at brain activity during the memory process.

“We saw many neurons simultaneously firing from both hemispheres of the cortex in a coordinated fashion. If activity went up in one region, the other region followed closely. We hypothesized that the interaction between the brain hemispheres is what was responsible for this memory,” Li said.

Li and his colleagues recorded activity in each hemisphere, showing that each one made its own copy of information during the memory process. So how are the two hemispheres communicating?

Li explained that through the use of optogenetics they were able to corrupt information in a single hemisphere, affecting thousands of neurons during the memory period. What they found was unexpected.

“When we disrupted one hemisphere, the other area turned off communication, basically preventing the corruption from spreading and affecting activity in other regions,” Li said. “This is similar to modern networks such as electricity grids. They are connected to allow for the flow of electricity but also monitor for faults, shutting down connections when necessary so the entire electrical grid doesn’t fail.”

In collaboration with Dr. Shaul Druckmann and Ph.D. student Byungwoo Kang at Stanford University, the researchers developed theoretical analyses and network simulations of this process, showing that this modular organization in the brain is critical for the robustness of persistent neural activity. This robustness could be responsible for the brain being able to withstand certain injuries, protecting cognitive function from distractions.

“Understanding redundant modular organization of the brain will be important for designing neural modulation and repair strategies that are compatible with the brain’s natural processing of information,” Li said.

Static Magnetic Fields Dampen Focused Ultrasound–mediated Blood-Brain Barrier Opening

by Yaoheng Yang, Christopher Pham Pacia, Dezhuang Ye, Yimei Yue, Chih-Yen Chien, Hong Chen in Radiology

MRI-guided focused ultrasound combined with microbubbles can open the blood-brain barrier (BBB) and allow therapeutic drugs to reach the diseased brain location under the guidance of MRI. It is a promising technique that has been shown safe in patients with various brain diseases, such as Alzheimer’s disease, Parkinson’s disease, ALS, and glioblastoma. While MRI has been commonly used for treatment guidance and assessment in preclinical research and clinical studies, until now, researchers did not know the impact of the static magnetic field generated by the MRI scanner on the BBB opening size and drug delivery efficiency.

In new research, Hong Chen and her lab at Washington University in St. Louis have found for the first time that the magnetic field of the MRI scanner decreased the BBB opening volume by 3.3-fold to 11.7-fold, depending on the strength of the magnetic field, in a mouse model.

Chen, associate professor of biomedical engineering in the McKelvey School of Engineering and of radiation oncology in the School of Medicine, and her lab conducted the study on 30 mice divided into four groups. After the mice received the injection of the microbubbles, three groups received focused-ultrasound sonication at different strengths of the magnetic field: 1.5 T (teslas), 3 T and 4.7 T, while one group never entered the magnetic field.

They found that the activity of the microbubble cavitation, or the expansion, contraction and collapse of the microbubbles, decreased by 2.1 decibels at 1.5 T; 2.9 decibels at 3 T; and 3 decibels at 4.7 T, compared with those that had received the dose outside of the magnetic field. In addition, the magnetic field decreased the BBB opening volume by 3.3-fold at 1.5 T; 4.4-fold at 3 T; and 11.7-fold at 4.7 T. None of the mice showed any tissue damage from the procedure.
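Since decibels are logarithmic, the reported drops can be converted to linear reduction factors for easier comparison with the fold-changes in BBB opening volume. Whether the cavitation signal is best treated as a power-like or an amplitude-like quantity is an assumption here, so this quick sketch shows both conventions:

```python
# Reported reductions in microbubble cavitation signal at each field strength.
db_drops = {"1.5 T": 2.1, "3 T": 2.9, "4.7 T": 3.0}

for field, db in db_drops.items():
    power_ratio = 10 ** (db / 10)      # power-like quantity: 10^(dB/10)
    amplitude_ratio = 10 ** (db / 20)  # amplitude-like quantity: 10^(dB/20)
    print(f"{field}: reduced {power_ratio:.2f}x (power convention) "
          f"or {amplitude_ratio:.2f}x (amplitude convention)")
```

Under the power convention, the 3-decibel drop at 4.7 T corresponds to roughly a halving of the cavitation signal, which helps put the accompanying 11.7-fold drop in BBB opening volume in context.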

Following focused-ultrasound sonication, the team injected a model drug, Evans blue, to test whether the static magnetic field affects trans-BBB drug delivery efficiency. The images showed that the fluorescence intensity of the Evans blue was lower in mice that received the treatment in any of the three magnetic field strengths compared with mice treated outside the magnetic field. The Evans blue trans-BBB delivery was decreased by 1.4-fold at 1.5 T, 1.6-fold at 3.0 T and 1.9-fold at 4.7 T compared with delivery in those treated outside of the magnetic field.

“The dampening effect of the magnetic field on the microbubble is likely caused by the loss of bubble kinetic energy due to the Lorentz force acting on the moving charged lipid molecules on the microbubble shell and dipolar water molecules surrounding the microbubbles,” said Yaoheng (Mack) Yang, a doctoral student in Chen’s lab and the lead author of the study.

“Findings from this study suggest that the impact of the magnetic field needs to be considered in the clinical applications of focused ultrasound in brain drug delivery,” Chen said.

In addition to brain drug delivery, cavitation is also the fundamental physical mechanism for several other therapeutic techniques, such as histotripsy, the use of cavitation to mechanically destroy regions of tissue, and sonothrombolysis, a therapy used after acute ischemic stroke. The dampening effect induced by the magnetic field on cavitation is expected to affect the treatment outcomes of other cavitation-mediated techniques when MRI-guided focused-ultrasound systems are used.

Central amygdala micro-circuits mediate fear extinction

by Nigel Whittle, Jonathan Fadok, Kathryn P. MacPherson, Robin Nguyen, Paolo Botta, Steffen B. E. Wolff, Christian Müller, Cyril Herry, Philip Tovote, Andrew Holmes, Nicolas Singewald, Andreas Lüthi, Stéphane Ciocchi in Nature Communications

Fear is an important reaction that warns and protects us from danger. But when fear responses get out of control, they can lead to persistent fears and anxiety disorders. In Europe, about 15 percent of the population is affected by anxiety disorders. Existing therapies remain largely nonspecific or are not broadly effective, because a detailed neurobiological understanding of these disorders is lacking.

What was known so far is that distinct nerve cells interact together to regulate fear responses by promoting or suppressing them. Different circuits of nerve cells are involved in this process. A kind of “tug-of-war” takes place, with one brain circuit “winning” and overriding the other, depending on the context. If this system is disturbed, for example, if fear reactions are no longer suppressed, this can lead to anxiety disorders.

Recent studies have shown that certain groups of neurons in the amygdala are crucial for the regulation of fear responses. The amygdala is a small almond-shaped brain structure in the center of the brain that receives information about fearful stimuli and transmits it to other brain regions to generate fear responses. This causes the body to release stress hormones, change heart rate or trigger fight, flight or freezing responses.

Now, a group led by Professors Stéphane Ciocchi of the University of Bern and Andreas Lüthi of the Friedrich Miescher Institute in Basel has discovered that the amygdala plays a much more active role in these processes than previously thought: Not only is the central amygdala a “hub” to generate fear responses, but it contains neuronal microcircuits that regulate the suppression of fear responses. In animal models, it has been shown that inhibition of these microcircuits leads to long-lasting fear behaviour. However, when they are activated, behaviour returns to normal despite previous fear responses. This shows that neurons in the central amygdala are highly adaptive and essential for suppressing fear.

The researchers led by Stéphane Ciocchi and Andreas Lüthi studied the activity of neurons of the central amygdala in mice during the suppression of fear responses. They were able to identify different cell types that influence the animals’ behaviour. For their study, the researchers used several methods, including a technique called optogenetics, which allowed them to use pulses of light to precisely shut down the activity of an identified neuronal population within the central amygdala that produces a specific enzyme. This impaired the suppression of fear responses, whereupon the animals became excessively fearful. “We were surprised by how strongly our targeted intervention in specific cell types of the central amygdala affected fear responses,” says Ciocchi, Assistant Professor at the Institute of Physiology, University of Bern. “The optogenetic silencing of these specific neurons completely abolished the suppression of fear and provoked a state of pathological fear.”

In humans, dysfunction of this system, including deficient plasticity in the nerve cells of the central amygdala described here, could contribute to the impaired suppression of fear memories reported in patients with anxiety and trauma-related disorders. A better understanding of these processes will help develop more specific therapies for these disorders. “However, further studies are necessary to investigate whether discoveries obtained in simple animal models can be extrapolated to human anxiety disorders,” Ciocchi adds.

This study was carried out in partnership with the University of Bern, the Friedrich Miescher Institute and international collaborators. It was funded by the University of Bern, the Swiss National Science Foundation and the European Research Council (ERC).

Neuronal diversity is a hallmark of cortical networks. In the hippocampus, distinct neuronal cell types interact through selective synaptic contacts and neural activity patterns. We investigate how different forms of emotional and cognitive behaviour emerge within the intricate neuronal circuits of the ventral CA1 hippocampus, a brain region instrumental for context-specific emotional memories, anxiety and goal-directed actions. We hypothesize that distinct behavioural programs are implemented by the selective recruitment of micro- and large-scale neural circuits of the ventral CA1 hippocampus. To identify these circuit motifs, we are combining single-unit recordings of ventral CA1 GABAergic interneurons and projection neurons, selective optogenetic strategies, cell-type-specific viral tracing and behavioural paradigms in rodents. The results of these experimental approaches will determine fundamental neural computations underlying learning and memory within higher cortical brain regions.

a Behavioural protocol. FC: fear conditioning. CS: conditioned stimuli.

b Behavioural data. B6 mice: n = 27; freezing, habituation, no CS: 19.8 ± 2.6%, CS: 26.1 ± 3.3%, beginning of extinction 1, no CS: 18.5 ± 2.3%, CS: 61.9 ± 4.6%, end of extinction 2, no CS: 25.3 ± 3.1%, CS: 34.1 ± 3.4%, blocks (averages) of 4 CSs. One-way repeated-measures ANOVA F(5,130) = 30.8, p < 0.001, followed by post hoc Bonferroni t-test vs. CS group during habituation, p < 0.001. Bar plots are expressed as means ± SEM. Circles are freezing values of individual mice.

c Raster plots and corresponding spike waveforms of a representative CEm unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEm neurons: n = 15 units from 5 mice; z-score, habituation: −0.11 ± 0.45, beginning of extinction 1: 4.21 ± 1.75, end of extinction 2: 1.24 ± 0.48, blocks of 4 CSs. One-way repeated-measures ANOVA F(2,28) = 3.9, p = 0.033, followed by post hoc Bonferroni t-test vs. habituation, p = 0.023.

d Raster plots and corresponding spike waveforms of a representative CEloff unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEloff neurons: n = 33 units from 18 mice; z-score, habituation: 0.28 ± 0.33, beginning of extinction 1: −1.53 ± 0.28, end of extinction 2: −0.46 ± 0.34, blocks of 4 CSs. One-way repeated-measures ANOVA F(2,64) = 8.4, p < 0.001, followed by post hoc Bonferroni t-test vs. habituation, p < 0.001.

e Raster plots and corresponding spike waveforms of a representative CElon unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CElon neurons: n = 55 units from 15 mice; z-score, habituation: 1.30 ± 0.30, beginning of extinction 1: 2.54 ± 0.43, end of extinction 2: 1.40 ± 0.30, blocks of 4 CSs. One-way repeated-measures ANOVA F(2,108) = 5.3, p = 0.006, followed by post hoc Bonferroni t-test vs. habituation, p = 0.008.

All individual neurons of each CEA population had significant z-score values upon CS presentation (first 4 CSs during extinction 1). Source data are provided as a Source data file.
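The z-scores quoted in the legend express each unit’s CS-evoked firing relative to its own baseline during habituation. A minimal sketch of that standard normalization (the firing rates below are hypothetical, not taken from the study):

```python
import statistics

def z_score(cs_rate, baseline_rates):
    """Standardize a CS-evoked firing rate against baseline firing rates:
    (rate - baseline mean) / baseline standard deviation."""
    mu = statistics.mean(baseline_rates)
    sigma = statistics.stdev(baseline_rates)
    return (cs_rate - mu) / sigma

baseline = [4.8, 5.1, 5.0, 4.9, 5.2]   # hypothetical baseline rates (Hz)
print(z_score(7.5, baseline))           # positive: the unit is excited by the CS
```

Under this convention a CS-excited population such as CElon carries positive z-scores, while a CS-inhibited population such as CEloff carries negative ones.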

Two-dimensional parsing of the acoustic stream explains the Iambic–Trochaic Law

by Michael Wagner in Psychological Review

Scientists have long known that while listening to a sequence of sounds, people often perceive a rhythm, even when the sounds are identical and equally spaced. One regularity that was discovered over 100 years ago is the Iambic-Trochaic Law: when every other sound is loud, we tend to hear groups of two sounds with an initial beat. When every other sound is long, we hear groups of two sounds with a final beat. But why does our rhythm perception work this way? In a recent study in Psychological Review, McGill University Professor Michael Wagner shows that the rhythm we perceive is a result of the way listeners make two separate types of decisions, one about grouping (which syllables or tones group together) and the other about prominence (which syllables or tones seem foregrounded or backgrounded). These decisions about grouping and prominence mutually inform each other.

The findings may deepen our understanding of speech and language processing, with potential implications in a wide range of areas, including teaching, speech therapy, improving synthesized speech, and improving speech recognition systems.

Researchers found that these rhythmic perceptions are not really about iambs or trochees. For a given stimulus, we make two separate decisions: grouping, or how we parse the signal into smaller chunks, and prominence, or which sounds are foregrounded or backgrounded. Together, these decisions produce our rhythmic intuitions. The two decisions are mutually informative, just as our visual system makes mutually informative decisions about the size and distance of an object: if we think an object is close by, we infer that it is smaller than if we think it is far away. This can lead to comical ‘forced perspective’ effects, as in photos of the Eiffel Tower in which a girl apparently touching its peak makes the tower appear small and close by, even though we know it is big and only looks small because it is far away.

The results of the study suggest that it is these kinds of inferences that are the reason why, when listening to a series of syllables like …bagabagaba…, we spontaneously perceive it as repetitions of either the word ‘baga’ or ‘gaba.’ The words simply seem to pop out even though acoustically, it is just an unstructured sequence of sounds. In the case of tone sequences, where we can’t recognize individual words, we simply perceive these effects as a regular iambic or trochaic rhythm.
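As an illustration only (not the model developed in the paper), the two classic cues of the Iambic-Trochaic Law can be sketched as a toy grouping rule: when loudness alternates, the louder tone starts each pair (trochaic); when duration alternates, the longer tone ends each pair (iambic).

```python
def group_pairs(tones):
    """Toy Iambic-Trochaic grouping. tones: list of (loudness_db, duration_ms).
    Returns (rhythm, pairs) where rhythm is 'trochaic' or 'iambic'."""
    loud = [t[0] for t in tones]
    dur = [t[1] for t in tones]

    def alternation(xs):
        # How strongly the cue differs between even and odd positions.
        even = sum(xs[::2]) / len(xs[::2])
        odd = sum(xs[1::2]) / len(xs[1::2])
        return abs(even - odd)

    if alternation(loud) >= alternation(dur):
        # Loudness cue: the louder tone starts each group (beat-initial).
        start = 0 if sum(loud[::2]) >= sum(loud[1::2]) else 1
        rhythm = "trochaic"
    else:
        # Duration cue: the longer tone ends each group (beat-final).
        start = 1 if sum(dur[::2]) >= sum(dur[1::2]) else 0
        rhythm = "iambic"
    pairs = [tuple(tones[i:i + 2]) for i in range(start, len(tones) - 1, 2)]
    return rhythm, pairs

# Alternating loudness -> groups of two with an initial beat:
print(group_pairs([(70, 100), (60, 100), (70, 100), (60, 100)])[0])  # trochaic
# Alternating duration -> groups of two with a final beat:
print(group_pairs([(65, 100), (65, 200), (65, 100), (65, 200)])[0])  # iambic
```

Wagner’s account goes further than this single decision: grouping and prominence are inferred jointly, each constraining the other, which the toy rule above deliberately leaves out.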

Spend time outdoors for your brain — an in-depth longitudinal MRI study

by Simone Kühn, Anna Mascherek, Elisa Filevich, Nina Lisofsky, Maxi Becker, Oisin Butler, Martyna Lochstet, Johan Mårtensson, Elisabeth Wenger, Ulman Lindenberger, Jürgen Gallinat in The World Journal of Biological Psychiatry

If you’re regularly out in the fresh air, you’re doing something good for both your brain and your well-being. This is the conclusion reached by researchers at the Max Planck Institute for Human Development and the Medical Center Hamburg-Eppendorf (UKE).

During the coronavirus pandemic, walks became a popular and regular pastime. A neuroscientific study suggests that this habit has a positive effect not only on our general well-being but also on our brain structure. It shows that the human brain benefits from even short stays outdoors. Until now, it was assumed that environments affect us only over longer periods of time.

The researchers regularly examined six healthy, middle-aged city dwellers over six months. In total, more than 280 brain scans were taken using magnetic resonance imaging (MRI). The study focused on self-reported behavior during the preceding 24 hours, and in particular on the hours participants spent outdoors before imaging. They were also asked about their fluid intake, consumption of caffeinated beverages, the amount of time spent outside, and physical activity, in order to see whether these factors altered the association between time spent outside and the brain. To capture seasonal differences, the duration of sunshine during the study period was also taken into account.

Brain scans show that the time spent outdoors by the participants was positively related to gray matter in the right dorsolateral-prefrontal cortex, which is the superior (dorsal) and lateral part of the frontal lobe in the cerebral cortex. This part of the cortex is involved in the planning and regulation of actions as well as what is referred to as cognitive control. In addition, many psychiatric disorders are known to be associated with a reduction in gray matter in the prefrontal area of the brain.

The results persisted even when the other factors that could also explain the relationship between time spent outdoors and brain structure were kept constant. The researchers performed statistical calculations in order to examine the influence of sunshine duration, number of hours of free time, physical activity, and fluid intake on the results. The calculations revealed that time spent outdoors had a positive effect on the brain regardless of the other influencing factors.
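“Keeping other factors constant” here means entering them as covariates in a regression, so the coefficient on time outdoors reflects its association after adjustment. A minimal sketch with simulated data (the numbers and effect sizes below are made up, not the study’s):

```python
# Sketch of covariate adjustment via multiple regression (ordinary least
# squares). All data are simulated; the true outdoors effect is set to 0.5.
import numpy as np

rng = np.random.default_rng(0)
n = 280                                   # roughly the number of scans
outdoors = rng.uniform(0, 6, n)           # hours outdoors before the scan
sunshine = rng.uniform(0, 12, n)          # hours of sunshine that day
activity = rng.uniform(0, 3, n)           # hours of physical activity
grey = (0.5 * outdoors + 0.2 * sunshine + 0.1 * activity
        + rng.normal(0, 0.3, n))          # simulated grey-matter measure

# Design matrix: intercept plus the predictor of interest and the covariates.
X = np.column_stack([np.ones(n), outdoors, sunshine, activity])
beta, *_ = np.linalg.lstsq(X, grey, rcond=None)
print(f"adjusted coefficient for hours outdoors: {beta[1]:.2f}")
```

Because sunshine and activity are in the design matrix, the fitted coefficient for outdoors recovers the simulated effect rather than absorbing the covariates’ contributions, which is the logic behind the study’s statistical control.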

“Our results show that our brain structure and mood improve when we spend time outdoors. This most likely also affects concentration, working memory, and the psyche as a whole. We are investigating this in an ongoing study. The subjects are asked to also solve cognitively challenging tasks and wear numerous sensors that measure the amount of light they are exposed to during the day, among other environmental indicators,” says Simone Kühn, head of the Lise Meitner Group for Environmental Neuroscience at the Max Planck Institute for Human Development and lead author of the study.

The results therefore support the previously assumed positive effects of walking on health and extend them with concrete positive effects on the brain. Because most psychiatric disorders are associated with deficits in the prefrontal cortex, this is of particular importance to the field of psychiatry.

“These findings provide neuroscientific support for the treatment of mental disorders. Doctors could prescribe a walk in the fresh air as part of the therapy — similar to what is customary for health cures,” says Anna Mascherek, post-doctoral fellow in the Department of Psychiatry and Psychotherapy of the Medical Center Hamburg-Eppendorf (UKE) and co-author of the study.

In the ongoing studies, the researchers also want to directly compare the effects of green environments vs urban spaces on the brain. In order to understand where exactly the study participants spend their time outdoors, the researchers plan to use GPS (Global Positioning System) data and include other factors that may play a role such as traffic noise and air pollution.

(A) Illustration of the data collected from a single subject, (B) cluster in the dorsolateral prefrontal cortex (DLPFC) showing a positive association between grey matter probability and self-reported hours spent outdoors, and (C) for illustrative purposes only, a line graph depicting the regression of the extracted grey matter values of each subject from DLPFC (right); the y-axis has a break, as indicated by the break symbol.


Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Nature Neuroscience

Science Daily

Technology Networks

Neuroscience News




