Your heart skips a beat and your head jerks around.
What was that?
A twig snapping during a solitary forest hike puts us on high alert. Our brains shift into a mode of fear and preparation because we recognize that an unexpected sound can be followed by danger. The sound itself is not dangerous, of course, and a predator may not show itself for some time afterward, if at all, but our brains still associate the snapping twig with potential consequences.
We assume these reactions are natural and instinctive, but according to Dr. Marieke Gilmartin, they are actually learned behaviors, crucial to human adaptation to our world.
Gilmartin, assistant professor of biomedical sciences in Marquette University’s College of Health Sciences, received an $800,000 grant from the National Science Foundation to study how such associations are created and retained, specifically when a gap in time separates the cue from the consequence.
“In most responses of this kind, sensory cues and outcomes happen at the same time,” Gilmartin says. “We see a red-hot stove burner, we touch it and we immediately get a burn. That teaches us that stoves can be hot and potentially dangerous. We know a lot about how the brain puts these two things together. But we still don’t know how the brain puts together cues and outcomes that are separated by a length of time.”
Like Pavlov’s dogs
In her lab, Gilmartin uses a conditioning technique, similar to the one made famous by Ivan Pavlov and his dogs, to test her hypothesis that two key brain structures are involved in this learning process, in addition to the amygdala, the brain’s fear center. A specific tone is followed by a mild shock administered to rat subjects. The delay between tone and shock is only seconds, but it is long enough to engage a brain response that differs from the immediate form of conditioning. Eventually, the subjects associate the tone with the shock the same way we associate a snapping twig with danger.
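The trial structure described above can be sketched in a few lines of Python. The durations here are illustrative placeholders, not Gilmartin’s actual experimental parameters; the point is only that the tone, the gap and the shock occupy distinct, non-overlapping windows of time.

```python
# Illustrative sketch (not the lab's actual protocol): one trace-conditioning
# trial in which a silent gap separates cue offset from outcome onset.
# All durations are made-up placeholder values, in seconds.

def trace_trial(tone_s=10.0, gap_s=20.0, shock_s=1.0):
    """Return (event, start, end) tuples for one trial's timeline."""
    events = []
    t = 0.0
    for name, duration in [("tone", tone_s), ("gap", gap_s), ("shock", shock_s)]:
        events.append((name, t, t + duration))
        t += duration
    return events

for name, start, end in trace_trial():
    print(f"{name:>5}: {start:5.1f}s to {end:5.1f}s")
```

The key feature is the middle interval: by the time the shock arrives, the tone has long since ended, so the brain must bridge the gap to link the two.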
“We expect to see activity in the amygdala, since this is where the brain processes fear,” Gilmartin says. But in their experiments, her team found that the prefrontal cortex — the thinking brain — was activated. And with longer intervals between cue and outcome, so was the hippocampus, which helps process emotions and memories.
Activity in these additional brain structures suggests a fundamental process for learning to predict a future event from certain cues: a preparatory state in which the brain actively learns that a cue leads to an outcome. In this case, a tone leads to a shock.
Gilmartin says that in this temporal-gap conditioning, all three brain structures prove essential. Using optogenetics, a technique for manipulating neuronal activity, she can stop activity in one structure, or in the specific connections between structures, to test its involvement.
“If we shut down any one of the three structures — the amygdala, hippocampus or prefrontal cortex — during the cue and outcome phase, we observe that the behavior is not learned,” she says. “Only when all three are working together do we see memory retention.”
In addition to her NSF grant, Gilmartin received $225,000 in funding from the Whitehall Foundation and a smaller Marquette research grant. All three awards will help her lab focus on how the brain creates and retains memory, which could provide insight into the cognitive deficits observed in mental illnesses such as addiction, depression and post-traumatic stress disorder.
“We’ve observed that manipulating prefrontal activity during cue-outcome learning can change the fear memory,” says Gilmartin, providing an example. “We’d like to explore the idea that PTSD may tap into similar mechanisms, promoting an inappropriate, enhanced fear response. Additional funding will allow us to explore these additional questions and observations that arise from our core research.”
What is Optogenetics?
The short answer is: technology that gives scientists unprecedented control over the brain, allowing them to “turn on” or “turn off” neurons in real time using light. With roots in genetics, biology and engineering, it helps researchers study complex behaviors at the cellular level. Here’s how it works:
1. A light-sensitive gene is packaged inside a virus and delivered directly into the brain.
2. Once taken up by neurons, the gene produces a light-sensitive channel protein that resides in the cell membrane.
3. The channel stays closed unless a pulse of blue light is delivered to the brain tissue via a fiber-optic implant. Then the channel opens, causing the neuron to turn on. Other types of channels are available to turn off the neuron.
4. This mechanism works like a light switch, controlling neuronal activity with millisecond precision, the same timescale neurons use to communicate with each other.
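The light-switch idea in steps 3 and 4 can be caricatured in code. This toy sketch, with made-up pulse times, only illustrates the concept of millisecond on/off control; it does not model any real channel biophysics.

```python
# Toy sketch of the optogenetic "light switch": a channel is open only while
# a blue-light pulse is on, tracked at millisecond resolution.
# The pulse times below are invented for illustration.

def channel_open(pulses_ms, t_ms):
    """True if time t_ms (milliseconds) falls inside any light pulse."""
    return any(start <= t_ms < end for start, end in pulses_ms)

pulses = [(0, 5), (20, 25)]      # two 5 ms blue-light pulses
print(channel_open(pulses, 3))   # during the first pulse: channel open
print(channel_open(pulses, 10))  # between pulses: channel closed
```

Because the channel’s state tracks the light pulse directly, the experimenter decides, millisecond by millisecond, when the targeted neurons can fire.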