2018: a mildly muddled review of the year in neuroscience

Time to torch the textbooks

Mark Humphries
The Spike
12 min read · Dec 19, 2018

--

Welcome all to the third annual review of the year in neuroscience from The Spike. We’ve made it to the end of 2018. Who saw that coming?

Which means it’s time to take stock, and admire the great strides we’ve made in understanding the brain this year. Pfft. There, done that. Now for the actual review: this year, a sampling of three pieces of beautiful or provocative work that show us we understand less than we thought.

1/ Neurons “transmit” RNA

January started with a bang: Jason Shepherd and his team published a paper showing that messenger RNA (mRNA) is transmitted between neurons. I’ll say right upfront that molecular biology is not one of my strengths. But then neither is elephant juggling, and I gave that a damn good try until the hernia. And the findings in this paper are so outlandish that if you missed it then, you need to know about it now.

It all starts with the Arc gene, and the protein it encodes. We know Arc is needed for learning. Delete the Arc gene in mice and learning is screwed. The Arc gene shows up in those big fishing expeditions for genes related to disorders of learning and development. And for once we even know something about what the Arc protein does — it’s involved in building synapses, possibly by moving AMPA receptors into position.

So far, so genetics. The Arc gene is read out as a string of Arc mRNA, which in turn specifies the Arc protein, and the Arc protein alters synapses. A lot of mechanism, but no function. That mucking about with a gene that affects synapses in turn affects learning is a bit of a no-brainer. We already know a lot about when and why synapses between neurons get weaker or stronger; knowing this gene-to-protein pathway gives us a better understanding of how synapses get stronger or weaker, but doesn’t tell us a lot more about the why or when. As systems neuroscientists we only get out of bed for something that tells us about how neurons talk to each other.

It turns out neurons can talk to each other using Arc. I’m out of bed now.

The Shepherd lab’s paper in January showed that the Arc protein makes a virus-like shell, and inside this shell is wrapped Arc’s own mRNA — the mRNA that encodes the protein itself. This shell is then in turn shoved in the kind of bag — a vesicle — that neurons use to transmit stuff to each other at synapses. Except this type of vesicle isn’t released at a synapse, it’s simply released from wherever on the neuron’s skin it happened to form.

The big question the Shepherd lab then faced down was: if this bag containing a bit of Arc’s mRNA is released outside the neuron, does it get taken up by other neurons? And if it does get taken up, what does it do? They tackled this in a really elegant way. Take a bunch of neurons in a dish that have no Arc in them, as the gene has been knocked out. Then dump some of those Arc mRNA filled bags into that dish, special ones that have been altered to glow. And watch: do the glowing bags end up inside the neurons? Yes, they did.

And the clincher was when Shepherd’s team then looked inside these Arc-less neurons, now filled with glowing bags, and found huge quantities of the Arc protein. The mRNA had been taken up, and proteins manufactured from it.

The ramifications of this work are potentially huge. For one thing, we have this startling evidence of non-canonical transmission between neurons. But more important is what was transmitted: this is a neuron sending a recipe for how to construct a protein to another neuron. A protein that is heavily involved in controlling learning. We now have evidence that internal instructions to change one neuron’s synapses can be sent to other nearby neurons, and potentially change theirs too. Understanding how neurons learn just got a whole lot more complicated.

Oh, and the same thing happens in flies too.

2/ Who watches the watch-people?

Say I was the annoying kind of father who doled out sweets one at a time in a slow tedious game for my own amusement. I’ve got a bowl of sweets and grapes all mixed together, and a 5-year-old who would very much like a sweet, actually daddy. So the game goes as follows — I look in the bowl, and pick something out and show it to the irritated child. With my left hand I always take and show a grape, with my right hand a chocolate; I repeat that pick-and-show a few times to hammer home the message. Then I pick one item in each fist so they can’t see, and ask my 5-year-old to choose a hand to open. Which would they pick? Right hand, yes?

But if I then put both hands in the bowl without looking, which would they pick? If my 5-year-old understands how the world works, they now know that I was not able to see what I chose — so they should pick a hand at random. Or yell for mummy to get daddy to pack in this idiotic game.

To pick a hand, the 5-year-old needs some pretty advanced knowledge of what others know — a model of the world that includes inferring what others themselves know about the world.

Work by Johanna Eckert and colleagues this year showed chimps have exactly this knowledge — about humans.

Each chimp had to deal with two such annoying humans. Two buckets were in view, both mixes of carrots and peanuts; one heavy on the carrots, one heavy on the peanuts. The buckets were transparent, so the chimp could see which was carrot-heavy, and which peanut-heavy. The humans deliberately picked the rare options: Human 1 picked peanuts from the carrot-heavy bucket; Human 2 picked carrots from the peanut-heavy bucket. Each showed these picks to the chimp.

After showing this a few times, and presumably as the chimp grew tired of these annoying humans and their tedious game, then came the test. On some trials, the humans looked in the bucket as they were picking. On other trials, they could not see the bucket, and picked blindly. The chimp could see all this. On each trial it was offered both fists, and asked to choose one. Which did it choose?

The tension here is between the chimp’s knowledge of what is in the buckets, and the knowledge of what the humans know. The chimp knows which bucket is full of carrots (ugh) and which is full of peanuts (woohoo!). But it may also know that Human 1 keeps picking peanuts from the carrot bucket, and Human 2 irritatingly picks carrots from the peanut bucket. So if it knew this, then the chimp should choose the fist of Human 1 and collect its peanut.

Ah, but wait: if Humans 1 and 2 did not look into the bucket, then surely it was more likely that Human 1 now has a carrot and Human 2 now has a peanut, because that’s what their buckets were filled with. In which case, the chimp should choose the fist of Human 2, as they were more likely to have a peanut.
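The chimp’s dilemma can be written down as a toy probability model. This is purely an illustrative sketch with made-up numbers (the bucket fractions and the humans’ preference strength are my assumptions, not figures from the paper):

```python
# Toy model of the chimp's inference (illustrative numbers only).
# Bucket composition: fraction of peanuts in each human's bucket.
CARROT_HEAVY = 0.2   # Human 1 draws from this bucket (mostly carrots)
PEANUT_HEAVY = 0.8   # Human 2 draws from this bucket (mostly peanuts)

def p_peanut(bucket_peanut_fraction, looked, preference_for_rare=0.95):
    """Probability that this human's fist holds a peanut.

    If the human looked into the bucket, assume they deliberately
    pick their habitual (rare) item; if they picked blindly, the
    pick simply follows the bucket's base rate.
    """
    if looked:
        # Human 1's habit is peanuts (rare in its bucket);
        # Human 2's habit is carrots (rare in its bucket).
        rare_is_peanut = bucket_peanut_fraction < 0.5
        return preference_for_rare if rare_is_peanut else 1 - preference_for_rare
    return bucket_peanut_fraction

# Looking trials: Human 1 (carrot-heavy bucket, picks peanuts) is the better bet.
print(p_peanut(CARROT_HEAVY, looked=True), p_peanut(PEANUT_HEAVY, looked=True))
# Blind trials: base rates flip the better bet to Human 2.
print(p_peanut(CARROT_HEAVY, looked=False), p_peanut(PEANUT_HEAVY, looked=False))
```

With these numbers, the rational choice flips exactly as the chimps’ choices did: Human 1 when the humans looked, Human 2 when they picked blind.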

Amazingly, the chimps worked all this out. On trials where the humans looked into the buckets, the chimps chose the fist of Human 1 — the peanut picker — vastly more often than chance. On trials where the humans could not see into the buckets, and picked at random, the chimps chose the fist of Human 2 more often. The chimps’ choices reflected not only their knowledge of the world, but also what they inferred the humans knew about the world, and they used that inference to weight their decisions.

Face it, a chimp is a better statistician than you.

3/ Misplaced childhood

You remember when you were one, charging about the place with a very full nappy (diaper, if you insist), slipped, and landed so hard on your bum that the nappy lost containment and the resulting pooh tsunami covered Grandma? No? Well, that’s because of infant amnesia — we lay down no long-term memories of our early childhood.

Infant amnesia seems fairly self-evident: we cannot remember anything about our early childhood, so there must be no memory stored in our brain. Or is there? What if instead we simply can’t access the memories? Work from Paul Frankland’s lab, led by Axel Guskjolen, has now shown us that there may well be memories of your infancy still in there somewhere.

Mice can’t remember stuff from their early childhood either. Frankland’s lab showed this by testing a mouse’s memory for fear of a bad place — put a mouse in a special box, apply a mild electric shock, repeat a couple of times. Then next day put it back in that box, and the mouse freezes, remembering the box, what it means, and anticipating a shock.

Do this to adult mice — 60 days old — and that memory lasts for more than 90 days. They freeze just as much if put back into the box after 90 days as they do after one day — even if they’ve never seen the box in the intervening 89 days. They have a clear memory of the bad place that lasts for longer than they’d been alive when they first saw it. Pretty convincing long-term memory.

But do this to infant mice — 14 days old — and all memory of the bad place is gone by 15 days later. Put them back in the box after 30 days, say, and no freezing at all. Is the memory gone, or hidden?

We know that the hippocampus is strongly involved in these kinds of place memories. So it’s a great candidate for finding the memory of the bad place. Frankland’s lab took the elegant approach of injecting into the hippocampus a gene that tags neurons when they are active. The idea here was that neurons laying down the memory during training in the bad place will be most active, so will be the most strongly tagged. The crucial part is that the tagging causes the neurons to express a light-sensitive ion channel. Then if you were to later shine a laser into that same part of hippocampus, the laser will re-activate just those tagged neurons. In theory, re-activating whatever those neurons represent.

Frankland’s lab did just this in their 14 day old mice: tagged the active neurons while training them to learn about the bad place. When they put them back in the box after 15 days, they did not freeze, showing no memory, as expected. But then the laser in hippocampus was switched on: and the mice froze. Just as though re-activating the tagged neurons switched on a lost memory of the bad place.

As with all good scientists, the team did a lot of control experiments to make this convincing. They turned the laser on without tagging the neurons, and no freezing. They tagged the neurons, but only turned the laser on in other locations, like their cage, not the special box: and no freezing. This control is actually very important. Activating so many neurons in hippocampus at the same time could cause an epilepsy-like absence seizure, where the mice would freeze in place. But as the freezing was only in the bad place, and not wherever the laser was switched on, that’s pretty convincing evidence that freezing in the bad place wasn’t just a seizure.

Activating the tagged neurons still worked after 30 days between training and testing. It worked after 60 days. An infant mouse’s memory of what happened in the bad place could be turned back on at will. It was there, but they could not access it. Which opens up the slightly worrying idea that infant amnesia is not the erasure of memory, but the hiding of memory.

Plan S

If we’re going to discuss science in 2018, I suppose we have to mention Plan S. A bold plan to make published work from across the European Union freely and immediately available for all to read. And to get the plan launched in a mere couple of years. A worthy idea, but one that caused no end of arguing.

From one perspective, this is a long-needed action, whether you believe that science paid for by taxpayers should be available to those taxpayers, or that the vast profits of scientific publishing houses are obscene. From another perspective, this is a draconian laying down of the law, with a narrow, muddled view of what constitutes open-access (no pre-prints, no free access with a short delay), and little thought for institutions that depend on revenue from publishing journals for their existence (such as learned societies). Who is right?

Everyone, of course. We need a Plan S of some kind; the version we got was not thought through well enough before being announced. Prioritising paid-for open-access over all others risks giving more, not less, power to established publishers. And I saw strangely little discussion of the UK’s long experience with our own version of Plan S: we’ve had mandated gold open-access publication of work funded by our Research Councils since 2014, when a central fund was established to pay for it (and the Wellcome Trust made a similar mandate with their own money). Each university is given its share of this central fund each year, with a simple mission: pay for each paper funded by the Research Councils to be published “open to all”.

Result? With no caps on how much journals can charge to publish an open-to-all paper, this costs an absolute fortune. The funds held by each university are rapidly run down each financial year. To stem the flood, some universities put their own local rules in place about what kinds of paper qualify for funding (e.g. no funds for hybrid journals), so there are big inconsistencies between universities in how they implement this apparently simple policy. Worse, some universities simply run out of funding, and turn down the mandated requests to pay for papers. In short, an expensive mess.

I look forward to the Plan S architects answering the simple question: if implemented, where will people publish when the money runs out?

Hey, 2018 wasn’t all bad.

We had Peter Dayan and Demis Hassabis elected as Fellows of the Royal Society, in recognition of their ground-breaking work in artificial intelligence and neuroscience. DeepLabCut brought automated, general purpose movement tracking to the masses.

We got compelling evidence that the tiny groups of mid-brain neurons housing the brain’s key neuromodulators are home to extraordinary diversity, in a flurry of papers within what seemed a week of each other, on serotonin, on dopamine, and on noradrenaline.

Hugo Spiers and colleagues showed us that the difference in navigation ability between the men and women of a country correlates rather stunningly well with the level of gender inequality in that country: the more unequally the genders are treated, the bigger the gap in the ability to navigate. To the extent that countries with minimal gender inequality — your Norways and your Finlands — show no difference in navigation ability between men and women.

And it would be remiss of me not to mention that The Spike itself had an exciting year, as it morphed from one-man show to a platform for a rich variety of voices in systems neuroscience. Some highlights include:

  • Ashley Juavinett’s ongoing series of advice for choosing and getting a PhD in neuroscience (and watch out for the forthcoming book!)
  • Kelly Clancy’s gem on why simple explanations in biology are untrustworthy
  • And what accidentally turned into a three-part deep-dive into brains as computers: me on why the “brain as a computer” is a theory, not a metaphor; Blake Richards on why it’s not just a theory, but a logical inevitability that brains are computers; and Corey Maley on why analog computing may be a far better hardware metaphor for the brain.

Wait, what’s this? December brought us an issue of Nature with a really strange neuroscience paper. A paper about the role of hippocampus in memory. Unlike most strange papers, this one was strange for what was not in it. No flashy genetics; no tricksy optogenetics to make neurons do unworldly things; no DREADDs to control specific neurons with designer chemicals; no Neuropixels or calcium imaging to record hundreds or thousands of neurons; no unit recording of any kind, in fact. Just behaviour, chemical lesions for causality, and EEG/LFP to track sleep states. Like something from the late 80s. As were the statistical analyses (seriously Nature, bar charts with single-sided error bars in 2018?).

But it made an interesting case for scientific insight, and there it is, in Nature. Freak occurrence, or turning point to prizing scientific insight over flash? Onwards to 2019 to find out. See you there!

Want more? Follow us at The Spike

Twitter: @markdhumphries


Mark Humphries
The Spike

Theorist & neuroscientist. Writing at the intersection of neurons, data science, and AI. Author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”