2022: A Review of the Year in Neuroscience

A paean to being boring

Mark Humphries
The Spike


Credit: Pixabay

Can we go back to boring now? No wars launched by a slightly unhinged despot, no global inflation, energy crisis, or ongoing pandemic. No morally-bankrupt World Cup for which the usual euphemism of “questionable human rights record” doesn’t quite do justice to the thousands of workers who died building the carbon-spewing stadia, transport, and facilities in a desert. And no UK led in succession by a self-absorbed, morally dubious sack of straw, a cosplay Margaret Thatcher with no impulse control, and the richest person in the entire House of Commons. (Although admittedly the last of these, Rishi Sunak, is reassuringly, almost cutely, boring: he volunteered — even as Prime Minister — to do the modelling spreadsheets for the recent budget; and used to dress up in Star Wars costumes for non-uniform day at school). Boring is so underrated.

Neuroscience could do with being a bit more boring. Slowing down a bit, giving us all a chance to catch up. A chance to take it all in. But the torrent shows no sign of abating. So here are some shiny pebbles I’ve plucked from the raging waters, to hold up to the light for you. That on closer inspection the shiny pebble might turn out to be a slime-covered, half-rotten bit of old wood is a risk I’m willing to take on your behalf.

So let’s plunge in. We’ve got technological advances, from a neuroimaging technique that can allegedly see the spikes coming out of neurons in the living human brain to tantalising steps towards neural activity recorded in a protein (yes, “in”). And we’ve got insights into how the brain copes when things are less than optimal: when you’re hungry; and when some of it is missing.

Seeing spikes in fMRI

fMRI has for decades been the best way of looking at the activity of the intact human brain. But its limitations are frustrating: it’s a proxy, reporting the flow of oxygen-rich blood that is presumably demanded by active neurons, not the neurons’ activity itself; it’s blurry, reporting on chunks of brain that hold up to 100,000 neurons in a standard scanner, and still thousands of neurons in the strongest scanners; and it’s slow, as the blood signal it measures changes over seconds, but neurons fire with millisecond timing, so their activity patterns are invisible. This year the teams of Jang-Yeon Park and Jeehyun Kwag reported in Science a type of fMRI signal that, if real, can overcome all of these issues: it directly reports the activity of neurons, at much finer scales of time and space.

They showed some compelling evidence. It’s a signal whose peaks aligned with the peaks of spiking activity in a mouse’s cortex after its whiskers were tweaked, and those peaks of activity were just 25 milliseconds after the tweaking. The signal and spiking also aligned well when directly driving neurons to spike using optogenetics. And the signal could resolve different peaks of spiking activity in thalamus and cortex separated by 15ms or so. Roughly, the signal they found changed on the order of 10ms, and could resolve down to 0.22mm. Still thousands of neurons, but it would seem churlish to complain.

Some caveats are worth noting though. For one, there is no firm idea yet of where this signal is coming from: of exactly how the changing voltage of hundreds or a few thousand neurons can affect the magnetic field strongly enough to be detected. It also needs an absurdly powerful scanner (9.4T). For another, it requires repeated trials to build up the signal, so it can’t be used to look at spontaneous activity, or ongoing behaviour. And as a consequence it requires that the subject in the scanner not move at all. Preferably anaesthetised. A dead fish would be fine, for example.

But if it all holds up then, wow, what science could be done.

A ticker-tape parade for neurons

Back in the realm of animal neuroscience, recording the activity from lots of individual neurons is old hat. Decades now, of electrodes and microscopes implanted in brains to capture all that juicy firing. No, the science-fiction scenario for us is a way to record the activity of neurons for weeks or months, across the entire brain, without having to stick anything in it. To track development, learning, ageing; to let the animals behave as they want, free from the constraints of dull lab tasks — watch these moving dots, push this button, pull this lever — and capture their brain’s activity while they do it. And one of the blue-sky ideas was the molecular ticker-tape.

The ticker-tape is a thought experiment of using some kind of growing molecule as a way of time-stamping events in a neuron, then sequencing that molecule at the end of the experiment to recover the order of events: most usually, some kind of DNA. Now a preprint from Adam Cohen and friends has shown us a proof-of-principle for this ticker-tape idea. They found a protein that can grow over time at a constant, slow rate inside mammalian cells, and happens to have pores in it that are the right shape to fit a fluorescent marker for neural activity. They showed that the protein not only grows in neurons, but that when the fluorescent markers were activated at different time points, the sequence of events could be read out simply by checking where along the protein the marker glowed. And they showed that the combination of growing protein and fluorescent marker could encode c-Fos expression, a gene quickly and transiently transcribed when a neuron has sustained activation. These are baby steps: the fluorescent markers were activated by dyes, not by neural activity; the time resolution was 2 hours; and c-Fos was explicitly activated by a drug in cultured neurons. Still, a first step!
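The logic of the readout is simple enough to sketch. A minimal toy version, assuming only what the paragraph above describes (constant growth rate, markers glowing at fixed positions along the protein) — the numbers and function names are mine, not the preprint’s:

```python
# Toy sketch of the ticker-tape readout: if the protein grows at a
# constant rate, a marker's position along it time-stamps the event.
GROWTH_RATE = 0.5   # arbitrary units of protein length per hour

def event_time(marker_position):
    """Convert where along the protein a marker glows into
    when the marked event happened."""
    return marker_position / GROWTH_RATE

# Markers found glowing at three positions along one protein:
times = [event_time(p) for p in (1.0, 2.5, 4.0)]
print(times)   # [2.0, 5.0, 8.0] -> events at 2, 5, and 8 hours
```

The 2-hour time resolution reported in the preprint sets how finely such positions can be distinguished in practice.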

And not without precedent: at the same time as molecular ticker-tapes for activity were being dreamt up, so was the equivalent molecular machinery for finding connections between neurons. A thought experiment of taking a unique barcode of DNA, injecting it into each neuron, then slicing up the brain and sequencing the DNA to find where each neuron sent its axons. Now, that’s reality: Tony Zador’s team developed barcoding RNA for neurons, and have since used it to trace the detailed connections of single neurons in visual cortex. So if barcoding for connections has gone from thought experiment to a fully-realised process in a decade or so, might we see the ticker-tape before the decade is out? Watch this space.

What happens when the brain is less than optimal?

I don’t know about you, but being hungry makes my brain a little fuzzy, hard of thinking — attempting to put socks in the dishwasher hard-of-thinking. This is perhaps not surprising. Your brain is less than 2 percent of your mass but burns about 20 percent of your energy every day. So if there was one organ that could do with conserving energy, it’s your brain. And for it to conserve energy, the best place to start would be to make its neurons less active, for it is in the sending and receiving of spikes that the brain burns through much of its energy. Nathalie Rochefort and her lab asked just that question: does a brain’s processing get altered by hunger?

Yes, was the answer. They took some hungry mice and some full-up, happy mice, and recorded how neurons in their visual cortex responded to boring pictures of straight lines, each picture with all its lines at one angle. Neurons in these bits of cortex tend to each send the most spikes to lines at a particular angle, like 30 degrees or 170 degrees; and the further the angle of the lines is from this preferred angle, the fewer spikes they send. In full-up, happy mice, this meant that a neuron wouldn’t respond at all to lines far from its preferred angle; but in hungry mice, it did. Which means that the information the neuron was sending about the world indeed got fuzzier.

How do fuzzier neurons save energy, you may wonder. What seemed to be happening is that neurons in the hungry mice reduced the flow of current in response to each input they received, so saving energy out in their big, metabolically expensive dendrites. But at the same time the neurons increased the resistance of their membrane, so increasing the change in voltage caused by the input currents. The result was a noisier voltage at the neuron’s body — and a noisier voltage means a higher chance of a spike being sent when it should not have been. Hence: neurons sending spikes to lines they should not have been responding to.
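The tradeoff above can be sketched in a few lines. This is my illustrative caricature, not the paper’s model: each synaptic input deflects the voltage by roughly current times resistance (Ohm’s law), so halving the total current while the higher resistance amplifies each remaining event keeps the mean drive the same but makes the fluctuations bigger — and bigger fluctuations cross the spike threshold by chance more often. All numbers are made up:

```python
# Illustrative sketch: same mean synaptic drive, but fewer, larger
# voltage deflections (high resistance) cause more chance threshold
# crossings than many small ones (low resistance).
import numpy as np

rng = np.random.default_rng(0)

def spurious_spikes(rate_hz, amp_mV, threshold_mV=10.0,
                    n_steps=100_000, dt=0.001, tau=0.02):
    """Leaky voltage driven by Poisson synaptic events; count
    threshold crossings that happen by chance."""
    v, crossings = 0.0, 0
    for _ in range(n_steps):
        v += rng.poisson(rate_hz * dt) * amp_mV  # each input adds ~I*R mV
        v -= (v / tau) * dt                      # leak back toward rest
        if v > threshold_mV:
            crossings += 1
            v = 0.0                              # reset after a spike
    return crossings

# Fed: many small inputs (normal current, low membrane resistance).
fed = spurious_spikes(rate_hz=800, amp_mV=0.5)
# Hungry: a quarter of the input events, but each deflects the voltage
# four times as far -- same mean drive, noisier voltage.
hungry = spurious_spikes(rate_hz=200, amp_mV=2.0)

print(fed, hungry)   # hungry crosses threshold far more often
```

Both conditions have the same average drive (rate × amplitude), so the extra spikes in the “hungry” condition come purely from the larger fluctuations.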

An interesting side-note: doesn’t this flatly contradict efficient coding theories?

These theories propose that brains minimise redundancy between neuron firing, in order to maximise information transmission for the energy available. In other words, have as few neurons as possible send the same message at once. This would seem to predict that as available energy reduces so neuron firing becomes more sparse. But the Rochefort lab paper shows that when energy reduces, neuron firing does not become more sparse, but more noisy, as their tuning becomes broader. Which implies that, for any given picture, more neurons are firing at the same time, not fewer!
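The broader-tuning-means-less-sparse argument above is easy to make concrete. A toy population of Gaussian-tuned neurons (my arbitrary numbers, not the paper’s): broaden the tuning curves and count how many neurons now respond to a single picture of lines:

```python
# Toy illustration: broader tuning means more neurons respond to any
# one stimulus, so the population activity is less sparse.
import numpy as np

angles = np.arange(0, 180, 1.0)   # one neuron per preferred angle
stimulus = 30.0                   # a picture of lines at 30 degrees

def n_active(tuning_width_deg, threshold=0.1):
    """Count neurons whose Gaussian tuning curve exceeds a response
    threshold for this stimulus (angles wrap around at 180 degrees)."""
    d = np.minimum(np.abs(angles - stimulus), 180 - np.abs(angles - stimulus))
    response = np.exp(-0.5 * (d / tuning_width_deg) ** 2)
    return int(np.sum(response > threshold))

sharp = n_active(tuning_width_deg=15)   # fed: narrow tuning
broad = n_active(tuning_width_deg=40)   # hungry: broadened tuning

print(sharp, broad)   # broad > sharp: more neurons firing at once
```

More neurons firing to the same picture is exactly the redundancy that efficient coding says should be minimised.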

Three possibilities: I’m wrong (seems likely); the brain is already at the limit of efficient coding, so any drop in available energy leads to a drop in coding efficiency (maybe); or efficient coding theories have (mostly) focussed on the wrong end of the neuron — of how to make their output more efficient, when it is the input that uses the most energy, and so it is the input which needs to be optimised. Over to you, theorists.

The really sub-optimal brain

A hungry brain might be suboptimal, but missing half of it is surely worse. As you may know, recognising written words is mostly handled by your left cortex; the right cortex tends to handle recognising faces. The current hypothesis is that face recognition is initially handled by both sides, but learning to read drives a competition for cortical real-estate, so word recognition ends up on the left (where language is), and faces predominantly on the right (where language isn’t). Which suggests this left/right split of words and faces is pre-ordained. To test this you’d want to ask what would happen if there was only one hemisphere to begin with: is the brain plastic enough to cope with that?

Marlene Behrmann and team were able to ask just this question, by studying a group of people who’d had one hemisphere removed in childhood. This may sound drastic, and it is — it’s a last-resort treatment for intractable, severe epilepsy, restoring some quality of life by cutting out all the tissue which generates the seizures. Behrmann and co could ask of them: how well could they recognise previously presented words or faces? If words and faces really were pre-ordained to be left and right hemisphere, then if one hemisphere was missing, presumably recognition would be terrible. But it wasn’t.

Sure, those missing half a cortex couldn’t quite match the accuracy of people with intact cortices, but they still got over 80% correct on average. And did so in both face and word recognition, irrespective of which hemisphere was missing. So there is no pre-ordained mapping of word or face recognition to the left and right hemispheres. And both face and word recognition can happen, and happen well, in one hemisphere alone. Remarkable thing, the brain.

Lots more to catch up on. We’ve had brain charts for the human lifespan, of how it grows and evolves and shrinks with age, a paper notable for the lead authors’ extraordinary effort in gathering the necessary data together. You may remember me banging on about “dark” neurons, the rather large set of neurons in the cortex that don’t seem to do anything. Well, it turns out that ketamine causes inactive neurons to become active, and vice versa. So the weird dissociation effects of Special K may simply be down to the dark neurons sparking to life, sending spikes about stuff that isn’t out there in the world. And it turns out the reason urbanite North Americans get lost in European cities is that they were raised in grid cities — so their brains can’t cope with the concept of streets that bend, or meet at any angle other than 90 degrees.

And finally. Researchers at Facebook revealed the best thing people in poverty could do to improve their lot: make better friends. I wish I was joking. Looking at 72.2 million Facebook users in the USA, they found the best predictor of increased income was how many better-off friends a person had. Which led them to a simple solution for solving poverty, of creating more opportunities for people on low incomes to make friends with people on higher incomes. So that’s world poverty solved. See you next year, when Facebook researchers will discover that the best predictor of people having enough food is their income, and propose that we give people enough money to buy stuff.

psst… my book all about how neurons passing spikes makes seeing, thinking, and doing happen is out in paperback on Jan 24th 2023.

Other than that, please, 2023: be more boring.

Twitter: @markdhumphries

Enjoyed this story? Then consider signing up to become a Medium member: $5 a month gives you unlimited access to all stories on Medium, and supports all their writers. If you sign up using my link, I’ll earn a small commission: https://drmdhumphries.medium.com/membership




Theorist & neuroscientist. Writing at the intersection of neurons, data science, and AI. Author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”