2019: a lightly bamboozled review of the year in neuroscience
I think we all need a little lie down
Wow, who’d have thought 2019 would be the year we cracked the brain? Feet up, job done everyone. No doubt you, like me, were stunned when it turned out the zona incerta was the key to unlocking everything. That innocent bundle of neurons, tucked up neatly betwixt the thalamus and the brain’s basement, revealed as the nexus through which all action and perception flow.
(But then we always suspected the cortex is a mere blanket for regulating the temperature of the crucial parts of the brain. And while some continue to protest that “the folds of human cortex are what makes us unique!”, they are denying the simple, near-universally accepted explanation — greater surface area improves heat dissipation.)
Welcome one, welcome all to the fourth annual review of the year in neuroscience from The Spike. The above is all fantasy, of course. (Except the bit about the cortex). But some key areas of neuroscience have seen an explosion of exciting and provocative work this year. So come with me as we take a tour of the three that caught my eye: the specialness of the human brain; the sci-fi world of filming voltage; and the mind of the worm laid bare.
1. “What’s so special about being human?”
What’s so special about the human brain? A question broad and deep, one that has obsessed thinkers since Antiquity. Our once god-like status, apart from and above all animals, now steadily eroded by science, from natural selection placing us as but one species evolved from and in parallel to many others, to genetics putting the boot in by showing that we share 90 percent of our DNA with cats. Including my cat Bob, and he constantly falls off the back of the sofa.
You’d have thought neuroscience would have been all over the question of what’s special about the human brain, what with it being the study of the brain and all. But what we could do has been very limited. Largely we have only been able to observe behaviour, stuff we can do that other species cannot: talking endlessly in complex grammars, voting for bell-ends, that sort of thing. And supplement these observations with crude measures of the brain’s structure — how many neurons it has, which bits are thicker or larger than others, which bits are folded — compare them to other species, and go “ooo look it’s different”.
But to understand why the human brain is “special” we need some kind of theory as to why any of those crude brain differences would make any contribution to that specialness. For a start, to know what’s different about the neurons themselves: what’s different about the types of neurons that exist, or the signals they send, or both. Which is the preserve of us “systems” neuroscientists. Yet systems neuroscience hasn’t had much to say about this question, because of the deep problems of measuring neurons in humans. Until, that is, this year.
A. Special codes
In a brave paper, Pryluk and colleagues attempted a direct comparison of how the code used by neurons differed between humans and monkeys. They took long recordings of single neurons from the amygdala and cingulate cortex of monkeys. And compared them to similar recordings from the same regions in humans. These human recordings are ultra-rare: they came from patients with epilepsy that was both so serious and so unresponsive to drugs that they were being prepped for surgery to remove the part of the brain causing the seizures — and to find that part, they had electrodes implanted for a week or more. And while these electrodes were in there, and while lying in their hospital bed, the patients graciously agreed to do a series of tasks for the experimenters.
With these precious data to hand, Pryluk and friends asked a straightforward question: how much information are these neurons sending? In practice, this was a tough question to ask, as there are all sorts of things to compensate for, like correcting for differing firing rates across neurons and between species. But if we believe their measurement of how much information a neuron sends, their end result is clear. Human neurons in both the amygdala and cingulate cortex send more information — in that they are closer to the maximum possible rate of information sending — and do so more efficiently: they send fewer spikes for the same amount of information. Which means? Who knows. But their results point to human cortex having an increased capacity, so that much more information can be represented across a population of neurons, but at the cost of less robust coding — if fewer spikes are used, the message being transmitted is more sensitive to failure and noise. And as we know, the human brain is very sensitive to failure.
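For intuition on what “closer to the maximum possible rate” means, here’s a minimal sketch in Python — emphatically not Pryluk’s actual method, which also corrects for firing-rate differences across neurons and species. It measures efficiency as the entropy of a neuron’s observed spike-count distribution relative to the maximum entropy achievable over the same set of counts:

```python
import numpy as np

def coding_efficiency(spike_counts):
    """Entropy of the observed spike-count distribution, divided by the
    maximum entropy achievable over the same set of observed counts.
    A score of 1.0 means every distinguishable message is used equally
    often -- no capacity wasted."""
    counts = np.bincount(np.asarray(spike_counts))
    p = counts[counts > 0] / counts.sum()
    h = -(p * np.log2(p)).sum()       # observed entropy, in bits
    h_max = np.log2(len(p))           # uniform distribution over same support
    return h / h_max if h_max > 0 else 0.0
```

A neuron that uses its observed spike counts equally often scores 1.0; one that sends the same count 90 percent of the time scores far lower, wasting most of its possible vocabulary.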
B. Special neurons?
While the Pryluk paper hinted at something special about how the human cortex encodes information, it didn’t tell us anything about whether this is because the types of neurons are special to humans. We get much of our deep understanding of types of neurons from mice, thanks to their being the workhorse of genetics (a good thing that the actual workhorse is not the workhorse of genetics, otherwise Janelia Farm would be, literally, a farm. And about 10000 square miles in size). Hence “what’s so special about the human brain?” translates in genetics to: how do we differ from mice?
A Nature paper from the Allen Brain Institute tackled this question head-on by directly comparing the gene expression between the cortex of the human and mouse. To do that, they first had to solve the small problem of accurately sequencing the RNA-expression of single neurons in the human cortex. Having cracked that, they then grouped all their neurons into types according to the similarity of their expressed genes. The result? 69 different types of neurons in the human cortex, of which 24 are excitatory (as in, they express glutamate) and 45 are inhibitory.
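The grouping step can be sketched with a toy: cluster cells by the similarity of their expression vectors. This greedy version is my invention for illustration — the actual study used far more careful hierarchical clustering on single-nucleus RNA-seq data — but it captures the basic logic:

```python
import numpy as np

def group_by_expression(expr, threshold=0.9):
    """Greedy grouping of cells (rows of expr) into putative types: each
    cell joins the first existing group whose founding cell it correlates
    with above `threshold`, else it founds a new group. A toy stand-in
    for the hierarchical clustering used in the real study."""
    seeds, labels = [], []
    for cell in expr:
        for i, seed in enumerate(seeds):
            if np.corrcoef(cell, seed)[0, 1] > threshold:
                labels.append(i)
                break
        else:
            seeds.append(cell)
            labels.append(len(seeds) - 1)
    return np.array(labels)
```

With a correlation threshold of 0.9, cells whose expression profiles are near-copies of each other end up in the same group; everything else founds a new type.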
So which of these 69 types of neuron are unique, are responsible for endowing us humans with our special brain thinking stuff? None. All 69 are also closely matched in the mouse cortex. The major difference is not the type of neuron, but where they are found. In mice, all 24 types of excitatory cell stick to one particular layer of the cortex. In humans, many of those same excitatory types appear in more than one layer of the cortex. And why would this make the human cortex special? Who knows.
Yet their results are strong confirmation of what we all know: evolution is a tinkerer. We may have diverged from rodents about 90 million years ago, but in something as complex as the mammalian brain, in which genes define merely an outline of the details of the adult brain, big changes will almost always be catastrophic. So the main difference between the neurons in the mouse and human cortex is not in the proliferation of brand new types of neurons, but in the repurposing of what is already to hand. Oh, and the fact that mice have about 10 million neurons in their cortex, and we have about 17 billion.
C. Dendrites predict intelligence?
We like to think humans are the most intelligent species on the planet, in the face of overwhelming evidence to the contrary. A provocative paper at the very end of 2018 asked what it is about our neurons that makes us intelligent. And the answer is: the more complex the dendrites of pyramidal neurons in our cortex, the higher our IQ. Wow!
Well, maybe. Such unusual claims deserve close scrutiny. After all, extraordinary claims demand extraordinary evidence. We have no theory which predicts that more complex dendrites have anything to do with intelligence, so we need some pretty compelling evidence to believe this is not just happenstance correlation. The researchers obtained another rare sample of human cortical neurons: in this case, from bits of the temporal lobe removed during brain surgery, placed on ice, then popped into the experimental set-up as soon as practically possible while the neurons still lived. They took a range of measurements from the neurons. Each patient took an IQ test. And the researchers correlated some of the measurements with the IQ scores. Why these measurements? No reasons given — so already alarm bells are sounding about what other measurements were correlated with IQ, found to be lacking, and omitted from the paper.
Is the evidence compelling? No. The key evidence is the correlation between the total length of the dendrites and the IQ of the patient. Namely, this figure:
Leaving aside the fact that this is the best correlation they have, and it is still weak (explaining 26% of the variance), take a closer look. Each data-point is a patient, so the value for the length of the dendrites is an average over the measured neurons in that patient. Now you don’t have to be a neuroanatomy geek to know that pyramidal neurons come in a bewildering variety of shapes and sizes, so averaging over them is a bit… Well, charitably we’d call it weird. More bluntly, meaningless. And I’ve just told you that human cortex contains about 24 types of excitatory neuron, and most of those are some kind of pyramidal neuron. This correlation contains just 72 pyramidal neurons in total. So it hideously undersamples the diversity of pyramidal neuron dendrites in human cortex.
Worse, the above figure and others in the paper are textbook examples of how not to compute a correlation. The correlations are computed using averages — without taking into account how wrong those averages might be. And they could be so wrong that the correlation disappears completely. Indeed, looking at the range of data variation (the error bars) in the above figure, I’d wager the correlation would indeed disappear if tested properly (more on this in the Appendix below).
Finally, a simple thought experiment. These neurons happen to come from the temporal lobe of the cortex, a region plausibly involved in some kind of “thinking” that might contribute to an IQ score. But that was just because these patients had epilepsy, and the temporal lobe usually contains the region that starts epileptic brain activity. But what if these samples had been from primary visual cortex (V1)? They’d find the same diversity of sizes of pyramidal neuron dendrites, because types of pyramidal neuron are largely consistent across the cortex. But if they’d reported a correlation between the size of dendrites in V1 and a person’s IQ score, who would have taken them seriously?
2. Voltage imaging explodes
People often say to me “Mark, you Nietzschean Übermensch, what’s the next big thing in systems neuroscience?” And I reply: “well, Mum, since you asked: it’s voltage imaging”.
To know the brain, we want to know what messages big groups of neurons are sending, and what they are receiving. Voltage imaging is the solution: the filming of neurons as they glow in proportion to their membrane voltage, a real-time readout of the detailed electrical activity of every neuron in a population. If we can get it to work in mammalian brains, it will be a mind-blowing tool for understanding the signals neurons use — not recording just every spike they send, but also the spikes they receive.
Indeed I’ve been banging on about voltage imaging being the way forward for understanding neurons for some time (like here in 2017). That’s in part because I’ve had the rare privilege of working with voltage imaging data from the gorgeous sea-slug Aplysia since 2011, thanks to Angela Bruno and Bill Frost. So this year was, for me, deeply exciting.
We had one, two, three major papers all announcing new types of voltage sensors that work beautifully in mammalian neurons — sensors that last for ages, have big signals, and can record detailed voltage traces from multiple neurons at once.
Why is this huge? I wrote a whole piece about why: but the key idea is simple. Voltage imaging combines the strengths of calcium imaging and recording with electrodes, while solving their problems. With calcium imaging we can see neurons, know neurons, and tag specific neuron types to record from, but calcium itself, the thing being measured, is a slow and indirect measure of spikes. With electrodes we get fast, direct measurements of spikes, but don’t know which neurons or exactly where or what types. Voltage imaging gets all of that: we can see the neurons, know the neurons, tag specific types of neurons, and still record spikes quickly and directly. And more: because we can see not just spikes, or things that are proxy for spikes, but also the voltage changes between spikes — the receiving of inputs!
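The speed difference is easy to see in simulation. A minimal sketch, with decay constants I’ve picked purely for effect (not any real indicator’s values): convolve the same spike train with a slow “calcium-like” kernel and a fast “voltage-like” kernel.

```python
import numpy as np

def indicator_trace(spike_times, tau, dt=0.001, duration=1.0):
    """Convolve a spike train with an exponential indicator kernel of
    decay constant tau (seconds). Illustrative taus: ~0.5 s for a slow
    calcium-like indicator, ~0.001 s for a fast voltage-like one --
    toy numbers, not any real dye's kinetics."""
    t = np.arange(0, duration, dt)
    trace = np.zeros_like(t)
    for s in spike_times:
        mask = t >= s
        trace[mask] += np.exp(-(t[mask] - s) / tau)
    return trace
```

With two spikes 50 ms apart, the slow trace never returns to baseline between them, so the individual spikes blur into one bump; the fast trace resolves both cleanly — the same reason calcium is an indirect, sluggish proxy for spikes while voltage is the real thing.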
Researchers of invertebrate nervous systems have been making major breakthroughs using these sensors — in dye form — for decades. So with voltage imaging about to become a working reality in mammals, the mind boggles about the major breakthroughs that beckon.
3. Mind of the worm, complete
For the last 30 years, waggish commentators on neuroscience have defaulted to the slogan “but we’ve known the complete wiring diagram of C Elegans brain since 1986, and we still don’t know how the mind of the worm works!”. Sometimes this is pointed at theorists’ lack of progress. Or at the pointlessness of doing neuroscience in advanced animals. Often it is pointed by some group of neuroscientists at another group of neuroscientists. Whomever it’s pointed at, we’re all meant to hang our heads in shame, and contemplate the meaninglessness of our research, nay the very purpose of studying the brain.
Well, we have news for these people. The wiring diagram was never complete. Not even close.
The White et al paper in 1986 was a staggering, decade-long effort to map the nervous system of the tiny nematode worm C Elegans by hand. But it was clearly incomplete. It mapped 279 of the 302 neurons in the hermaphrodite of the worm; and evidently missed an unknown number of connections even between them. Chklovskii’s lab updated the wiring diagram in 2011 with some of the missing connections.
This year, we finally got something close to complete: a detailed mapping of the wiring diagram in both sexes of C Elegans (with free pull-out and keep poster in Nature!). All 302 neurons in the hermaphrodite; and 385 neurons in the male. And the 6334 connections in the hermaphrodite, and 7070 in the male — including which motorneurons connected to which muscles. And a new type of information too, the strength of the connections, given by the size of the synapse of one neuron onto another. Phew. Epic effort by Cook et al in the Emmons lab.
So after a detente of a few years, can we now expect the wags to return with “but we’ve known the complete wiring diagram of C Elegans brain since 2019, and we still don’t know how the worm works!” and for us to take them seriously? Nah. For a start, this wiring diagram is still not technically complete. There are differences between the sexes in the same neurons, so the study possibly missed some connections. Some connections between neurons were not detected but assumed to be there due to “repetitive” wiring. Worse, the wiring diagram is still not a single animal, but a mosaic reconstruction from multiple animals. So individual variations due to the happenstance of development will have been mixed together.
Indeed, this gargantuan effort is also a case-study in arguing about the usefulness of connectomics. For what did we learn? Not a great deal to be honest. Just like the sequencing of the human genome, the value of this updated wiring diagram will be in how it’s used, not in its mere existence.
For starters, we can all look forward to a swathe of by-the-numbers network theory studies now, where this new wiring diagram is analysed to death for its modularity and topology and wiring efficiency and core-periphery and spatial embedding and all the other stuff. More interestingly, people will need to take another look at the dynamics the wiring diagram imposes. Most obviously, the high-profile work of Laszlo Barabasi’s team who used “network control” to predict which neurons are crucial to locomotion in C Elegans based solely on the wiring diagram, and then had those predictions confirmed by ablating those neurons. With a new, more complete wiring diagram now to hand, presumably someone is checking that the network control theory makes the same predictions about which neurons are crucial — for if it doesn’t, then the whole idea sinks.
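For a flavour of what “network control” means here, a hedged sketch: the textbook Kalman rank condition on a linearised network — a stand-in for the structural-controllability matching the actual work used — asking whether inputs injected at a chosen set of driver neurons can steer the whole circuit:

```python
import numpy as np

def controllable(adj, drivers):
    """Kalman rank test: can inputs injected at the `drivers` nodes steer
    the whole linearised network x' = adj @ x + B u?  adj[j, i] != 0 means
    node i connects to node j. A toy stand-in for the structural
    controllability used in the actual network-control work."""
    n = adj.shape[0]
    B = np.zeros((n, len(drivers)))
    B[np.asarray(drivers), np.arange(len(drivers))] = 1.0
    # controllability matrix [B, adj B, adj^2 B, ...]
    ctrb = np.hstack([np.linalg.matrix_power(adj, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(ctrb) == n
```

In a simple feedforward chain 0 → 1 → 2, driving the head of the chain controls everything downstream; driving the tail controls nothing upstream. Rerun such a test on a corrected wiring diagram and the set of “crucial” neurons may well change — which is exactly the check the new connectome demands.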
Indeed, having laid out their brand new wiring diagram, Cook et al seem to take a dim view of this line of work. In a not-so-subtle admonishment to 20 years’ worth of work, they conclude their paper with “modelling the functions of the nervous system at the abstracted level of the connectivity network cannot be seriously undertaken if a considerable number of nodes or edges (for example, edges that represent electrical couplings) are missing.” In a scientific paper, that’s about as harsh a burn as you’re ever going to see.
But, hey, we also got further evidence this year that angry spats about wiring are all a waste of time. We could instead just ignore the individual neurons. The pioneering work of Manuel Zimmer’s lab in 2015 showed we could record tens of neurons at the same time in the nervous system of a freely-crawling C Elegans, discard the neurons by projecting their activity down into a handful of dimensions, and then successfully map the worm’s different types of movement on to different regions of that low-dimensional space.
This year, Brennan and Proekt took the next step in understanding the mind of the worm: make a model of the low-dimensional dynamics of its brain. Sounds dull. But this model does two important things. For one thing, it solves the pesky variability between brains — even in C Elegans, there are big differences between animals in the patterns of activity of the same neuron during the exact same behaviour. So creating a model of the dynamics common to all the neurons means that it applies across all worms, even those not used to make the model. And the second thing: building a generative model means Brennan and Proekt could create new brain dynamics from a given starting point, then see when the model’s activity changes to a new part of the low-dimensional space, and so predict when the corresponding behaviour will happen. Even better: they can do prediction in worms not used to create the model.
Using the Zimmer lab’s data, Brennan and Proekt did just that. Indeed, even with just 15 neurons in common between recordings, they were successful: they could build a model that captured the joint activity and its changes in just two dimensions; generate from that model neural dynamics and corresponding behaviour that matched the distributions of forward, backward, and backing-up locomotion. And use that model to successfully predict when the worm would transition to moving forwards in the future, based on where the neural dynamics started off — and do so in an entirely new set of worms. To me, this paper is a glimpse of the future of neuroscience: a model of neural activity successfully predicting changes in behaviour in new animals. Is this what “understanding” means to you?
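The skeleton of that recipe fits in a few lines of numpy — with an invented rotational toy dataset standing in for the real recordings, and a plain linear model standing in for Brennan and Proekt’s more sophisticated one:

```python
import numpy as np

def fit_low_dim_dynamics(activity, n_dims=2):
    """Project population activity (time x neurons) onto its top principal
    components, then fit linear dynamics x[t+1] ~= x[t] @ A in that
    low-dimensional space. A bare-bones stand-in for the richer
    generative model in the actual paper."""
    centred = activity - activity.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    pcs = vt[:n_dims]                  # top components (n_dims x neurons)
    x = centred @ pcs.T                # low-dimensional trajectory
    A, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
    return pcs, A

def generate(x0, A, n_steps):
    """Roll the fitted dynamics forward from a starting point x0."""
    traj = [x0]
    for _ in range(n_steps):
        traj.append(traj[-1] @ A)
    return np.array(traj)
```

Fit `pcs` and `A` on one set of worms, then apply the same projection and dynamics to a new worm’s activity: if the model is any good, rolling it forward from the new worm’s current state predicts where in the low-dimensional space — and hence which behaviour — comes next.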
In Other News…
And 2019 brought us so much more. The journal Nature changed its entire design and layout, booting out the torturous 1500 word limit “Letters” that made up the bulk of its papers, and instead publishing all papers in its longer Article format — with the aim of improving readability. I’ll let you be the judge of whether that succeeded. Krishna Shenoy and team showed us that spike-sorting is a waste of time for understanding the activity of big groups of neurons (and also the glacial pace of publishing: the preprint was posted December 5th 2017; the paper came out July 7th 2019. Without changing a lot). Neuralink finally announced some stuff, including releasing a white paper on the technical design of neural threads and the robot that implants them, apparently written by Elon Musk himself, according to the author byline. Actually, he has form in this area.
Then there was Ed Yong’s terrifying piece on the scandal over the depression genes that aren’t. An entire edifice of scientific research is built upon the fact that variants of a handful of specific genes (like SLC6A4) alter the risk of depression. Except that they don’t. The links were established in tiny studies using a few hundred people. As soon as you use a big enough sample of people, the link between the gene variants and depression disappears. Which should have been mind-numbingly obvious: there is no way a meaningful link between the variation of a single gene and depression could be detected with a few hundred people. For all of us, it’s yet another lesson that we continue to do science badly wrong. The moral: first work out what you can and cannot detect with the tool you’re using, and only then do the science with that tool.
Regular readers may have noticed that 2019 also brought something of a hiatus to The Spike, the home of this very piece, with the last 6 months bringing mostly silence. That’s thanks to a combination of the commitments of running a lab, running a conference, life events, and a major project that will be announced next year… (hint: it rhymes with “ook”). But The Spike’s back catalogue is now something I’m actually quite proud of, with much to see and do. So if you fancy keeping your mind purring during the Christmas break, take a dive into The Spike’s A-Z Guide.
Happy Holidays everyone. And good luck for 2020. We need it.
Want more? Follow us at The Spike
Appendix — how not to compute a correlation
Remember: each data point is a patient — it represents the average length of a pyramidal neuron’s dendrites in that patient. The correlation was computed between the IQ score and these averages. That’s bad.
Because the data-points are averages, they have an error in their estimation. In their plot, each data-point has error bars giving the standard deviation of lengths in that patient. Some of them are really big. That suggests a lot of leeway for the possible “true” average values (even though computing the average is meaningless, as already discussed above).
At minimum, they should have computed the whole set of possible correlations, by including the full likely range of each patient’s average value. And I’m willing to bet that this set of possible correlations includes many that are indistinguishable from zero.
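That check is simple to run. A sketch of the idea in Python, with invented numbers rather than the paper’s data: resample each patient’s “true” average within its reported error bar, and watch how far the correlation swings.

```python
import numpy as np

def correlation_range(means, sds, iq, n_boot=5000, seed=0):
    """Resample each patient's 'true' average dendrite length within its
    reported error bar (assumed Gaussian), recompute the correlation with
    IQ each time, and return the 95% range of plausible correlations.
    All inputs here are toy values, not the paper's data."""
    rng = np.random.default_rng(seed)
    rs = [np.corrcoef(rng.normal(means, sds), iq)[0, 1]
          for _ in range(n_boot)]
    return np.percentile(rs, [2.5, 97.5])
```

If the error bars are small relative to the spread of the averages, the plausible correlations stay pinned near the reported value; if they are large — as some in the figure appear to be — the plausible correlations span a wide range that can easily include zero.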
The plot contains a stark clue that correlation is not the answer. Taking a single neuron from a patient means that patient’s data-point has no error bars, has no variation. (Whereas in fact that data-point is useless as it’s one neuron out of a few billion pyramidal neurons in the superficial layers). Which means the possible correlation values will vary less by taking fewer neurons per patient. Anytime you get a more reliable correlation by measuring fewer things, you know something’s gone wrong.
The study makes a valiant effort to come up with a causal mechanism, via modelling work that shows having larger dendrites increases the speed at which spikes are made — and so could let neurons track their inputs better. Which of course assumes that is something to do with intelligence…