Why Model the Brain?

An Actual Scientific Case for Brain Simulations

Mark Humphries
The Spike
11 min read · Jul 30, 2019


Jaffa cakes. Important for simulating the brain.

Brains are messily complex bags of cells. Cobbled together by evolution over half a billion years or more, the bauplan of the mammalian brain is a bolted-together mish-mash of bits that work in tandem to keep you alive long enough to procreate.

Say you wanted to understand how one of those bits of the brain works, to understand what it does. Then you have to choose where you want to sit in the pecking order of neuroscience. You can do experiments on the bit of the brain. You can write down theories in maths for what that bit of brain will do, whose predictions you can then compare to experiment (if you can be bothered to dirty your beautiful theory with data). Or you can simulate that bit of the brain in code. The triumvirate: experiment, theory, and simulation. And as the good book says, the least of these is simulation. At least, according to the lack of simulation studies in the glam-mag journals, the lack of support from funders, and the lack of invited talks at neuroscience conferences.

So it was heartening to see a piece in Neuron recently with the grand title “The Scientific Case for Brain Simulations”. A rare opportunity in a high-profile venue to make a cast-iron case for why one should build models of bits of the brain.

And it failed. Conspicuously absent from this “Scientific Case” was Science. Indeed the piece spent much of its generous 10 pages discussing technical issues with simulators — the code used to build a model — rather than why one should want to build such a model. This is like titling a controversial editorial “Why a Jaffa Cake Is a Biscuit”, then spending most of your word limit writing about the wrapper.

Which is a shame, as the authors were spot on in their intentions. Building simulations of brain circuits needs a robust defence. Over the past decade the most high-profile work on brain simulations has come from the Blue Brain Project and its red-rag-to-a-bull named successor, the Human Brain Project. Both have been heavily, deeply criticised (most recently, in The Atlantic). Including by me, and I build these kinds of model. They have absorbed fantastic amounts of cash, to feed the towering ambition of building a one-to-one scale model of 30,000 neurons in the hindpaw region of a teenage rat. Such follies give the impression that building a brain simulation is tantamount to yelling “WE DON’T HAVE A THEORY” in the face of passing scientists.

We can do better than this. So I’ll try. Here’s a slightly-better scientific case for brain simulations. With biscuits.

Let’s start with The Biological Imitation Game (copyright Koch & Buice). The Neuron piece got this part spot on. Building detailed models of a brain region will give you detailed facsimiles of neural activity — spikes from neurons, local field potentials, perhaps more. The better you can capture and recapitulate neural activity from experiments, the more closely your model imitates reality. Fooling an expert, so they cannot tell which is neural activity from your model and which is from the real thing, wins you the Biological Imitation Game.

(Slight flaw here is we can always cheat. For one thing, we could train an arbitrary deep neural network to generate as output spike trains that look like cortical neurons. Indistinguishable, but sod all to do with the brain. Indeed, the powerful data analysis method LFADS (latent factor analysis via dynamical systems) is essentially this idea — training a recurrent neural network to reproduce single neuron activity — used instead as a tool for understanding, by uncovering what network dynamics could underpin the observed data.)
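
To make that concrete, here is a cartoon of the LFADS-style idea — a minimal sketch in PyTorch, not the real LFADS architecture, with all the sizes and names invented for illustration. A recurrent network is trained to reproduce binned spike counts through a low-dimensional latent trajectory, and it is that trajectory we would then go on to interpret:

```python
# A minimal LFADS-flavoured sketch (illustrative, not the published model):
# a recurrent network learns to reproduce binned spike counts via a
# low-dimensional latent trajectory, which is what we then interpret.
import torch
import torch.nn as nn

class TinyLatentDynamics(nn.Module):
    def __init__(self, n_neurons, n_latents=8, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, n_latents)
        self.decoder = nn.GRU(n_latents, hidden, batch_first=True)
        self.to_rates = nn.Linear(hidden, n_neurons)

    def forward(self, spikes):                 # spikes: (batch, time, neurons)
        h, _ = self.encoder(spikes)
        latents = self.to_latent(h)            # low-dimensional dynamics
        g, _ = self.decoder(latents)
        return self.to_rates(g), latents       # log firing rate per bin

model = TinyLatentDynamics(n_neurons=50)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
spikes = torch.poisson(torch.rand(16, 100, 50))    # stand-in for real data

for step in range(200):
    log_rates, latents = model(spikes)
    # Poisson likelihood: how well do the inferred rates explain the spikes?
    loss = nn.functional.poisson_nll_loss(log_rates, spikes, log_input=True)
    optim.zero_grad()
    loss.backward()
    optim.step()
```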

Doing well at the Biological Imitation Game opens the door to doing science. A good brain simulation is a virtual preparation. One in which we can do virtual experiments. Experiments at impossibly high rates, or that are just impossible in animals. We can remove any neuron we like. We can alter any property of a neuron we like. We can test Llinás’ Law, swapping cortical neurons for thalamic neurons, and seeing if it makes a difference. We can simulate how different optogenetic protocols will change the dynamics of neural activity. We can simulate drugs that block or stimulate receptors, and their effects on neural activity. All these virtual experiments make real predictions.
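
What does a virtual experiment look like in code? Here is a deliberately tiny sketch — a toy network of leaky integrate-and-fire neurons with made-up parameters, not any particular bit of brain — in which “remove any neuron we like” is a one-line intervention:

```python
# A toy virtual preparation: a sparse random network of leaky
# integrate-and-fire neurons (all parameters illustrative) in which we can
# silence any neuron and rerun — something no real preparation allows at will.
import numpy as np

def run_lif_network(n=100, t_steps=2000, dt=0.1, knockout=None, seed=1):
    rng = np.random.default_rng(seed)
    # sparse random weights: ~10% connectivity, mixed excitation/inhibition
    W = rng.normal(0.0, 0.6, (n, n)) * (rng.random((n, n)) < 0.1)
    if knockout is not None:
        W[:, knockout] = 0.0       # the "lesion": this neuron's output is removed
    v = np.zeros(n)                # membrane potentials (arbitrary units)
    prev = np.zeros(n)             # spikes from the previous time step
    spikes = np.zeros((t_steps, n), dtype=bool)
    for t in range(t_steps):
        drive = rng.normal(1.2, 1.0, n)        # noisy external input
        v += dt * (-v + drive) + W @ prev      # leaky integration + synaptic kicks
        fired = v > 1.0                        # threshold crossing
        v[fired] = 0.0                         # reset after a spike
        spikes[t] = fired
        prev = fired.astype(float)
    return spikes

control = run_lif_network()
lesioned = run_lif_network(knockout=7)         # "remove any neuron we like"
print("change in mean firing:", lesioned.mean() - control.mean())
```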

But these are all biophysics. They are the mechanics of what happens to property X of neurons when we change property Y. The scope of brain simulations is much richer. Fourteen times richer.

In “Why Model?”, Joshua Epstein gives fourteen reasons for building computer models beyond “predict”. For he, as I do, discounts “predict” as a primary driver. Let’s dive into a few.

(Well, he actually gave sixteen, but Epstein is writing as an agent-based modeller of society. So his reasons include “Discipline the policy dialogue”. I know we all love brains, but frankly studying the salt-sensitive neurons in C. elegans, or whether neurons in the prefrontal cortex of a rat use a population code, isn’t much use for the policy issues that beset society. While in principle brain simulations could one day inform how we intervene in, drug, or treat neurological disorders, we’re a long way off. And the other irrelevant reason? “Offer crisis options in near real-time”. Again, not relevant. At least, not until Elon Musk accidentally creates Skynet out of his Neuralinked clientele, in a cruel irony.)

One reason for building models is to understand, not predict. We can derive predictions from our models after we’ve used them for more enlightened science. I’d wager Hodgkin and Huxley did not build their model of action potential generation in the squid giant axon to make predictions. They built it to understand how the essential building blocks they and others had discovered experimentally — the dependence on sodium and potassium, the shape of the voltage change, and so on — fit together to make a spike. With that understanding in place, we’ve derived endless predictions from that model since then, and new insights into how neurons work.
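
And the Hodgkin-Huxley model remains a wonderful demonstration that the building blocks do fit together. Here it is in its standard modern form — textbook squid-axon constants, crude forward-Euler integration; a sketch for illustration, not a careful numerical scheme:

```python
# The classic Hodgkin-Huxley equations (standard modern convention: voltage
# in mV, time in ms). Sodium and potassium conductances are gated by m, h, n;
# forward Euler is crude but fine for illustration.
import numpy as np

C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # capacitance, peak conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.4           # reversal potentials

def rates(V):
    """Voltage-dependent opening (alpha) and closing (beta) rates of the gates."""
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    return a_m, b_m, a_h, b_h, a_n, b_n

dt, T = 0.01, 50.0                            # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32           # resting initial conditions
trace = []
for step in range(int(T / dt)):
    I_ext = 10.0 if 5 <= step * dt <= 45 else 0.0   # injected current (uA/cm^2)
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    m += dt * (a_m * (1 - m) - b_m * m)       # sodium activation
    h += dt * (a_h * (1 - h) - b_h * h)       # sodium inactivation
    n += dt * (a_n * (1 - n) - b_n * n)       # potassium activation
    I_Na = g_Na * m**3 * h * (V - E_Na)       # fast inward sodium current
    I_K = g_K * n**4 * (V - E_K)              # delayed-rectifier potassium current
    I_L = g_L * (V - E_L)                     # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    trace.append(V)
# `trace` now holds a train of action potentials: the building blocks,
# strapped together, make spikes.
```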

Related to understand is explain, especially unintuitive phenomena or complex mechanisms. Brain simulations are ideally suited to this task. For example, we showed that deep brain stimulation for Parkinson’s disease could work because of the stochastic nature of wiring between neurons in the basal ganglia. By paying attention to the details of the basal ganglia’s circuit, we could show that deep brain stimulation in the subthalamic nucleus causes a complex mix of responses in downstream nuclei thanks to the randomness of wiring alone. And some of those responses “reset” neurons to their pre-Parkinson’s state. Equally compelling theories of how deep brain stimulation works have also been worked out in detailed circuit simulations. (See also Jonathan Rubin’s exposition for why biology matters when using models to understand the basal ganglia).
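
The wiring-randomness argument can be caricatured in a few lines — this is emphatically not our basal ganglia model, just a hypothetical toy with made-up numbers, showing how random convergence alone yields a mix of responses to the same stimulus:

```python
# Toy version of the argument: each downstream neuron receives a random number
# of direct excitatory contacts from the stimulated nucleus and of indirect
# inhibitory contacts, so identical stimulation excites some cells, inhibits
# others, and barely moves the rest. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_cells = 200
exc = rng.poisson(2.0, n_cells)     # excitatory contacts per downstream cell
inh = rng.poisson(2.0, n_cells)     # inhibitory (indirect-route) contacts
response = 1.0 * exc - 1.2 * inh    # net rate change under stimulation

print("excited:", (response > 0.5).sum(),
      "inhibited:", (response < -0.5).sum(),
      "barely moved:", (np.abs(response) <= 0.5).sum())
```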

Closely related to explain is test. Test ideas and theories. Say I wanted to claim that neurons in region X of cortex repeated precise patterns of spikes. That a neuron produced a particular pattern of spikes far more often than chance, according to experimental data. And this pattern is somehow its code. Well, then we can turn to a circuit model and ask: under what conditions would this model predict that such a set of repeated patterns appears? And the answer will be: hardly ever. For it would require that a large, recurrently wired collection of neurons repeats exactly the same state of its dynamics multiple times. This is vanishingly unlikely. So a circuit model here tells us that the hypothesis — that cortex encodes by precisely repeated patterns of spikes — is unlikely to be correct.
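
The combinatorial heart of that argument fits in a few lines. This is a back-of-the-envelope sketch with invented rates and independent neurons, not the circuit-model test itself — but it shows how vast the space of possible spike patterns is:

```python
# Back-of-the-envelope version of the argument (illustrative rates, independent
# neurons): count how often a multi-bin spike pattern repeats exactly.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_neurons, n_bins, p_spike = 50, 5000, 0.1      # 10% spike probability per bin
spikes = rng.random((n_bins, n_neurons)) < p_spike

window = 3                                      # a "pattern" = 3 consecutive bins
patterns = Counter(spikes[t:t + window].tobytes()
                   for t in range(n_bins - window))
repeats = sum(1 for count in patterns.values() if count > 1)
print(f"exactly repeated {window}-bin patterns: {repeats}")
# There are 2**(50*3) possible patterns per window: a recurrent circuit would
# have to revisit exactly the same state to repeat one — vanishingly unlikely.
```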

More generally, we might ask: can the dynamics necessary to encode information by medium X — spike timing, counts, latencies, and so on — be supported in a simulated circuit that respects the details of cortex? With NMDA receptors; or proper anatomy — layers, and connection probabilities, and more. With adaptation in the spiking of pyramidal neurons; or those pesky interneurons that sadly are not just a pyramidal neuron with a minus sign in front?

I can find few examples of this kind of test-theory-against-the-details applied to the cortex (notable exceptions include the work emanating from Wolfgang Maass’ group e.g. this; and a recent paper from Duarte and Morrison on the computational capacity of circuit models of layers 2 and 3 of cortex). Outside of cortex, this kind of work has been done to death. The hypothesis that the basal ganglia implement some kind of “selection” device has been mercilessly tested against the reality of the basal ganglia’s biophysics, and replicated by multiple groups. And I’ve lost count of the number of models purporting to explain how the wiring and dynamics of the hippocampus and friends can create grid cells in entorhinal cortex. But that’s great — we have this wealth of models, falling into different classes of mechanisms, each of which can make different predictions. Science, great science, with brain simulations.

Well done, you’ve made it half way through this essay. Have a biscuit.

Neuroscience has deep reasons for building brain simulations that go beyond Epstein’s list.

One is to synthesise. On his quiet days, when he wasn’t spouting about simulating a human brain in ten years’ time or using one-to-one scale models of the rodent cortex to tackle autism, this is what Henry Markram claimed as the main rationale for the Blue Brain Project. And rightly so. Each experimental study of a bit of brain, as tough as it is to do, is one data-point. In one aspect of what that bit of brain is or does. Brain simulations are how we bring all those data-points together. This synthesis lets us do many things.

We can find out the known unknowns — what we need to know, but don’t. Better, we can test the compatibility of all the data-points on one aspect — like the measurements of individual ion channels in a Purkinje cell or Martinotti cell. When we strap them together into a single model, it becomes immediately apparent if something goes awry.

A sparkling case study of model synthesis is Eve Marder’s work with Astrid Prinz and others on solving the conundrum of lobster stomach dynamics. They work on a small circuit of neurons that controls movement of the lobster (and crab) stomach, a circuit that has yielded many fundamental insights into how neurons work together. The conundrum is that this circuit has a characteristic oscillation in all animals; yet between animals the neurons vary wildly in their expression of ion channels and in the strengths of the connections between them. To address this problem, Prinz built millions of detailed circuit models, each one a different lobster: each model sampled from the enormous space of known variations in ion channels and connections. And the solution to the conundrum turned out to be simple. Most of their zoo of model lobsters had the characteristic oscillation. The simulated models were showing us that the biological circuit controlling the lobster stomach is degenerate: it can achieve the same end-goal — the oscillation — through many different solutions. A beautiful use of brain simulations to show the beautiful robustness of biology. And to show the laziness of evolution, which evidently can’t be bothered to evolve a tight genetic specification for the lobster stomach circuit, knowing it can just throw together any old thing and it will probably turn out okay in the end.
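
The logic of that study scales right down to a sketch. Below is a hypothetical miniature version — a two-cell half-centre oscillator (mutual inhibition plus slow adaptation) with invented parameter ranges, standing in for the millions of full conductance-based models Prinz actually built. Sample random “animals”, simulate each, and ask which still oscillate:

```python
# Prinz-style parameter sampling, in miniature (toy model, invented ranges):
# draw random parameter sets for a two-cell half-centre oscillator and ask
# which ones still produce the characteristic alternating oscillation.
import numpy as np

rng = np.random.default_rng(42)

def oscillates(w, b, tau, t_steps=4000, dt=0.05):
    x = np.array([0.1, 0.0])       # firing rates of the two cells
    a = np.zeros(2)                # slow adaptation variables
    trace = np.empty(t_steps)
    for t in range(t_steps):
        drive = 1.0 - w * x[::-1] - a          # tonic input minus partner's inhibition
        x += dt * (-x + np.maximum(drive, 0))  # rate dynamics
        a += dt * (b * x - a) / tau            # slow adaptation
        trace[t] = x[0]
    late = trace[t_steps // 2:]                # discard the transient
    return late.std() > 0.05                   # sustained alternation = oscillation

samples = [(rng.uniform(0.5, 4), rng.uniform(0.5, 4), rng.uniform(2, 20))
           for _ in range(500)]
n_osc = sum(oscillates(*params) for params in samples)
print(f"{n_osc} of 500 random 'animals' still oscillate")
```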

Even better, we can test the compatibility of different data speaking to the same theory. One small example is our test of the canonical theory for how reinforcement learning operates in the brain: dopamine released in response to unexpected reward controls changes to the strength of connections from cortex to the striatum, which thereby alters the probability of choosing the reward-eliciting action in the future (because the striatum is the gatekeeper for the basal ganglia, which we’ve already learnt are something to do with the selection of actions). The truth of that theory rests on data drawn from many different areas of research being compatible: the in vivo data supporting the reward prediction error theory of dopamine; the in vitro data on the complex rules of spike-time dependent changes at the cortico-striatal synapse; the confusing mass of data on how dopamine gates plasticity at a single synapse; and the evidence that striatum controls action selection. So we made a model to incorporate all that data and ask if they do fit together. Which they did. Which was nice.
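
Reduced to cartoon form, the theory we strapped together looks something like the sketch below — a hypothetical three-factor learning rule with everything made up for illustration, nothing like the detail of the actual model: cortical input drives two striatal “action channels”, dopamine carries the reward prediction error, and dopamine gates the change in the cortico-striatal weights:

```python
# Cartoon of the canonical theory (all numbers illustrative): dopamine = reward
# prediction error; it gates plasticity at the cortico-striatal synapses of
# whichever action channel was just active (a "three-factor" learning rule).
import numpy as np

rng = np.random.default_rng(7)
w = np.array([0.5, 0.5])       # cortico-striatal weights, one per action
value = 0.0                    # expected reward: the baseline for prediction error
reward_prob = [0.8, 0.2]       # action 0 is objectively better

for trial in range(500):
    striatal = w + rng.normal(0, 0.1, 2)      # noisy striatal activation
    action = int(np.argmax(striatal))         # basal ganglia "select" the winner
    reward = float(rng.random() < reward_prob[action])
    dopamine = reward - value                 # reward prediction error
    value += 0.05 * dopamine                  # update expected reward
    eligibility = np.zeros(2)
    eligibility[action] = 1.0                 # recently active synapses are eligible
    w = np.clip(w + 0.1 * dopamine * eligibility, 0, 1)  # dopamine gates plasticity

print("learned weights:", w.round(2))         # the better action's weight ends higher
```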

The scientific case for brain simulations revolves around a series of simple ideas. Brain simulations can act as virtual bits of brain, to do amazing, impossible virtual experiments. But they are so much more: the basis of understanding; explanations of how and why; tests of ideas and hypotheses and theories; synthesisers of data; and the way to test the coherence of entire research fields.

Some may find the above arguments compelling. Some may not. Even when made well, and I don’t flatter myself that this particular case was made well, the scientific case for brain simulations is often misunderstood or dismissed. Perhaps you’re rolling your eyes right now. And that’s a fair reaction. The eternal problem with making the case for brain simulations is that the people who get the most scientific insight from them are those who build them.

I have learnt incalculable amounts about the brain and its workings from the process of building circuit models of its different bits. Far more than I have learnt from reading papers.

For the process reveals hidden things — innocuous-looking assumptions with major ramifications, the unexpected power of a small biological detail, mechanisms unconsidered. Because building a brain simulation forces you to face up to the brain in its terrifying complexity and think hard. Which all means much of the scientific case for brain simulations comes in the making, not the running; the creating, not the numerics. And these insights are so tough to convey in a scientific paper. For they do not fit into the neat Methods/Results/Discussion triad of the classic paper. Little wonder that many brain simulation papers will contain somewhere “the conceptual contribution of this work is…”. That’s a computational scientist’s plea, saying: “the main scientific insight from our model is not in the Results. It is the model itself, it is what we found on the way; the model is the idea — the simulations are demonstrations of that idea. Please, read with that in mind.”

And that’s perhaps exactly why we still, endlessly, repeatedly, have to make the scientific case for brain simulations. Because we modellers often fail to do so in the papers we write. We perhaps need to work harder to explain what we have done, and what we have learnt, and what we are contributing. But it’s tough. If you think reading modelling papers is tough, try writing one. So the next time you take a deep breath and plunge bravely into a paper about a model that simulates a bit of brain, give a little thought to the poor souls who wrote it, and dig a little deeper. The science is there, somewhere — it just might not be where you were expecting.

Want more? Follow us at The Spike

Twitter: @markdhumphries

[Edited 31/7/19 to make clear what LFADS is and does]
