How will we crack the brain?
If you would like to see things straight out of a science fiction movie, you should visit a neuroscience laboratory. Technology and science have advanced so quickly that I am not sure the public understands how far we have come. Depending on the species, creating new transgenic animals — where you slip new genetic material into an organism — starts at ‘pathetically easy.’ During my PhD, there were days I would create the DNA for five or ten new transgenics in one go; creating the animals themselves was hardly a challenge. Light can be used as a physical force to move things around (“optical tweezers”). Scientists routinely create custom-made viruses that go forth into a chosen animal and label a precise set of neural cells. We can rain light down onto an animal to replay — or delete — memories. The recent invention of the CRISPR system allows genetic engineering at unprecedented precision.
And the technology is advancing — fast. Things that seemed impossible five years ago are being commercialized right now. But for all that, the brain is still largely a black box that we can prod and poke without understanding what it is actually doing, or how it got that way. This prompts the question: in what directions is neuroscience heading to make sense of this neural hydra?
The specter of past knowledge
Although no one seems to realize it now, the idea that the center of cognition and thought lies in the brain is a relatively new one. For most of human history, it was the heart that people would point to as the locus of their soul. As Carl Zimmer expertly narrates in his book The Soul Made Flesh, it was only in the 1600s, through an intense scientific effort, that the brain was identified as controlling the body. To think how many millennia of human history it took for us to realize that!
In a sense, neuroscience is both young and old. It was all the way back in 1780 that Galvani discovered that electricity caused muscles to twitch, though discovery of the chemical that allowed animals to control this process (acetylcholine) had to wait until the 1920s. But the most common neurotransmitter, glutamate, had to wait even longer. Used in perhaps 90% of neurons in the cortex, it was not nailed down as a transmitter until the 1970s. Although neuroscience has a long history, what we now take as its foundational building blocks are still stunningly new.
Because of this, the “future of neuroscience” has always been filled with “unknown unknowns”. In 1966, for instance, Marvin Minsky asked his students to attempt a simple summer project: “[construct] a significant part of the visual system.” Looking at the syllabus, it is easy to imagine how Minsky thought he could take one simple step after another and create a skeleton of a visual system. At the time each step seemed so small — but with the benefit of hindsight each step looks more like a marathon. We thought it would be easy because we are intuitively visual creatures. It is so easy for us, why would it be complicated? But there is a reason that the brain spends 30% of its computational real estate in cortex on vision.
A similar attempt to guide the future came from David Marr, who wrote the groundbreaking book Vision. Marr has become something of a legend among neuroscientists: the brilliant mind that died of leukemia at the young age of 35. It is tempting to imagine the world in which Marr had survived — how far could he have gotten? — but he left behind the skeleton of a research program that continues to be used to this day.
Nearly every question of how to advance neuroscience begins with Marr’s “levels of analysis”. The first level is computational: what is the system doing? The next is algorithmic: how does it do it? The final is implementational: what is its physical reality?
Although these questions are crystal clear, neuroscience is a particularly diverse field, asking a particularly diverse set of questions. You have cognitive scientists studying how we think, you have molecular biologists studying how tiny molecules effect seemingly minuscule changes in plasticity and development, you have engineers asking how to get electrodes to decode brain activity in order to create Ghost In The Shell-style prostheses, and you have so much more. Each is studying a particular piece of a large puzzle. How are they going to put it together?
The state of current knowledge
The past two decades have seen an explosion in tools that can dissect and record signals in the brain. Diverse sets of molecules that allow investigation of tens to hundreds of neurons simultaneously have drastically improved our spatial knowledge of the brain. Light-activated ion channels combined with genetics have allowed us to precisely label and manipulate specific types of neurons. What was once a field devoted to such physics-era concepts as electrodes and membrane voltages is slowly moving in the direction of molecular biology, with signaling cascades and custom-made viruses the tools of the day.
What we would like to understand, though, is: what are the tools of tomorrow? Where is neuroscience heading? The Future of the Brain, edited by Gary Marcus and Jeremy Freeman, collects essays from a series of neuroscientists on the direction research is moving. Importantly for a field as variegated as neuroscience, every essay has a distinct take on which direction matters most. But several themes emerge.
The first is the need to pick out the tangled strings of the brain and map the thing. Although we have some broad idea of the anatomy, new connections between major regions are still being discovered. But the need goes deeper than that: within each area of the brain, different neurons do different things and the precise set of connections matters. Like a symphony where instruments can come together to create crescendos and lulls, each neuron is an instrument primed to do something in concert with the other instruments. But what and in which combinations we only barely know — and can only barely distinguish the brasses from the woodwinds.
But in order to get the whole set of connections, new ideas and technologies are needed. Previous techniques that have mapped the entire connectome of the nematode nervous system are too unwieldy even for a brain as tiny as a mouse’s. What if we could put a barcode scanner up to each neuron and ask, where are you from? What have you seen and experienced? One possibility suggested by Tony Zador and George Church is to add a unique genetic barcode to neurons in order to identify connections and activity. Zador and Church are currently working on customized viruses sent into cells to do our bidding and build up these unique barcodes.
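To make the barcode-scanner metaphor concrete, here is a toy sketch of the core logic (my own illustration, not Zador and Church’s actual protocol): give every neuron a unique random nucleotide sequence, let pre- and postsynaptic barcodes become physically joined at each synapse, and then recover the wiring diagram purely by sequencing the joined pairs, with no imaging at all. All names and numbers here are hypothetical.

```python
import random

random.seed(0)
BASES = "ACGT"
BC_LEN = 30  # with 4^30 possible 30-mers, barcode collisions are vanishingly rare

def make_barcode(length=BC_LEN):
    """A random nucleotide sequence serving as a neuron's unique ID."""
    return "".join(random.choice(BASES) for _ in range(length))

# Assign a barcode to each neuron in a toy four-neuron circuit.
neurons = ["n1", "n2", "n3", "n4"]
barcode_of = {n: make_barcode() for n in neurons}
neuron_of = {bc: n for n, bc in barcode_of.items()}  # reverse lookup

# The true (hidden) wiring we would like to recover.
true_synapses = {("n1", "n2"), ("n1", "n3"), ("n3", "n4")}

# "Sequencing reads": at each synapse the pre- and postsynaptic barcodes
# end up concatenated and are read out together.
reads = [barcode_of[pre] + barcode_of[post] for pre, post in true_synapses]

# Reconstruction: split each read in half and look up which neurons own the halves.
recovered = {(neuron_of[r[:BC_LEN]], neuron_of[r[BC_LEN:]]) for r in reads}

assert recovered == true_synapses  # wiring recovered from sequence data alone
```

The appeal of the approach is visible even in this cartoon: sequencing scales far more cheaply than electron microscopy, so the hard problems shift to biology (delivering and joining barcodes reliably) rather than imaging.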
Without untangling the mess of connectivity, we will never have the blueprints of the brain. Yet as the nematode has shown, simply having the blueprints does nothing if we cannot understand what is going on at each stop on our map. Conventional techniques can perhaps record tens or (at best) roughly a hundred neurons at a time — an insignificant number compared to the entirety of the nervous system. A rounding error.
Big data and simulation (Algorithms)
If that is such a problem, why not just record activity from every neuron in the brain? Although this seems laughably optimistic, Ahrens and Freeman have already accomplished the feat in the larval zebrafish. Improvements are needed: the temporal resolution is still too slow to detect the discrete spikes that neurons signal with. But the sheer fact that it has been done once is leading others to try their hand at it while improving the technology. Amusingly, imaging the entire brain is often accompanied by Fish Virtual Reality. Perhaps soon philosophical questions about brains in vats will no longer be so philosophical.
The largest problem with imaging the whole brain — or even large slices of it — is what to do with all of the data. Even the smaller brain of the zebrafish contains hundreds of thousands of neurons with their fractal combinations of connections changing with time and experience. How do we make sense of it?
If you are at all conscious these days, you know that the buzzword of the moment is big data. Freeman points out that we are reaching the point where an individual experiment may generate 100–200 terabytes of data — roughly the same amount that Facebook or Twitter generates in any given day. This is a problem both for individual scientists and the neuroscientific community as a whole.
Two approaches are needed: organization and integration. Sean Hill is part of the Human Brain Project (HBP), which is attempting to systematize the bewildering notational systems used in each laboratory. By combining data collected in different laboratories, the HBP hopes to link previously unconnected data into one coherent whole. But the push for integration goes further: data is meaningless without context, and one way to integrate all the data is to just put it into a giant model and see what happens. Are these incomprehensibly dry measurements enough to explain — the movement we see? the decisions we make? the pulsing electrical undulations coursing through the brain? The HBP hopes to connect this data and see if we can make further predictions based on what we know — and on what seems missing.
While the HBP is focused on simulating the brain starting at the level of individual synapses and moving up, another approach is taken by Chris Eliasmith. His program Spaun is less concerned with the nitty gritty details than with whether broad modeling of different regions can combine to produce more complex behaviors. Olaf Sporns suggests a slightly reduced approach of modeling broad networks, rather more like a ‘weather forecast’. These forecasts could then be slightly modified in order to understand where pathologies arise from. Perhaps eventually, these forecasts could be personalized to generate medical diagnoses from simulated brains.
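To give a feel for what “starting at the level of individual synapses and moving up” means in practice, here is the very bottom rung of that ladder: a leaky integrate-and-fire neuron. This is my own minimal sketch, far simpler than anything the HBP simulates (no ion-channel kinetics, no dendrites, no synaptic dynamics), but it shows the basic currency of such models: a membrane voltage that integrates input, leaks toward rest, and spikes at threshold. All parameter values are illustrative.

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters only).
dt = 0.1                                   # time step, ms
tau = 10.0                                 # membrane time constant, ms
v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0  # mV

v = v_rest
spike_times = []
for step in range(5000):                   # simulate 500 ms
    i_in = 20.0                            # constant input drive (arbitrary units)
    # Euler step of: dv/dt = (-(v - v_rest) + i_in) / tau
    v += dt * (-(v - v_rest) + i_in) / tau
    if v >= v_thresh:                      # threshold crossed: emit a spike
        spike_times.append(step * dt)
        v = v_reset                        # and reset the membrane

print(f"{len(spike_times)} spikes in 500 ms")
```

Scaling this cartoon up to tens of thousands of biophysically detailed neurons and their synapses, as the HBP aims to, is precisely where the data-management and integration problems above come from.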
These three approaches — complicated biophysical replication, network modeling, and whole brain modeling — are an entry into the contentious question of the appropriate level of understanding. Where should we be working? These models have a tendency to focus entirely on neurons and ignore pieces of the puzzle like the more mysterious glia (by some counts the majority of cells in the human brain), the extracellular matrix that structures the neurons, and the vasculature that feeds them. Are these important? How reducible is the brain? Can we model tiny structures or must we model the whole thing in order to understand it? These are questions that remain to be answered, yet the fact that we can so glibly discuss simulating whole brains should tell you where the science is heading.
Putting it all in context (Computation)
Even more important than simply setting these wind-up models in motion is understanding why they work, and why they do what they do. When faced with the multitudinous combinations of neural firing, how does one make sense of them? One popular approach has been to take notes from statistical physics, which can accurately predict properties of large numbers of molecules in motion. Here, Krishna Shenoy suggests that neuroscientists should instead look at neural activity as tracing out a geometry: the activity of one neuron goes up, another goes down, another stays the same, but the path their joint motion traces is like a rollercoaster. It whirs this way and that, but stays on certain tracks. Understanding where these tracks lead and how they move will push us toward understanding what the brain is computing.
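The rollercoaster metaphor can be made concrete with a toy example (my own illustration, not Shenoy’s actual analysis). Below, fifty simulated “neurons” are all driven by just two hidden oscillating signals plus noise. The recording looks fifty-dimensional, but a standard dimensionality reduction (PCA via SVD) reveals that nearly all of the population’s motion lives on a simple two-dimensional track. All sizes and noise levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden latent signals (a rotation) drive the whole population.
T = 500
t = np.linspace(0, 4 * np.pi, T)
latents = np.column_stack([np.sin(t), np.cos(t)])   # shape (T, 2): the "tracks"

# Each of 50 neurons reads out a random mixture of the latents, plus noise.
mixing = rng.normal(size=(2, 50))
rates = latents @ mixing + 0.1 * rng.normal(size=(T, 50))  # shape (T, 50)

# PCA via SVD: find the axes along which the population actually moves.
centered = rates - rates.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
variance_fraction = s**2 / (s**2).sum()

# Nearly all variance sits in the first two components: a 50-neuron
# recording tracing a simple loop, not a 50-dimensional tangle.
print(f"variance in first 2 PCs: {variance_fraction[:2].sum():.2%}")
```

The substantive claim behind the metaphor is exactly this: population activity is often confined to a low-dimensional manifold, and characterizing that manifold — rather than each neuron separately — is what reveals the computation.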
Perhaps the most important essays in this book are those from the skeptics. It is easy and fun to speculate about what grand powers we will have in the future. It is hard and important to think rigorously about the challenges that lie in our way — so that we can overcome them.
If one issue arises again and again in these skeptical chapters, it is nervousness over being able to translate data into explanation. Say we have all this data, integrated and running in models — what will it tell us? Nothing, without a theory explaining it. Yet theorists are in surprisingly short supply. We have rushed forward so quickly in acquiring the data that we have lagged behind in explaining it in a consistent manner.
Neuroscience must also grapple with the fact that we cannot look at the brain outside the context of the animal. Thomas Nagel famously asked what it was like to be a bat. Could we ever understand something that sprays sonar from its mouth in order to “see”? Leah Krubitzer asks whether we can understand a nervous system without understanding the rest of the animal. A duck-billed platypus can use its bill to sense electrical signals, a property which is foundational for many of its social behaviors in the water. So how could we possibly understand the brain of the platypus without understanding its duck-bill? Ironically, this means that a brain-centric approach to the brain could end up being misleading. Like a precautionary note from Darwin, the duck-billed platypus reminds us that the brain is a product of its peculiar evolutionary history. This should make us seriously ask how many ways there are to ‘make’ a brain.
Where will this take us?
The benefits of all this work are innumerable. Already, prosthetics have made impressive advances over the past decades. What will happen once we are able to freely decode the activity of the brain? How long until robotic limbs that can be controlled directly by the brain — and send feelings back to it — are a common consumer item? Or any kind of mind-controlled device, wirelessly sending data to your smart phone? And how about all the diseases of the nervous system? Psychiatry today has the same problem as neuroscience — treating the brain as one big mass. Once molecules and connections can be tweaked with specificity, more precise drugs with fewer side effects will be in the offing.
In a speculative chapter, Marcus and Koch suggest that this future will require mind-reading nanobots and require us to confront new ethical quandaries. What happens when you shut down a simulated brain? How about shutting down a simulation of part of a brain?
How do we get to that future?
With 30,000 neuroscientists descending on the large Society for Neuroscience conference every year — and those being a minority of the total number of scientists in the field — it is unsurprising that there should be a diversity of opinions as to where the field is heading. But the number of levels that neuroscience operates on necessitates such diverse opinions. Neuroscience is overwhelmingly intellectually diverse, subsuming psychologists and molecular biologists and ecologists and physicists and more. And each has its own research plan for the future, like a quantum multiverse of ever-expanding possibilities.
The danger facing neuroscience is restricting these possibilities by focusing on old ideas or only a few big projects. It was recently revealed that probably fewer than ten scientists received their first major research grant before the age of 31, while the average age of a first grant is in the 40s. Postdocs that can stretch to almost a decade are preventing those with exciting new ideas from coming forward, and hiring is overly focused on those coming from famous labs.
It is clear from this book that the only path to the future is to let a thousand ideas bloom. After all, if ten other scientists had been asked about the future of the brain, there could have been ten other ideas. Neuroscience is exploding outward in a multitude of directions — and it might still be too few for such a complex organ with such a complex set of functions. Cognitive, economic, ecological, molecular: there is no one future of the brain, there are thousands. These threads of inquiry are all fundamental to understanding this thinking organ, and will need to be slowly stitched together one theory at a time.