Blueprint for a billion euros: the “Brain-on-a-Chip” Project

Neuromorphic chips, and a counterfactual history of the Human Brain Project

Mark Humphries
The Spike


If you’ve committed to spending half a billion euros on a single research project, at what point would you be prepared to admit you made a mistake and start again? When the project leader raises public support by seemingly lying to the media about what the project will do? When the entire research field rejects the project, and more than 500 senior scientists sign an open letter condemning the project’s governance? When some of the project’s own leaders question its purpose? When an independent report, commissioned by you, agrees with every single criticism leveled at the project? When the latest review of the entire idea of spending half a billion euros on any project can find nothing better to say of this one than that “some bits of it are going well”, implying that most of it is a waste of taxpayers’ money?

The EU-funded Human Brain Project has suffered all these fates. Yet it still lives. To its credit, the EU’s research funding arm has given the Project’s management a complete overhaul, changed the leader, and released a new vision to the world. Yet the Project remains a clumsily gaffer-taped mess, an ungainly mish-mash of different projects labelled “brain”. (Or, in some cases, nothing to do with the brain at all — “Medical Informatics”, anyone?). At least half a billion euros will be sunk into this thing by the time it ends. The goal is to spend a billion. One billion euros. Which raises the question: what else could have been done with that money?

Here I offer a blueprint for what the Project could have been. The seeds of this blueprint are already in the Project; they could have been at the forefront from the start. So consider this a counterfactual history of the Human Brain Project.

Let’s start with the Achilles heel of the Project: it doesn’t have a purpose. The Project is funded by the “Future Emerging Technologies” arm of the EU. The whole point of this funding is self-explanatory: fund projects on ground-breaking new technology, its applications, and the basic science that would enable this technology. At the core of any such project is the new technology, and everything pivots around it.

There is no new technology at the core of the Project. At the outset, the core was the epically detailed model of a few thousand neurons in the cortex of a teenage rat (no, seriously, this was worth half a billion euros, stop giggling at the back). Actually, it wasn’t worth it: after the backlash, the EU demoted this model from the core of the project to just another wing of a many-winged beast. And replaced it with: nothing.

But there is a new technology in the Project, and one around which you could easily build a credible, feasible, ten-year, billion euro project: neuromorphic chips.

Neuromorphic chips are one of two candidates for long-term solutions to the computing power problem. The other is quantum computing. What is this problem? You may have noticed that the clock-speed of the processor in your laptop hasn’t gone up much, if at all, in the last few years. We’ve reached the point where we cannot speed up the raw processing power of microchips anymore, because we cannot shrink the transistors in them much further without running into weird quantum effects, or simply uncontrollable overheating. The solution to date has been to instead make multiple processors on a single chip — multiple cores — to do processing in parallel. But for some problems we’ll always need raw computing power. And we always want processors to be more efficient. And there are many problems that demand exponentially more computing power before we can even start cracking them.

So the two long-term solutions: embrace quantum weirdness, or make a chip that works like the brain. Brains process huge quantities of information, quickly, using orders of magnitude less power than a processor. So the goal of neuromorphic chips is simple: make a chip which passes information between its components like a brain does between its neurons — massively parallel, efficiently, and noisily. Key here is that these chips pass information between their elements using spikes — not just on/off like transistors, but trains of pulses. This allows each element to make complex codes, to represent things by the rate of pulses, the timing of pulses, or both. Sounds weird? Indeed; but IBM, Qualcomm and others have already invested billions of dollars in these neuromorphic chips.
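To make that concrete, here is a minimal sketch in Python — a toy illustration, not any real chip’s interface — of how a train of pulses can carry both a rate code and a timing code:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def poisson_spike_train(rate_hz, duration_s, dt=0.001):
    """Generate a spike train: 1 in a time bin means a pulse, 0 means silence."""
    n_bins = int(duration_s / dt)
    return (rng.random(n_bins) < rate_hz * dt).astype(int)

# Two elements "represent" different values by how often they pulse...
fast = poisson_spike_train(rate_hz=80, duration_s=1.0)
slow = poisson_spike_train(rate_hz=10, duration_s=1.0)

# ...which a downstream element can read out simply by counting spikes:
print(fast.sum(), slow.sum())  # roughly 80 vs roughly 10 spikes

# The very same spikes also carry a timing code: the exact moments of the
# pulses, and the intervals between them, can represent information too.
spike_times = np.flatnonzero(fast) * 0.001   # spike times, in seconds
intervals = np.diff(spike_times)             # inter-spike intervals
```

A transistor’s state carries one bit at a time; a spike train like `fast` carries information in both its count and its temporal pattern, which is why each chip element can support far richer codes.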

Neuromorphic chips are the ground-breaking new technology around which a “Brain-on-a-Chip” Project could have been built.

Here then is the blueprint, in four main projects:

1. The core platform: neuromorphic chips. The two neuromorphic chip research teams already on the Human Brain Project — SpiNNaker and BrainScaleS — are moving into creating robust, commercially-viable designs for their chips. Scaling these up and testing them would be a core focus.

Spin-off projects from this core would include other teams of chip designers exploring exotic designs; and materials scientists exploring ways of integrating exotic materials — graphene, silicene, and their ilk — into future chip fabrication.

2. The basic science of neural computation. These neuromorphic chips are based on our understanding of how neurons themselves do computation. That knowledge is woefully incomplete. Any new insight here could advance our design of these chips. Or even open up whole new ways of thinking about what a neuromorphic chip should be. So investing in supporting projects for the basic science of how real neurons compute would be an integral part of this blueprint Brain Project. Minimally, three separate attacks on this problem would be needed:

I/ Teams of neuroscientists working on the basic building blocks of neural computation, on the details of how single neurons are wired together and of how single neurons respond to their inputs. These would both provide strong constraints on how the neuron-like elements of a neuromorphic chip should behave.

II/ Teams of neuroscientists working on what neurons do in situ — how they react to sensory input and cause behaviour. Neurons do not work in isolation from each other or the world. They respond in large groups to the flashing movement of a cat chasing after a panicking bird (guess what my cat is doing right now); they fire in large groups to make the cat chase and the bird panic. Understanding neural computation requires that we understand how neurons interact with the world and each other.

III/ Teams of neuroscientists working on expanding our repertoire of what we consider to be “computation” by neurons. A key example here is neuromodulators — chemicals, such as dopamine, that are released across large volumes of the brain, briefly affecting hundreds or thousands of neurons at the same time. Such fluid, brief, wide-spread control has no equivalent in classical computation, and has the potential for new ways of thinking about computing.
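As a hedged sketch of why this is so alien to classical computing, imagine a toy population of rate-coded neurons whose gain is briefly scaled by a broadcast “dopamine pulse” (illustrative numbers only; nothing here models real dopamine kinetics):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# A toy population of 1000 neurons, each receiving its own input.
inputs = rng.normal(size=1000)
baseline_gain = 1.0

def population_response(inputs, gain):
    # A simple rectified-linear response; note the gain is applied to
    # every neuron at once, not routed to any one of them.
    return gain * np.maximum(inputs, 0.0)

# A neuromodulator pulse is not a wire to a single neuron but a brief,
# diffuse change in how the whole population responds:
before = population_response(inputs, baseline_gain)
during = population_response(inputs, baseline_gain * 1.5)  # modulator released
after  = population_response(inputs, baseline_gain)        # modulator cleared
```

There is no classical analogue of a signal that transiently rescales thousands of processing elements at once; how to build such broadcast control into a neuromorphic chip is exactly the kind of question this strand of research would explore.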

3. The theory of neural computation. Neuromorphic chips represent a radical departure from classical chip design. Ideally we need a complete theory of how neurons compute in order to optimise these chips. But if our data are woefully incomplete, our theories of *how* neurons compute are even more threadbare. Such a blueprint Brain Project would thus need teams of theorists, focusing on establishing a proper computational theory of how neurons compute.

Two big challenges stand out here. First, neurons fire once their input crosses a threshold. And this effective threshold depends on the history of when the neuron fired in the past. This means neurons themselves are inherently nonlinear devices, whose output is not a simple sum of their inputs. Such nonlinear devices do not tend to make for simple, elegant theories.

Second, and perhaps worse: there need not be a single theory of neural computation. Different circuits in the brain have evolved to solve different problems: such as the hippocampus to represent and recall the recent past; or the superior colliculus to detect sudden movement. The list is endless. Settling whether there are only specialised solutions, or whether a general theory of neural computation can be abstracted from them, would provide a fertile, if epic, research challenge.
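The first challenge can be made concrete with a minimal leaky integrate-and-fire neuron whose threshold rises after every spike — all parameters here are illustrative guesses, not fits to any real neuron:

```python
import numpy as np

def lif_with_adaptive_threshold(input_current, dt=0.001,
                                tau_m=0.02, v_thresh0=1.0,
                                thresh_jump=0.5, tau_thresh=0.1):
    """Leaky integrate-and-fire neuron whose threshold rises after each spike.

    Toy parameters for illustration; real models are fit to recorded neurons.
    """
    v, thresh = 0.0, v_thresh0
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane voltage leaks toward rest while integrating its input...
        v += dt * (-v / tau_m + i_in)
        # ...and the threshold decays back toward its baseline.
        thresh += dt * (v_thresh0 - thresh) / tau_thresh
        if v >= thresh:            # crossing the threshold: fire a spike,
            spikes.append(t)
            v = 0.0                # reset the voltage,
            thresh += thresh_jump  # and raise the threshold (history matters)
    return spikes

# The input-to-output mapping is nonlinear: the spike count is not a
# simple scaled sum of the input, because every spike reshapes the
# threshold that gates the next one.
weak = lif_with_adaptive_threshold(np.full(1000, 60.0))
strong = lif_with_adaptive_threshold(np.full(1000, 120.0))
print(len(weak), len(strong))
```

Even this toy already resists a tidy closed-form theory: the neuron’s next spike depends on its entire recent firing history, which is precisely why nonlinear devices like these make for hard theoretical work.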

4. Applying the neuromorphic chips. At the other extreme to making better neuromorphic chips is to work out how best to use them. Three immediate uses spring to mind, each capable of driving major research efforts:

I/ Run models of the brain. This is the full-circle option: as these chips are directly emulating neurons, so we can use them to simulate neural circuits, to further our understanding of the brain itself. Neuromorphic research teams have already been exploring this option.

II/ Robot control. With their neural-like operation, it would seem natural that neuromorphic chips would act as onboard processors for a wide range of robots faced with the real world. From knowing where it is, to seeing what’s out there, teams of robot designers could harness these chips to provide efficient yet robust capabilities for their robots.

III/ Artificial Intelligence. While the headline-grabbing exploits of modern AI — the arcade game-playing, the Go mastering, and the Tube mapping — are impressive, they are all based on extremely simple analogies to real neurons. And on huge computing power. One wonders what amazing feats could be achieved by turning the best minds in AI onto the potential of neuromorphic chips. For example, how much better at searching for the next move in Go would AlphaGo be if it harnessed the brain’s approach to searching?

How did we get here, with the Human Brain Project, and not to this alternative history? The two current Flagship projects (Graphene is the other one), each receiving the half-a-billion-euro prize, were chosen in a competitive bidding process. People familiar with competitive bidding processes — such as the UK train system — will already be shaking their heads at this point. Six potential projects were given some start-up money and a year’s worth of time to do something, then report back. The two with the best potential were then chosen. But this was not a level playing field. The Human Brain Project had a five-year head-start on the rest, as it was built around the existing Blue Brain Project. So it was already producing output, and many of its leaders already worked together. So nothing else stood a chance.

Tellingly, the EU have chosen their next Flagship project (Quantum Technology) without a bidding process. This has caused some outcry over such “top-down” imposing of their ideas on the scientific community. But after the debacle of the Human Brain Project, you can see why the EU decided that they would decide where their next half a billion euros would go. And why they thought it might be an idea to work through how all aspects of a half-a-billion euro project work together before starting it. Perhaps open competition on this scale cannot work in science.

If we ran the tape back to 2012, and the EU had taken a top-down approach the first time, then maybe we would have a Neuromorphic Brain Project. But the Human Brain Project still has until 2023 to run. If the EU were game, we could still rebuild it for them, wholesale.

Want more? Follow us at The Spike

Twitter: @markdhumphries

Want to know more? Try these:
For more on the Human Brain Project’s issues, read Leonid Schneider’s investigations on the sorry mess it got into before its restructuring, and on the weird FET Flagship review report.

For more on neuromorphic chips, read Kelly Clancy’s wonderful New Yorker piece.



Theorist & neuroscientist. Writing at the intersection of neurons, data science, and AI. Author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”