Neuromorphic computing: The tech behind the hype

Daniel Justus
Digital Catapult

--

Neuromorphic computing is a buzzword that is currently all over the place in artificial intelligence and machine learning [1,2]. To explore what is behind this hype, Digital Catapult invited members of the SpiNNaker (Spiking Neural Network Architecture) project [3] from the University of Manchester to give a workshop as part of its Machine Intelligence Garage programme. The workshop brought together 11 startups in a closed, intensive, hands-on course to learn about the technology, its applications, prospects and challenges. This blog post summarises the insights from the workshop and provides an introduction to neuromorphic computing and its applications.

What is neuromorphic computing?

Neuromorphic computing is inspired by the function of the human brain, which currently still outperforms machines at all tasks requiring creativity or the transfer of knowledge to new problems. Moreover, the human brain is incredibly energy efficient, consuming only around 20 watts [4]. The main computational unit in the brain is the neuron, with approximately 100bn neurons in an adult human brain [5]. These neurons are connected by over 150,000km of nerve fibres and 150tn synapses [6], making the brain architecture massively parallel. For comparison: as of November 2017, the world’s fastest supercomputer had 10,649,600 processor cores and drew 15,371kW of power [7]. The computations performed by a single CPU core cannot be directly compared to those performed by a single neuron. However, especially for applications in AI, the human brain can serve as a blueprint for an extremely capable and efficient computing system.

Figure 1: The architecture of a SpiNNaker chip (a), a single chip on a 48 node board (b), and a scheme of SpiNNaker’s connectivity (c).

To mimic the function of the human brain, neuromorphic computing uses architectures that are fundamentally different from conventional computer hardware. Massive parallelism of low-powered chips and novel ways to engineer their communication are the central elements of all neuromorphic computers. SpiNNaker is a long-term project of researchers at the University of Manchester, aiming to build a neuromorphic computer to simulate brain function. The heart of the SpiNNaker system is a chip with 18 low-powered ARM processor cores, local memory and the SpiNNaker router that handles the communication with other chips (figure 1). This communication happens asynchronously (a chip sends a signal to another chip, but does not require an answer) and in parallel (all chips communicate with each other simultaneously). These characteristics are inspired by the similar communication of neurons in the brain [8]. They allow a degree of parallelism that cannot be achieved by conventional hardware architectures. The SpiNNaker machine that is currently in use at the University of Manchester consists of 600 SpiNNaker boards with 48 chips or 864 processor cores each, totalling 518,400 processors (figure 2).
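The asynchronous, fire-and-forget style of communication described above can be sketched in a few lines of plain Python. This is only an illustration of the principle, not the SpiNNaker protocol: the chip names and routing table below are invented, and real spike packets are multicast in hardware by the SpiNNaker router.

```python
import collections

# Minimal sketch of asynchronous, fire-and-forget spike routing: each
# "chip" posts spike packets to a shared queue and never waits for a
# reply. Chip names and the routing table are illustrative.
class Chip:
    def __init__(self, name):
        self.name = name
        self.received = []   # spikes that arrived at this chip

    def deliver(self, source):
        self.received.append(source)

def route_spikes(events, routing_table, chips):
    """Deliver each spike event to all subscribed target chips."""
    queue = collections.deque(events)          # pending packets
    while queue:
        source = queue.popleft()
        for target in routing_table.get(source, []):
            chips[target].deliver(source)      # one-way multicast, no reply

chips = {n: Chip(n) for n in ("a", "b", "c")}
routing_table = {"a": ["b", "c"], "b": ["c"]}  # one spike, many targets
route_spikes(["a", "b"], routing_table, chips)
print(chips["c"].received)  # spikes from both "a" and "b" arrive at "c"
```

The key property mirrored here is that a sender never blocks on an acknowledgement, which is what lets all chips communicate simultaneously.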

Figure 2: The 518,400 core SpiNNaker machine at Manchester.

How to use SpiNNaker?

During the workshop, the attendees had the chance to learn hands-on how to use the neuromorphic hardware. Using the Python programming language, the PyNN package [9] for modelling neural networks, and the sPyNNaker interface to the SpiNNaker boards, it is easy to design networks of spiking neurons and deploy them for simulation on SpiNNaker hardware. This can be done either using local SpiNNaker boards, or by remotely accessing the SpiNNaker machine in Manchester through the Human Brain Project [10].
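The neuron models that PyNN lets you compose into networks, such as the leaky integrate-and-fire cell, are simple enough to sketch in plain Python. The sketch below simulates a single such neuron; all parameter values are illustrative choices, not PyNN defaults, and real PyNN code would instead declare populations and projections and hand the simulation to a backend such as sPyNNaker.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the kind of model PyNN
# exposes. All parameter values here are illustrative (units: mV, ms).
def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau_m=20.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks towards rest and integrates input.
        v += dt * ((v_rest - v) / tau_m + i_in)
        if v >= v_thresh:      # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_reset        # ...and reset the membrane potential
    return spikes

print(simulate_lif([2.0] * 30))  # constant drive -> regular spiking: [9, 19, 29]
```

With constant input the neuron charges up, fires, resets and repeats, producing the regular spike train that network-level PyNN scripts then wire together with excitatory and inhibitory connections.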

Where can neuromorphic computing be applied?

The main purpose of the SpiNNaker machine lies in neuroscientific research. It is currently capable of simulating networks of up to one billion neurons (approximately 1% of the human brain) in real time. However, other applications are being explored as well. Constraint satisfaction problems (CSPs) are a class of computational problems that can be solved efficiently on the SpiNNaker architecture, with an approach methodologically similar to a human’s [11]. CSPs are defined by a set of variables that need to satisfy certain constraints. Simple examples are finding the solution to a sudoku (figure 3a) or colouring a map with four colours without adjacent countries having the same colour (figure 3b, see the four colour theorem [12]). These problems are subject to intense research in the AI field since they are at the core of many different applications. They can be solved on the SpiNNaker architecture by simulating spiking neural networks, where the variable states are represented by the activity of sets of neurons and the constraints are enforced by inhibitory connections that suppress the activity of their target neurons. Additionally, use cases in reinforcement learning, visual and auditory perception, robotics (figure 3c) and even non-neural simulations like Markov chain Monte Carlo methods are subject to research by the SpiNNaker team.
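The encoding described above — one group of neurons per variable, with inhibition suppressing conflicting assignments — can be caricatured in plain Python for a tiny graph-colouring CSP. Note the real SpiNNaker solver [11] uses stochastic spiking dynamics; the deterministic sketch below replaces them with greedy least-inhibited updates, and the example graph is invented.

```python
# Toy graph-colouring CSP in the spirit of [11]: each variable owns a
# winner-take-all group of "colour neurons", and a neighbour currently
# using a colour sends inhibition to that colour's neuron. The real
# SpiNNaker solver uses stochastic spiking networks; this deterministic
# sketch replaces them with greedy least-inhibited updates.
def colour_graph(edges, n_vars, n_colours, sweeps=10):
    neighbours = {v: [] for v in range(n_vars)}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    colours = [0] * n_vars                    # initial (conflicting) guess
    for _ in range(sweeps):
        for v in range(n_vars):
            # Inhibition on colour c = number of neighbours using c.
            inhibition = [sum(colours[u] == c for u in neighbours[v])
                          for c in range(n_colours)]
            colours[v] = inhibition.index(min(inhibition))  # least inhibited wins
    return colours

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # a triangle plus one pendant node
print(colour_graph(edges, n_vars=4, n_colours=3))  # -> [1, 2, 0, 1], a valid colouring
```

The inhibitory connections play exactly the role described in the text: an active "colour c" neuron at one variable suppresses the "colour c" neurons of its neighbours, so the network settles into a conflict-free assignment.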

Figure 3: Applications of SpiNNaker. (a) The architecture used to solve sudokus with spiking neural networks. Figure adapted from [11]. (b) A four-colouring of the world map. (c) A robot controlled by a spiking neural network.

Despite these achievements, neuromorphic computing is still in its infancy. The brain, and the alterations it undergoes during learning, are not yet sufficiently well understood to build a computer chip that can fully emulate its function. However, ongoing research in this field has the potential to change the future of computing. Machine Intelligence Garage is closely observing the latest developments and will keep you updated on any exciting news from the field of neuromorphic computing, including further hands-on opportunities.

Machine Intelligence Garage [13] is a programme delivered by Digital Catapult to help UK based startups in the machine learning field overcome one of the largest barriers they face: access to computing power. The programme also provides expertise on a wide range of hardware resources and well-founded support in choosing the most suitable hardware and using it effectively.

References

[1] http://www.wired.co.uk/article/ai-neuromorphic-chips-brains

[2] https://www.nature.com/articles/d41586-018-01290-0

[3] http://apt.cs.manchester.ac.uk/projects/SpiNNaker/

[4] https://hypertextbook.com/facts/2001/JacquelineLing.shtml

[5] Suzana Herculano-Houzel, The Human Brain in Numbers: A Linearly Scaled-up Primate Brain, Frontiers in Human Neuroscience, 2009

[6] David A. Drachman, Do we have brain to spare?, Neurology, 2005

[7] https://www.top500.org/lists/2017/11/

[8] http://www.scholarpedia.org/article/Nervous_system#Function

[9] http://neuralensemble.org/PyNN/

[10] https://www.humanbrainproject.eu/en/

[11] Gabriel A. Fonseca Guerra, Steve B. Furber, Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems, Frontiers in Neuroscience, 2017

[12] https://en.wikipedia.org/wiki/Four_color_theorem

[13] https://www.migarage.ai/

Originally published at www.migarage.ai on July 9, 2018.

