Buzzword Bingo: Neuromorphic Computing

Bradley Ramsey
Supplyframe
Feb 19, 2020

Could the replacement for Moore’s Law come from chips inspired by the human brain?

Image: YOSHIKAZU TSUNO / Staff / Getty Images

Moore's Law is dying! Moore's Law is dead! If quantum computing has anything to say in the matter, it's both! While it's true that we're nearing a plateau in the number of transistors we can fit on a chip, it's going to take a radical change to replace the concepts put forth by Intel's co-founder, Gordon Moore.

The options are slim, quantum computing being one of them, but now there's a new contender: neuromorphic computing. Chips modeled after the human brain. It may sound like science fiction, but it's much closer to science fact than you may realize.

The Elevator Pitch

When I first heard about neuromorphic computing, I imagined a scene from a science fiction movie where a group of scientists sit around a table, clutching their unkempt hair as smoke fills the room from their lit cigarettes.

They’ve been banging their heads against the wall, trying to come up with a solution to the problem at hand. In a lot of ways, it’s a metaphor for the industry in a few years. After all, Moore’s law has an expiration date.

That’s when one of them has that light bulb moment. They leap up from their chair and race to the whiteboard. They write three words: the human brain. Countless neurons, firing in parallel, interpreting information and the environment around us faster than any modern computer.

What if we could mimic that behavior in our hardware? It’s an idea that’s been around since the 1980s, but it’s regaining steam in a world where we need our hardware to think like we do.

Von Neumann Architecture is so Last Decade

The traditional Von Neumann Architecture. Image: Kapooht [CC BY-SA (https://creativecommons.org/licenses/by-sa/3.0)]

Typical chips that we use today follow the von Neumann architecture, which keeps the CPU and memory separate. Data flows between the two as the processor fetches instructions and data from memory to execute its tasks.
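To make that bottleneck concrete, here's a rough Python sketch of the pattern: one processor, one separate pool of memory, and every operand shuttled across the gap one fetch at a time. The function and variable names are mine, invented purely for illustration, not taken from any real simulator.

```python
# Toy von Neumann loop: one CPU, one separate memory, all traffic between them.
memory = {"a": 3, "b": 4, "result": None}

def fetch(address):
    """Every operand crosses the CPU <-> memory boundary one at a time."""
    return memory[address]

def store(address, value):
    memory[address] = value

def cpu_step():
    # Fetch operands from memory, compute, write the result back.
    a = fetch("a")
    b = fetch("b")
    store("result", a + b)

cpu_step()
print(memory["result"])  # 7 -- three memory accesses for a single addition
```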

Now, turning to the human brain, we know that it functions by forming synapses between billions of neurons that then transfer charged atoms via ion channels to help us think and process the world around us.

Neuromorphic chips take a similar approach: each chip contains a network of cores that integrate both memory and processing into digital neurons, and those neurons can communicate with neurons on other chips in a structure similar to our brains.

Neuromorphic computing also offers a more compelling way to process data, one that goes beyond 1s and 0s. Taking inspiration once again from nature, neuromorphic chips can transmit precisely graded signals across a digital synapse (much as ions pass from a neuron's axon to a receiving dendrite), leaving that connection adjusted, or weighted, in response to its inputs. Ultimately, this offers far more flexibility than a binary input, and it requires less power to execute.
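To give a feel for what "beyond 1s and 0s" means in practice, here's a minimal sketch of a leaky integrate-and-fire neuron in Python: incoming spikes are scaled by per-synapse weights, the membrane potential accumulates and leaks over time, and the neuron only fires when a threshold is crossed. This is a generic textbook model with made-up parameters, not the circuit inside any particular neuromorphic chip.

```python
# Minimal leaky integrate-and-fire neuron with weighted synapses.
# Illustrative only; parameter values are arbitrary.

def simulate(input_spikes, weights, leak=0.9, threshold=1.0):
    """input_spikes: list of timesteps, each a list of 0/1 per input synapse."""
    potential = 0.0
    output = []
    for spikes in input_spikes:
        # Each incoming spike is weighted by its synapse, then integrated.
        potential = leak * potential + sum(w * s for w, s in zip(weights, spikes))
        if potential >= threshold:
            output.append(1)      # fire
            potential = 0.0       # reset after spiking
        else:
            output.append(0)
    return output

weights = [0.4, 0.3, 0.6]
spike_train = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]]
print(simulate(spike_train, weights))  # [0, 1, 0, 1]
```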

You can think of this as the potential hardware side of a neural network. The software is designed to “think” like the human brain, and these chips would give it the structure to match, along with the ability for the hardware to learn and adapt in the same way neural networks have been doing for some time.

These concepts are indeed exciting, but as you can probably imagine, we’re going to need new types of materials to facilitate this kind of precise transfer of electricity between synthetic neurons.

The State of Neuromorphic Chips in 2020: We’re Getting There

A panorama of the SpiNNaker 1 million core machine (detailed below). Image: Pabogdan [CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)]

This kind of radical shift requires more than just technological breakthroughs. We’re going to need to address key obstacles in our manufacturing processes, materials, and fundamental understanding of traditional computing before we can fully embrace a change like this.

Thankfully, there are already some promising projects in motion:

  • In 2018, a team at MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories published a design for an artificial synapse that allowed them to precisely control the flow of electricity across it (the way our brains transmit ions). The team used silicon germanium to solve the materials problem, and simulations showed that the chip could recognize samples of handwriting with 95 percent accuracy.
  • SpiNNaker is another project that made waves in mid-2018 when it conducted the largest neural network simulation to date using 80,000 neurons and 300 million synapses. The goal of this project is to make strides forward in neuroscience, robotics, and computer science.
  • Intel announced a new configuration for its neuromorphic research processor, codenamed Loihi, in mid-2019. This version allows for more than 8 million neurons and delivers 1,000x the performance of conventional CPUs in applications like sparse coding, graph search, and constraint-satisfaction problems. It also offers a 10,000x energy efficiency increase over traditional CPU architectures in these types of tests.
  • Most recently, engineers at Purdue University and Georgia Tech used a newly discovered ferroelectric semiconductor, alpha indium selenide, in two separate applications, one of which is applicable to future neuromorphic chips.
  • Many current designs use memristors, which store data as electrical resistance, as the basis for their digital synapses (see the sketch after this list). Ferroelectric tunnel junctions would be a better fit because they can be packed into dense memory configurations, but they act as insulators and carry too little current when scaled down. The newly discovered ferroelectric semiconductor could solve that problem.
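The memristor idea is easiest to picture as a crossbar: each device's conductance stores a synaptic weight, voltages are applied along the rows, and by Ohm's and Kirchhoff's laws the current collected on each column is already the weighted sum. In other words, the memory itself performs the matrix-vector multiply. Below is a rough NumPy sketch of that arithmetic with invented numbers; it ignores the non-idealities of real devices.

```python
import numpy as np

# Conductance matrix G (siemens): each entry is one memristor's stored "weight".
# Values are invented for illustration.
G = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.4],
              [0.5, 0.3, 0.7]])

# Input voltages applied to the rows (volts).
v = np.array([0.1, 0.0, 0.2])

# Ohm's law per device and Kirchhoff's current law per column:
# the column currents ARE the matrix-vector product, computed in memory.
i_columns = v @ G
print(i_columns)  # weighted sums, with no separate fetch/compute/store cycle
```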

Predict The Future? I'm Sorry Dave, I'm Afraid I Can't Do That

Now we reach the part where I tell you whether this buzzword is worth more than a passing glance. If we're talking about sheer market value, forecasts suggest the market could reach $8.9 billion by 2025.

This prediction is fueled by the rise of artificial intelligence, machine learning, neural networks, predictive analytics, and of course, the end of Moore’s Law demanding a new type of computing.

If we turn things back to an engineering perspective, it's clear that we need better hardware for our neural networks. Right now, we're running them on Graphics Processing Units (GPUs), which are typically used for games and 3D software but also handle parallel processing and matrix multiplication better than traditional CPUs.

The problem is that these GPUs don’t have neurons in their hardware structure, so they’re mimicking all of this on a software level. This causes issues, not the least of which is massive power consumption.
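One way to see the power argument is to compare operation counts. A GPU-style dense layer multiplies every input by every weight on every step, even when most inputs are silent, while an event-driven design only touches the synapses of neurons that actually fired. The Python sketch below is a back-of-the-envelope comparison under that assumption, not a power model of any real chip.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1000, 1000
weights = rng.standard_normal((n_in, n_out))

# Sparse spiking input: only ~2% of input neurons fire on this timestep.
spikes = rng.random(n_in) < 0.02

# Dense, GPU-style pass: every weight participates regardless of activity.
dense_out = spikes.astype(float) @ weights
dense_ops = n_in * n_out

# Event-driven pass: only rows belonging to neurons that spiked are touched.
active = np.flatnonzero(spikes)
event_out = weights[active].sum(axis=0)
event_ops = len(active) * n_out

print(np.allclose(dense_out, event_out))   # same result either way
print(dense_ops, event_ops)                # ~1,000,000 vs ~20,000 multiply-adds
```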

Let’s assume that neuromorphic hardware is the solution. To fully embrace it, we would need to alter the fundamental ways we think about computing, a topic covered in this paper by authors Jack D. Kendall and Suhas Kumar.

An analysis of the paper reveals ten different concepts we would need to address. The bulk of the takeaway is that neuromorphic computers would need to embrace analog rather than digital operation. Furthermore, they would need to work in a non-linear fashion and store memory in the same neurons that perform the computation (something the aforementioned memristors and ferroelectric tunnel junctions could accomplish).
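As a toy illustration of "memory stored in the same neurons that perform the action", here's a sketch of a Hebbian-style local update: the weight lives with the synapse and is adjusted using only the activity of the two neurons it connects, with nothing fetched from or written back to a separate memory. This is a generic textbook rule, not the specific scheme proposed in the Kendall and Kumar paper.

```python
# Toy Hebbian synapse: the weight is the memory, and it is updated in place
# using only locally available activity (pre- and post-synaptic).
class Synapse:
    def __init__(self, weight=0.1, rate=0.05):
        self.weight = weight      # stored right where it is used
        self.rate = rate

    def forward(self, pre_activity):
        return self.weight * pre_activity

    def learn(self, pre_activity, post_activity):
        # "Neurons that fire together wire together": a purely local update.
        self.weight += self.rate * pre_activity * post_activity

syn = Synapse()
for _ in range(5):
    post = syn.forward(pre_activity=1.0)
    syn.learn(pre_activity=1.0, post_activity=post)
print(round(syn.weight, 4))  # weight has grown in place, no external memory
```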

Ultimately, the paper concludes that the future of computing lies within rethinking our processor architecture instead of trying to fit more components on a chip. On that we can certainly agree.

Bradley Ramsey
Technical Writer at Supplyframe. Lover of dogs and all things electronic.