Neuromorphic chips: a major leap in AI

Christine
Image from Wikimedia Commons

How neuromorphic chips will make AI truly intelligent

AI has taken a giant step forward with the invention of “deep learning” in neural networks. AI stands for “Artificial Intelligence”, as opposed to “Human Intelligence”, suggesting AI is as intelligent as humans. Some examples indeed seem to suggest it is. AlphaGo defeated Lee Sedol. Deep Blue beat Garry Kasparov. But does beating a human at a very narrowly defined task mean a computer is smarter than a human, or even that it is smart at all? Let’s have a closer look.

Neural networks use a way of computing that is modeled on the human brain. You put a million artificial neurons in your model, connect them with one another, provide each neuron with an algorithm, and then train the neural network with a ton of examples. After that, the neural network not only knows the examples, it has also derived how to handle all other cases similar to the examples. It “knows” things that you haven’t actually told it. That is an important aspect of intelligence: deriving knowledge from a limited set of examples.
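
To make that concrete, here is a minimal sketch, not production code, of a tiny neural network learning from a handful of examples and then handling a case it was never shown. The task (classifying points above or below a line) and all numbers are made up for illustration.

```python
# A minimal sketch: a tiny neural network that learns from examples
# and then "knows" a case it was never shown.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: 2-D points labeled 1 if above the line y = x, else 0.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float).reshape(-1, 1)

# One hidden layer of 8 artificial "neurons": each is a weighted sum plus a nonlinearity.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):                      # train on the examples
    h = np.tanh(X @ W1 + b1)               # hidden layer
    p = sigmoid(h @ W2 + b2)               # predicted probability
    dz2 = (p - y) / len(X)                 # backpropagate the cross-entropy error
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# A point the network has never seen; it is clearly above y = x,
# so the prediction should come out close to 1.
unseen = np.array([[0.123, 0.9]])
print(sigmoid(np.tanh(unseen @ W1 + b1) @ W2 + b2))
```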

How it works

Now, let’s look at how a computer-based neural network does that. A programmer writes software that contains a model of the neural network. The model has an algorithm, i.e. an equation, that defines the behavior of a neuron, a “brain cell”. These neurons need to run on a normal computer. For that, the equations of the neurons and their connections are rewritten into one large tensor equation (a sketch of this follows the list below). For every training example, the equation is solved, which takes a lot of processing power. A neural network can have millions of independent neurons, and all processing needs to go through one CPU, or at least through a limited number of CPUs. You can’t do this training on your laptop, let alone on your phone: it takes dozens or more computers to train the network. Then when you load it on your phone, it knows what it knows, but it doesn’t learn. That’s where the “neural” metaphor stops. Your brain has 100 billion neurons that do processing and data storage at the same time. A CPU needs to push gigabytes of data through one tiny funnel, the Von Neumann bottleneck, which takes energy, generates heat, and is slow. A CPU heats up to 70 degrees Celsius, if properly cooled. Your brain doesn’t heat up significantly. A CPU takes 100 watts of energy, while your brain only needs 12 watts. Still, your brain is faster. This is because

  1. a brain has 100 billion “cores” working independently, while a CPU has only one core, or at most a handful, through which all data needs to go
  2. every node in a brain does both processing and data storage, limiting the need for data transport, whereas a computer completely separates data storage from computing
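
Here is a minimal sketch of the tensor equation mentioned above, with made-up sizes: each neuron’s equation can be computed one by one, but on a conventional computer they are all folded into a single tensor equation, and every weight still has to stream through the same CPU.

```python
# A minimal sketch (hypothetical sizes): the same layer of neurons computed two ways.
import numpy as np

n_inputs, n_neurons = 1_000, 1_000
rng = np.random.default_rng(1)
x = rng.normal(size=n_inputs)               # input signal
W = rng.normal(size=(n_neurons, n_inputs))  # one row of weights per neuron

# Conceptually: every neuron i independently computes tanh(sum_j W[i, j] * x[j]).
per_neuron = np.array([np.tanh(W[i] @ x) for i in range(n_neurons)])

# In practice the whole layer is one tensor equation, y = tanh(W x),
# evaluated by pushing a million weights through the CPU.
as_tensor = np.tanh(W @ x)

print(np.allclose(per_neuron, as_tensor))   # True: same result, same bottleneck
```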

Neural networks aren’t exactly new. They were successfully used in commercial applications 25 years ago. They were not as good as today’s AI, because we didn’t have today’s fast computers and we didn’t have the more advanced models of “deep learning”. Because our software simulations weren’t that powerful, the idea arose to implement the neural network model directly on a chip: build 100 billion tiny neurons on one chip, provide them with a simple algorithm, connect them, and voilà! You’d have a computer as smart as a human. The silicon neurons wouldn’t have to be 64-bit processors; 4 bits is probably enough. But we didn’t build that chip, back then.
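
As an aside on the “4 bits is probably enough” remark: the sketch below (made-up values, not any particular chip’s scheme) shows how a neuron’s weights could be stored as 4-bit codes plus a shared scale instead of 64-bit floats.

```python
# A rough sketch of 4-bit weight storage: each weight becomes a code from 0..15
# plus a shared range, instead of a 64-bit float. Values are invented.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=8)                                 # full-precision weights

lo, hi = w.min(), w.max()
codes = np.round((w - lo) / (hi - lo) * 15).astype(np.uint8)  # 4-bit codes 0..15
w_4bit = lo + codes / 15 * (hi - lo)                          # decoded back

print(codes)            # what a chip would store: 4 bits per weight
print(w.round(2))       # the original weights
print(w_4bit.round(2))  # a coarse but close approximation
```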

Loihi chip

The new kid on the block

Enter Intel’s Loihi project. Or IBM’s TrueNorth, which is similar. Loihi is a chip that has 128 cores, each containing one thousand neurons. The chip uses 16-bit registers and a limited set of 32-bit instructions. Cores are interconnected, and they work more or less independently. This means the chip is fast, using 128 cores simultaneously, and does not use a lot of energy or generate much heat, because data isn’t being pushed back and forth: every core does both processing and data storage. Results are very promising so far. But there’s more. The chips can be linked via a dedicated fast network, hooking up 16,000 of them. One chip has 128,000 neurons, which is a lot already; the whole network has 2 billion, about one fiftieth of a human brain. These chips are experimental. We can expect them to become commercially available in the next few years, probably in a form with close to a billion neurons on one chip. Give it another one or two years and it’ll be small enough to fit in your smartphone, which will then actually be a smart phone. Maybe even smarter than you and me.
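
Loihi and TrueNorth are spiking designs: a neuron integrates incoming signals locally and only sends a message when it fires. Here is a generic leaky integrate-and-fire sketch of that idea; it is not Intel’s actual neuron model, and all constants are invented.

```python
# A generic leaky integrate-and-fire neuron: it keeps its own state,
# leaks, integrates incoming spikes, and only emits traffic when it fires.
import numpy as np

rng = np.random.default_rng(3)
steps = 50
incoming = rng.random(steps) < 0.3   # time steps on which an input spike arrives
weight = 0.6                         # synaptic strength of that input (made up)
decay, threshold = 0.9, 1.0

v = 0.0                              # membrane potential, stored at the neuron itself
for t in range(steps):
    v = decay * v + weight * incoming[t]   # leak a little, then integrate the input
    if v >= threshold:
        print(f"t={t}: spike")             # the only traffic this neuron ever emits
        v = 0.0                            # reset after firing
```

Because nothing moves unless a neuron actually fires, and each neuron keeps its own state, there is no central funnel to push gigabytes of data through.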

Where is this going? Loihi and TrueNorth are just examples. I would say that if you stick with tiny 4-bit cores, you can cram a lot of neurons on one chip. You connect them directly with their neighbors, and you put matrices of connecting “wires” on top, through which neurons can make and break connections at will, like the brain. Add artificial synapses, like the ones Jeehwan Kim et al. of MIT have created in their lab. These synapses don’t behave like ordinary wires on a chip; they behave more like synapses in a brain. They provide the flexibility and independence that set neurons in a brain apart from ordinary silicon computing.
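
Purely as a toy illustration of the “make and break connections” idea (not how Loihi or the MIT synapses actually work, and with invented numbers): a connection matrix in which frequently used links strengthen, unused links fade and are eventually broken, and new links occasionally appear.

```python
# A toy rewiring sketch over a small grid of neurons.
import numpy as np

rng = np.random.default_rng(4)
n = 16
conn = rng.random((n, n)) < 0.2                  # which neuron pairs are wired up
strength = np.where(conn, rng.random((n, n)), 0.0)

for step in range(100):
    used = rng.random((n, n)) < 0.1              # pretend some connections carry a spike
    strength[conn & used] += 0.05                # used connections get stronger
    strength[conn & ~used] *= 0.98               # unused ones fade
    dead = conn & (strength < 0.01)              # faded connections are broken...
    conn[dead] = False
    new = ~conn & (rng.random((n, n)) < 0.001)   # ...and new ones occasionally appear
    conn[new] = True
    strength[new] = 0.05

print("connections after rewiring:", int(conn.sum()))
```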

My prediction for the near future

Today, AI can do amazing things, amazingly smart things, in a narrow area of expertise. AI can learn, if you give it a warehouse full of CPUs or GPUs and a sufficient amount of time. Once you are using it, it stops learning. The next generation of AI, however, using neuromorphic chips, will learn general knowledge, i.e. knowledge in all areas of science, art, technology, and everything else. It will not need a cloud full of super-fast GPUs; it will just sit on your phone or in your car. It will have a conversation with you, it will see the world around it, it will listen, and it will learn in the process. It will guard you, it will protect you, it will help you, it will amuse you. One day, your car will say to you, “you stay home, I better do this alone”. Like the superintelligent spaceship in one of Iain Banks’s Culture novels that tells its human crew, “please get off the ship, this mission is too critical to have humans on board”.

References

Intel Newsroom announcement on Loihi

Loihi: A Neuromorphic Manycore Processor with On-Chip Learning, IEEE Micro, Jan/Feb 2018 (paywalled)

Engineers design artificial synapse for “brain-on-a-chip” hardware, MIT News, January 22, 2018

Intel Unveils Prototype Neuromorphic Chip for AI on the Edge, Design News

Introducing a Brain-inspired Computer, IBM Research

Loihi images courtesy of Intel

Loihi test board

Originally published at https://www.linkedin.com on April 19, 2018.

Written by Christine

Software engineer, AI engineer, entrepreneur, writer
