Brain-inspired machines

Tobias de Taillez
Published in elbstack · Jan 4, 2019 · 5 min read

Or where Neural Networks came from

Let us talk about the elephant in the room: Neural networks influence an increasing share of our everyday life, and they can be scary as hell. Facebook suggests new friends you actually know, because they uploaded some pictures with you and the face-tagging algorithm recognized you? Magic.

Did you see the Star Wars trailer for the spin-off SOLO? Deep neural networks replaced the face of Alden Ehrenreich with that of the young Harrison Ford, and it looks freaking awesome (no hard feelings, Alden, the Force is still with you).

But how did we end up here? Let us look at the history of neural networks, which by now spans more than 75 years.

It all started in 1943 with a paper by Warren McCulloch and his colleague Walter Pitts. They investigated how the brain works and designed the first artificial neural network inspired by it. It was pretty simple, as you can imagine: just a sum-and-threshold operator.

It took all of the available inputs (the incoming nerve activations), summed them up, and compared the sum to a predefined value, the threshold. If the sum was bigger, the neuron "fired", in the simplest case passing a single 1 to all subsequent neurons; otherwise it stayed at 0. With such small operators you can indeed build logical operators like AND, OR and NOT, and by combining several of them even XOR.
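To make this concrete, here is a minimal Python sketch (my own illustration, not code from the original paper) of such a sum-and-threshold neuron and how a few of them combine into AND, OR, NOT and even XOR:

```python
# A McCulloch-Pitts style neuron: sum the inputs, compare against a threshold,
# output 1 ("fire") or 0. The thresholds below are chosen to produce the logic gates.
def neuron(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

AND = lambda a, b: neuron([a, b], threshold=2)  # fires only if both inputs are 1
OR  = lambda a, b: neuron([a, b], threshold=1)  # fires if at least one input is 1
NOT = lambda a: 1 - neuron([a], threshold=1)    # simplified inversion of a single input

# A single threshold neuron cannot do XOR, but a small combination of them can:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} -> AND={AND(a, b)} OR={OR(a, b)} XOR={XOR(a, b)}")
```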

In contrast to their physiological counterpart, artificial neural networks are structured hierarchically with a "bottom" and a "top" layer. The bottom layer is fed with the input data (like the pixels of a picture) and passes its calculations on to the next layer, and so forth, up to the last layer, called the "output layer." Here we can skim the hopefully useful result of the network's calculation.

Source: Tobias de Taillez
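As a tiny sketch of that bottom-to-top flow (the layer sizes, random weights and threshold activation below are made up purely for illustration), a forward pass in NumPy looks roughly like this:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(784)                  # "bottom" layer: e.g. a flattened 28x28 pixel image
W1 = rng.standard_normal((784, 64))  # connections from the input layer to a hidden layer
W2 = rng.standard_normal((64, 10))   # connections from the hidden layer to the output layer

fire = lambda z: (z > 0).astype(float)  # simple threshold activation, as in the early neurons

hidden = fire(x @ W1)       # every hidden neuron sums its inputs and fires or not
output = fire(hidden @ W2)  # the "output layer" we skim for the (hopefully useful) result
print(output)
```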

But not so fast! I mean that literally.

Calculations like this cost processing power. While it looks simple at first, the number of calculations explodes drastically with a growing number of neurons, since every neuron of one layer is connected to every neuron of the subsequent layer. The number of calculations per layer-to-layer step is therefore on the order of n times m. Today's networks are up to 50 layers "deep" and can be 4096 neurons "wide" per layer.
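A quick back-of-the-envelope calculation shows how fast this grows. Assuming, purely hypothetically, a network that is fully connected throughout, 50 layers deep and 4096 neurons wide:

```python
# Number of connections (weights) between two fully connected layers is n * m.
depth, width = 50, 4096
per_layer_pair = width * width        # every neuron talks to every neuron of the next layer
total = per_layer_pair * (depth - 1)  # 49 layer-to-layer steps
print(f"{per_layer_pair:,} weights per layer pair")  # 16,777,216
print(f"{total:,} weights in total")                 # 822,083,584
```

Every one of those weights means at least one multiplication and one addition on every single pass through the network.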

The famous AlexNet from 2012 consisted of 8 layers with roughly 650,000 neurons in total and 60 million parameters that had to be tuned.

This was entirely out of the scope of computing in the last century. Still, there was quite a hype train around neural networks in the early 90s (did hype trains already exist back then?). The research community flourished and developed nearly all of the important network architectures: neurons that feed their activation back to themselves in the next iteration, introducing a kind of memory to the network (Fukushima 1980 and 1983), or networks that learn to interact with some black-box system (Watkins 1989).

However, they just did not have enough computing power at hand to turn neural networks into anything useful in those days. And you know what? We helped them out of their misery.

I mean you and me. At least if you also hid in your room, smashed buttons all day and yelled not-so-nice things at your fellow nerd-kid neighbor while smash-punching him in his Pikachu face on Kirby's Dream Land with Mario (I am looking at you, Super Smash Bros. on the N64!).

Because we wanted graphics. Photorealistic graphics.


We paid big time for the newest 3dfx Voodoo graphics card (16 MByte of RAM and 125 MHz clock speed, what more could you want?). It was the time when no graphics card you could afford satisfied your gaming needs, because developers advanced the tech at the same pace. Today, honestly, it does not matter that much anymore.

Some time shortly after the millennium, some people at NVIDIA realized something essential: the calculations for one pixel in your graphics output are mostly independent of the calculations for its neighbor pixel. Just like in neural networks. Moreover, graphics cards with their thousands of small parallel processors are great at calculating pixels. So they threw some not-so-well-paid Ph.D. students, a few PCs with GPUs and a bottle of gin into a room (source: my imagination), and these folks emerged with a software toolkit that made general-purpose calculations, including neural networks, possible on graphics cards! It was called Compute Unified Device Architecture, CUDA for short, and was released in 2007.

Since that day NVIDIA has been a trailblazer in the field of artificial intelligence, and every new graphics card they release slingshots us, as humanity, one step forward. The AlexNet we talked about achieved a top-5 error rate of 15.3% in the 2012 ImageNet Large Scale Visual Recognition Challenge. That is more than 10 percentage points better than the runner-up!

Talking about performance: AlexNet trained simultaneously on two GPUs for five to six days straight (3 GByte of RAM per GPU). Today, with a not-so-modern-anymore NVIDIA Titan Xp with 12 GByte of RAM, you could do it in an hour, tops. This performance increase is the result of just 4-5 years.

What can a neural network do for you?

AlexNet had one particular purpose: image classification. However, there is more to neural networks than classification tasks. Given enough labeled data, you can basically train them to approximate any transfer function you could imagine. Or cannot imagine, and that is the part I personally find highly interesting.

We typically recognize a connection between data sets if they (mathematically) correlate. Better weather? → More beer sold in beer gardens. But what if the link is not so obvious? Here neural networks can help us big time by learning the transfer function. Later we can also analyze the network itself to see how and why it learned the transfer function in precisely this way, and thereby learn something about the underlying real-world black box we were interested in in the first place. We will come back to this in a future part of this blog-post mini-series.
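As a small sketch of what "learning the transfer function" looks like in practice (the data, the hidden temperature-to-beer relationship, and the network size below are all invented for illustration), a few lines with scikit-learn's MLPRegressor:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
temperature = rng.uniform(5, 35, size=(2000, 1))  # input: degrees Celsius
# Hidden "real-world" link: beer sales rise steeply once it gets warm, plus some noise.
beer_sales = 50 / (1 + np.exp(-(temperature[:, 0] - 20))) + rng.normal(0, 2, size=2000)

# A small network learns the transfer function from the labeled examples.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(temperature, beer_sales)

print(model.predict([[12.0], [28.0]]))  # cool day vs. warm day: the learned link shows up
```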

The next three posts will lead us deeper (pun intended!) into different uses of neural networks, such as classification, regression and reinforcement learning, and into the problems that occur when you play with them.

I hope you stay with me on this cruise through the rabbit hole.
