Serkan Piantino speaking at TOA Berlin 2016

How artificial neural networks copy the brain so AI can think faster than you

TOA.life Editorial
Published in TOA.life · 6 min read · Feb 8, 2017


  • Former co-lead of AI at Facebook, Serkan Piantino, explains how neural networks replicate our brain’s basic functions.
  • “Behind the scenes of Facebook or Google is AI that’s predicting whether you’re going to click on an ad or not.”
  • We all kind-of know that AI thinks like us — but how, exactly?

One of the most captivating keynote talks at TOA 2016 was given by Serkan Piantino, former co-lead of AI at Facebook, and founder of deep-learning hardware company Top 1 Networks.

Serkan wanted to flesh out the idea of what “AI” is, and what it does. So he took us from the very beginning (what is a neuron and how do we make artificial ones?) to the cutting edge (how neural networks can be used to identify individual objects in images).

It was a great talk, with the room hanging on every word — so we decided to break his talk into two parts and share it even wider.

In this post you’ll be able to get up to speed on how neural networks work and what their potential is; and then in part two — coming soon — Serkan describes what they can do. And the power is incredible: artificial neural networks have composed a “new” Shakespeare text, and help blind people “see” Facebook.

It’s amazing stuff, that might just re-wire your brain. And when you’ve read this, dive into part two and wrap your head around a world where computers are better at your job than you…

Hey, Serkan — it seems like we can do so much more with AI now than just a few years ago. Why?

As Serkan explains, it’s a combination of big leaps that together made today’s huge leaps possible.

“Today, we have powerful models driven by advances in neural networks. These days there’s a lot more data that represents how humans understand the world.

“Because we’ve evolved the software, we also have faster training, and we have really fancy hardware that lets us train on a humongous amount of data in a relatively short amount of time.

“These things mean there are many things we can do now that we could not do five years ago.”

Behind the scenes of Facebook or Google is a “function” that’s predicting whether you’re going to click on an ad or not.

Great! So: neural networks — can we start right from the beginning please?

OK, sure. But first, you need to know what a “function” is. Serkan can describe it best:

“Functions from data” seems very abstract; but really, functions can be anything in the real world. You interact everyday with services like Facebook or Google — and behind the scenes there is a “function” that’s predicting whether you’re going to click on an ad or a piece of newsfeed content or not.

“That’s a function: where the “inputs” are all the things it knows about you, and the “output” is the probability that you’ll click on that piece of content.
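That kind of function can be sketched in a few lines of Python. The feature names, weights, and the logistic squashing below are illustrative assumptions for the sketch, not Facebook’s actual model:

```python
import math

def click_probability(features, weights, bias):
    """A 'function' in Serkan's sense: the inputs are things we know
    about a user, the output is the probability of a click."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-score))  # squash the score to a 0..1 probability

# Hypothetical inputs: [hours on site today, past clicks on similar ads]
p = click_probability([1.5, 3.0], weights=[0.4, 0.8], bias=-2.0)
print(round(p, 3))  # prints 0.731
```

Anything with that shape — known facts in, a decision or probability out — fits the same mold.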

OK, if that’s a function, tell me more about those “inputs and outputs”…

Serkan again: “Let’s say we have a function that we want to compute — and we have no idea what this function might look like. But what we do have is a bunch of examples of inputs and outputs to that function.

“And so what we want to do is learn some approximation that might fit the data, so that we can approximate the function.”

An animation from Serkan’s presentation at TOA 2016

Here’s a visual example of this process: the curvy line is a function that we want to learn. The moving line progressively modifies itself to better approximate that function.

Serkan continues: “It jiggles around a little bit and it takes some time, but eventually it goes from being a really poor approximation of the function to being actually a quite good approximation.

“So you really can think of anything where you have to make a decision: inputs come in, and outputs go out as a function.”

This is machine learning. And it can be applied to human problems — and more.
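The “jiggling” Serkan describes can be sketched with simple gradient descent. The target function, learning rate, and loop counts below are illustrative choices, not anything from the talk:

```python
# Learn an approximation of an "unknown" function purely from
# input/output examples. Here the hidden function is y = 3x + 1;
# the learner only ever sees sampled (input, output) pairs.
examples = [(x, 3 * x + 1) for x in range(-5, 6)]

a, b = 0.0, 0.0        # our approximation y = a*x + b starts out poor
lr = 0.01              # learning rate: how big each "jiggle" is
for _ in range(2000):  # repeatedly nudge a and b to shrink the error
    for x, y in examples:
        err = (a * x + b) - y
        a -= lr * err * x  # gradient of the squared error w.r.t. a
        b -= lr * err      # gradient of the squared error w.r.t. b

print(round(a, 2), round(b, 2))  # ends up very close to the true 3 and 1
```

After enough passes the approximation settles onto the function that generated the examples — exactly the “poor approximation becomes quite good” process in the animation.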

Hmmm. Inputs, outputs and functions make sense… is that what a neural network is?

Kind of. Maybe it’s time to talk about how brains work… hey, come back! It’s actually not that complicated!

Serkan: “Figure (A), below, shows a neuron in the brain. A typical brain will have about 100 billion of these. They have things called dendrites which can be stimulated and receive signals.

“If the stimulation on that neuron exceeds some threshold value, then an electrical impulse travels down the axon to the end of a neuron: that’s what we call a neuron “firing”.

A slide from Serkan’s presentation at TOA 2016

“And so in Figure (B) we have the artificial version of this that we use in a computer — and it has the same structure! It has inputs (x1, x2, and x3 — which you can think of as the stimulation on the dendrites), and if all of the inputs to the artificial neuron exceed a threshold, then an output comes out (y).”
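A minimal sketch of Figure (B)’s artificial neuron, assuming an unweighted sum and an arbitrary threshold for simplicity (real artificial neurons weight each input, which is what training adjusts):

```python
def artificial_neuron(x1, x2, x3, threshold=1.5):
    """Fires (outputs 1) only when the combined stimulation on its
    inputs exceeds a threshold, like the biological neuron in Figure (A)."""
    stimulation = x1 + x2 + x3
    return 1 if stimulation > threshold else 0

print(artificial_neuron(1, 0, 0))  # below threshold: prints 0
print(artificial_neuron(1, 1, 1))  # above threshold: prints 1
```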

The “Big Sur” servers that we developed at Facebook for these neural networks are roughly a thousand times faster than Deep Blue

So it’s just like a real brain! What happens when you connect neurons to neurons, and the output of those neurons to other neurons?

Now we’re talking. Serkan?

“Artificial neurons like this have been explored for a long time and it’s only recently that we’re able to make use of them and train them to do interesting things. There’s a lot of neurons in the human brain, all connected.

“While there are about 100 billion neurons themselves, there are something like 100 trillion points where one neuron is connected to another in the brain.

“We can do the same thing with artificial neurons: we wire them up so they stack on top of each other, and the output of one becomes the input to another: this is an artificial neural network.”
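That wiring-up can be sketched as follows. The hand-picked weights are an illustrative assumption (in practice they are learned from data), chosen so the two-layer network computes XOR — something no single neuron can do, which is why stacking matters:

```python
def neuron(inputs, weights, threshold):
    # An artificial neuron with weighted inputs: fires if the
    # weighted stimulation exceeds its threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def tiny_network(x):
    # Layer 1: two neurons reading the raw inputs.
    hidden = [neuron(x, [1, 1], 0.5),   # fires if either input is on (OR)
              neuron(x, [1, 1], 1.5)]   # fires only if both are on (AND)
    # Layer 2: a neuron whose inputs are the *outputs* of layer 1.
    return neuron(hidden, [1, -2], 0.5)  # OR but not AND: this is XOR

for x in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(x, tiny_network(x))  # prints 0, 1, 1, 0
```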

Wait — how much power do you need to power a brain?

Hmmmm. A lot.

“Deep Blue, the supercomputer that famously beat world chess champion Garry Kasparov, was a 96-gigaflops machine.

“The “Big Sur” servers that we developed at Facebook for these neural networks are 56,000 gigaflops — roughly a thousand times faster than the supercomputers that dominated not too long ago!”

So now I know everything about neural networks… right?

Patience, young grasshopper. Serkan?

“Well, this is a relatively simple explanation of it, but the artificial neural network is the key workhorse behind all of the advances in AI that you hear about.”

So there’s a lot more to neural networks than that — not least the question: what can you do with them?

Here is part two of Serkan’s talk on TOA.life, where he describes how neural networks beat that hypothetical infinite number of monkeys to writing Shakespeare, and how it helped blind people “see”…

If you enjoyed this article, please consider hitting the ♥︎ button below to help share it with other people who’d be interested.

This talk was edited for clarity and length.

Get TOA.life in your inbox — and read more from TOA’s network of thought-leaders:

Sign up for the TOA.life newsletter

I, OS: debug your mind’s code — and hack yourself happy: Founder of Selfhackathon Patrycja Slawuta explains how to reprogram our human code

It’s hard enough building a startup — why should you care about “doing good?”: sustainability strategist Susan McPherson says “doing good” is not simply about giving: it’ll grow your business too.

