Serkan Piantino, TOA Berlin 2016

Facebook’s neural network beat the infinite number of monkeys — and wrote Shakespeare

TOA.life Editorial · Published in TOA.life
6 min read · Feb 14, 2017


  • Serkan Piantino, former Facebook AI co-lead, explains AI’s astonishing leaps beyond the limits of human ability — and where its own limitations lie.
  • “We made a network trained to learn Shakespearean text — and now we can make it generate completely synthetic ‘Shakespeare.’”
  • If AI is now better than humans at many visual tasks, and can generate original art, should creatives be concerned for their jobs?

Humans are no longer #1 — well, not at everything, at least. AI now beats us hands-down at many visual recognition tasks, for instance — and while that is probably a positive (it’ll give us self-driving cars), how do we feel now that AI can create art?

Serkan Piantino, former co-lead of AI at Facebook and founder of deep-learning hardware company Top 1 Networks, seems calm enough about our new reality.

For Serkan, AI that understands how to create new writing and visual art means a world augmented by deeper understanding, more enjoyable technology experiences and a greater appreciation of beauty.

In part one, we heard Serkan explain in simple terms how AI works by mimicking the brain’s own functions — now, in part two, we’ll look at how these functions can be turbo-charged to do things we can’t do ourselves.

It’s a brave and exciting new world, so check out the video and the key points below, and consider this: if a computer could create brand new Shakespeare plays as good as the originals, would you read them?

OK — I read part one and understand how you’re basing code on the brain. So what cool stuff can you do with this type of technology, and how will it make a difference to my life?

Well, it turns out that neural networks have been part of our lives for a while, as Serkan explains:

“One of the first convolutional neural networks was used for handwriting recognition — there was a time in the 1990s when this network was reading about 50% of all cheques written in the USA.

“Visual perception is something humans take for granted. We can look at a scene and immediately understand what’s going on, but computers have struggled with this. In fact, we are so sure that computers can’t do visual perception tasks that we use them — in the form of CAPTCHA tests — to “make sure that you’re not a robot”.

“But now we can feed every pixel of the image into a stack of these neural network models — and at the end we get a label for the image.

“These neural network filters look for patterns and track where they appear in the image — then they create new images to represent what they’ve filtered, and so on, until the image has been whittled down to a small set of factors the network can make a decision about.”
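
For readers who want to see what that stack looks like in code, here is a minimal, purely illustrative sketch in PyTorch (an assumption of tooling on our part, not anything Facebook has published): pixels go in, stacked convolutional filters look for patterns, and a final layer turns the small set of remaining factors into a label.

```python
# Minimal sketch of the pipeline described above: an image's pixels pass through
# a stack of convolutional filters, each looking for patterns, until a small set
# of features remains and a label is produced.
# Layer sizes and class count are illustrative, not any production model.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # filters scan for low-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # keep where patterns appear, shrink the image
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # filters over the previous filters' output
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # small set of factors -> label

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x.flatten(start_dim=1)
        return self.classifier(x)

model = TinyImageClassifier()
pixels = torch.randn(1, 3, 32, 32)       # one 32x32 RGB image (random stand-in)
label_scores = model(pixels)
print(label_scores.argmax(dim=1))        # index of the predicted label
```

Until it is trained on labelled images, of course, the filters are random and the predicted label is meaningless.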


How much of an image can a neural network understand — and what does it do with the information?

Here’s where we start veering into sci-fi territory: it turns out that AI can not only figure out what an image is, but also what’s in it. And it’s better at doing it than, er, we are:

“Facial recognition that’s powered by neural networks is far beyond what we would consider human performance.

Serkan Piantino, speaking at TOA Berlin 2016

“And something that was not even close to possible just two years ago was being able to segment and label individual objects within one image. We can label a stream of images coming into Facebook’s servers, and separate all the ones that contain fireworks, or food, or cats.

“This is the cutting edge, and we’re also seeing close to human performance on this type of task now. It can label which parts of the image are the road, which parts are other vehicles, and which parts are pedestrians, buildings, or sky.

“If you are lucky enough to own a Tesla there is a small chip that runs this type of network whenever you turn on Autopilot.”
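
Segmentation works the same way, except the network keeps a prediction for every pixel rather than collapsing the whole image into one label. A toy sketch, again in PyTorch and with an invented class list, might look like this:

```python
# Minimal sketch of per-pixel labelling ("segmentation") as described above:
# instead of one label per image, the network outputs a class score for every
# pixel (road, vehicle, pedestrian, building, sky ...).
# The architecture and class list are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["road", "vehicle", "pedestrian", "building", "sky"]

class TinySegmenter(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),   # 1x1 conv: a class score per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)                               # shape: (batch, classes, H, W)

model = TinySegmenter()
image = torch.randn(1, 3, 64, 64)
per_pixel_labels = model(image).argmax(dim=1)            # (1, 64, 64) map of class indices
print(per_pixel_labels.shape)
```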

But AI will never match humans at creative tasks, will it? We need humans to make all our art, writing and music… right?

Hmm, perhaps we should all start learning how to program neural networks instead. Serkan’s AI has beaten the hypothetical infinite monkeys to the task of writing new Shakespeare:

“One of the advances AI has made is that it can understand a sentence. We made a network that was trained to learn the pattern of its inputs based on Shakespearean text — and now we can make it generate completely synthetic “Shakespeare.”

“This is what it comes up with — pretty convincing Shakespearean text:

Slide from Serkan’s talk at TOA Berlin 2016

“The network creates different characters, gets the tense correct, and gets the iambic pentameter roughly right. Pretty impressive stuff, and we are working toward dialogue systems that we can all interact with: chatbots, for instance.

“We trained a network on a corpus of text, and then had it generate synthetic examples of Facebook posts — and we got much funnier results. Here’s a bunch of random Facebook posts that are completely synthetically generated:

Slide from Serkan’s talk at TOA Berlin 2016

“It understands some things and gets some of the structure… but it really doesn’t understand the world quite as well. My favourite post is, “I have an urgent Candy Crush exam.”

“You can see that it understands that “urgent” goes with “exam,” and that “Candy Crush” is something people do in class… but it doesn’t understand why the two don’t fit together!”
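
The Shakespeare and Facebook-post generators Serkan describes are language models: networks trained to predict what comes next in a text, then sampled over and over to produce new text. Below is a heavily simplified, untrained character-level sketch in PyTorch; the architecture and sampling loop are illustrative assumptions, not Facebook’s actual setup.

```python
# Minimal sketch of a character-level language model of the kind described
# above: trained to predict the next character of Shakespearean text, then
# sampled one character at a time to generate synthetic "Shakespeare".
# This toy model is untrained; sizes and the sampling loop are illustrative.
import torch
import torch.nn as nn

class CharGenerator(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)   # scores for the next character

    def forward(self, chars, state=None):
        x = self.embed(chars)
        out, state = self.rnn(x, state)
        return self.head(out), state

vocab = sorted(set("ROMEO: But soft, what light through yonder window breaks?"))
model = CharGenerator(vocab_size=len(vocab))

# Sampling loop: feed the last character back in, pick the next one at random
# from the model's predicted distribution, and repeat.
state, char = None, torch.tensor([[0]])
generated = []
for _ in range(40):
    scores, state = model(char, state)
    probs = torch.softmax(scores[:, -1], dim=-1)
    char = torch.multinomial(probs, num_samples=1)
    generated.append(vocab[char.item()])
print("".join(generated))   # gibberish until trained on a real Shakespeare corpus
```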

OK. This is powerful stuff, although maybe in hindsight that Creative Writing degree I took was a mistake. Tell me the good news!

“We’re combining visual perception with symbolic understanding, bringing these two techniques together. Networks can process the pixels [as image recognition], and then pass the result through language-generation networks.

“This means we can plug in photographs and generate captions. It’s trained on what people might actually caption a photo with.

Slide from Serkan’s talk at TOA Berlin 2016

“For this image, it generated this caption: “A group of people standing shopping at an outdoor market. There are many vegetables at the fruit stand.”

“This is pretty convincing, and something that we deployed to make it a little bit easier for blind users to understand the stuff that was scrolling through their newsfeed.”
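
Conceptually, the captioning system chains the two ideas together: an image network squeezes the pixels into a feature vector, and a language-generation network decodes that vector into words. The sketch below (PyTorch, with a made-up ten-word vocabulary and a greedy decoding loop) only shows the shape of such a pipeline; it is an illustration under our own assumptions, not the deployed Facebook model.

```python
# Minimal sketch of the combination described above: an image network encodes
# the pixels, and a language-generation network turns that encoding into a
# caption, one word at a time. The vocabulary, sizes and decoding loop are
# illustrative assumptions.
import torch
import torch.nn as nn

VOCAB = ["<start>", "<end>", "a", "group", "of", "people", "at", "an", "outdoor", "market"]

class CaptionNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Visual perception: pixels -> a single feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, hidden),
        )
        # Language generation: feature vector -> sequence of words
        self.embed = nn.Embedding(len(VOCAB), hidden)
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, len(VOCAB))

    def caption(self, image, max_words=8):
        h = self.encoder(image)                  # image features seed the hidden state
        c = torch.zeros_like(h)
        word = torch.tensor([VOCAB.index("<start>")])
        out = []
        for _ in range(max_words):
            h, c = self.rnn(self.embed(word), (h, c))
            word = self.head(h).argmax(dim=-1)   # greedy: most likely next word
            if VOCAB[word.item()] == "<end>":
                break
            out.append(VOCAB[word.item()])
        return out

model = CaptionNet()
photo = torch.randn(1, 3, 64, 64)                # a random stand-in photograph
print(model.caption(photo))                      # meaningless until trained on real captions
```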

In his talk, Serkan merely scratched the surface of the incredible opportunities that neural networks bring. If you’ve not yet watched his full talk, now’s your chance. Learn about how AI can create Anime or album art from scratch, how it remembers information in a “human” way, and how it learns languages that it hasn’t been trained to understand. It’s fascinating, astonishing and creative — a uniquely “TOA” talk.

In Part One of Serkan’s talk, we explored how neural networks are created to mirror the working of the human brain at the tiniest scale. It’s your simple primer to a wild new world.

If you enjoyed this article, please consider hitting the ♥︎ button below to help share it with other people who’d be interested.

This talk was edited for clarity and length.

Get TOA.life in your inbox — and read more from TOA’s network of thought-leaders:

Sign up for the TOA.life newsletter

I, OS: debug your mind’s code — and hack yourself happy: Founder of Selfhackathon Patrycja Slawuta explains how to reprogram our human code

It’s hard enough building a startup — why should you care about “doing good?”: sustainability strategist Susan McPherson says “doing good” is not simply about giving: it’ll grow your business too.

