Google I/O 2019 | Geoffrey Hinton Says Machines Can Do Anything Humans Can

Synced
May 10, 2019 · 5 min read

Artificial intelligence is closing the gap on humans. Machines are rapidly honing their skills in object recognition and natural language interaction, and advanced AI agents have already beaten human champions in board games, video games, and even debates. But surely some human skills will remain beyond the reach of bots? “Godfather of Deep Learning” Dr. Geoffrey Hinton thinks not. “We humans are neural nets. What we can do, machines can do,” he told a Google I/O audience yesterday.

Few figures in the artificial intelligence community are as respected as Dr. Hinton. The room was packed when he took to the stage at the Google I/O 2019 in Mountain View, California for a 40-minute chat with Wired Editor-in-Chief Nicholas Thompson. Dr. Hinton also reflected on his 40-year journey in deep learning and outlined his latest research on capsule networks.

While there are in theory different ways to create intelligent machines, Dr. Hinton is a firm believer in deep neural networks, an approach inspired by the structure of the human brain. “Neural networks have connections. Each connection has a weight on it, and that weight can be changed through learning. What a neural net does is take the activities on the connections times the weights, sum them up, and decide whether to send an output.”
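The computation Dr. Hinton describes can be sketched in a few lines. This is a minimal illustration (the function name, weights, and threshold are our own, not from the talk): each incoming activity is multiplied by its learned weight, the products are summed, and the unit decides whether to fire.

```python
# A minimal sketch of the unit Dr. Hinton describes: incoming activities
# times learned weights, summed, then a decision whether to send an output.
# Names and values here are illustrative, not from the talk.

def neuron_output(activities, weights, bias=0.0, threshold=0.0):
    total = sum(a * w for a, w in zip(activities, weights)) + bias
    return 1.0 if total > threshold else 0.0

# Two inputs with learned weights: 0.5*0.8 + 0.9*(-0.2) = 0.22 > 0, so it fires
print(neuron_output([0.5, 0.9], [0.8, -0.2]))
```

Learning, in this picture, is simply the process of adjusting those weights so the unit's decisions become useful.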

Of all the objects in this universe, the human brain is among the most complex, comprising some 100 billion interconnected neurons. We know that everything we sense from the outside world through vision, sound, smell, taste and touch is carried by signals transmitted across the synapses between these neurons. The brain’s deeper mechanisms, however, remain mysterious, and researchers have a long way to go before they can hope to fully understand its inner workings.

That hasn’t stopped computer scientists from modeling their machines on brains. Dr. Hinton pointed to pioneer Alan Turing, whose “unorganized machines,” introduced some 70 years ago, were an early example of randomly connected binary neural networks and helped usher in the development of neural machine systems. In the 1980s Dr. Hinton achieved a historic breakthrough by helping introduce backpropagation, which enabled efficient training of artificial neural networks and dramatically improved their performance.
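The core idea of backpropagation can be shown on a toy network. This is a hedged sketch, not Hinton’s original formulation: a two-layer linear network with one weight per layer learns to map an input to a target by propagating the error gradient backward through the chain rule.

```python
# Toy backpropagation: a two-layer linear net (one weight per layer)
# learns to map x -> target. The error gradient flows backward through
# the chain rule to update both weights. Purely illustrative.

def train(x=0.5, target=1.0, w1=0.2, w2=0.3, lr=0.5, steps=200):
    for _ in range(steps):
        h = w1 * x           # forward pass: hidden activation
        y = w2 * h           # forward pass: output
        err = y - target     # dLoss/dy for loss = 0.5 * err**2
        # backward pass: chain rule distributes the error gradient
        grad_w2 = err * h
        grad_w1 = err * w2 * x
        w2 -= lr * grad_w2
        w1 -= lr * grad_w1
    return w1 * x * w2       # final prediction

print(round(train(), 3))
```

The same backward sweep of gradients, scaled up to millions of weights and nonlinear units, is what makes training deep networks tractable.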

Neural network research may also shed light on other human brain activities. Dr. Hinton recounted how in the 1980s he and other researchers proposed Boltzmann Machine training algorithms that alternate between a learning phase, which strengthens network connections, and an unlearning phase, which weakens them. Researchers noticed a similarity between this behaviour and human dream states, which Dr. Hinton characterized as a sort of unlearning process.
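The two phases can be sketched as a single weight update. This is an illustrative toy (the function name and numbers are ours): the learning phase strengthens a connection in proportion to how often two units are active together when data is clamped on the network, and the unlearning phase weakens it in proportion to how often they are active together when the network runs freely, the dream-like state.

```python
# Toy Boltzmann Machine weight update: strengthen by the data-clamped
# co-activation (learning phase), weaken by the free-running co-activation
# (unlearning phase). Illustrative sketch only.

def boltzmann_update(w, data_corr, model_corr, lr=0.1):
    # data_corr: average co-activation with data clamped (learning)
    # model_corr: average co-activation running freely (unlearning)
    return w + lr * (data_corr - model_corr)

w = boltzmann_update(0.0, data_corr=0.9, model_corr=0.4)
print(w)  # connection strengthened: the data correlation dominates
```

When the free-running statistics dominate instead, the same rule decrements the connection, which is the “unlearning” Dr. Hinton likened to dreaming.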

In the 1990s, progress in deep learning stagnated, in large part due to relatively small datasets and insufficient compute power. Support vector machines (SVMs), which can learn efficient discriminative classifiers from small amounts of labeled data, outperformed neural nets during this period.

Dr. Hinton however did not waver from his commitment to neural networks. “There are different ways of learning connection strengths. The brain uses one of them. Certainly you have to have some way of learning these connection strengths, and I never doubted that,” he told the I/O audience.

The recent and rapid advances in deep learning began with Dr. Hinton’s 2006 paper A fast learning algorithm for deep belief nets, which showed how a deep belief network with many hidden layers could learn a well-behaved generative model representing the joint distribution of handwritten digit images and their labels. Dr. Hinton and his collaborators then applied deep architectures to speech recognition, molecular prediction, and computer vision, most famously in the renowned 2012 NIPS paper that introduced AlexNet, a convolutional neural network trained on GPUs that dramatically improved image recognition on the ImageNet dataset.

Some of deep learning’s remarkable achievements, particularly in the area of natural language processing, surprised even Dr. Hinton: “If you told me in 2012 that in the next five years we will be able to translate between different languages using the same technology, recurrent networks and just stochastic gradient descent, I wouldn’t believe you.”

Today, deep learning competes favourably with other technologies in fields such as robotic control, but not so much in abstract reasoning — the ability to identify patterns and rules learned from data and apply them to solve new problems. Dr. Hinton sees abstract reasoning as the last major hurdle for neural nets to overcome on the road to humanlike intelligence.

In his talk Dr. Hinton suggested there is nothing the human brain does that artificial neural networks will not eventually be able to do, even, for example, achieving emotion or consciousness: “One hundred years ago, if you asked people what life is, they’d say ‘living things have a vital force; that’s the difference between being alive or dead.’ Now we see that as a prescientific answer once you understand the biochemistry. I believe consciousness is a similar attempt to explain mental phenomena with some special essence.” Dr. Hinton suggested that fully understanding how humans do the things they do could enable machines to effectively recreate even this “special essence” said to underlie consciousness.

Nicholas Thompson, Dr. Hinton’s partner in the chat, challenged the idea that machines could learn to perform any and all human brain activities: “There is no emotion that couldn’t be recreated? There is nothing of humans that couldn’t be recreated by fully functional neural networks? And you are 100 percent confident on this?”

Dr. Hinton replied that he was “99.9 percent sure.”

“What about that 0.1 percent?”

“We might be in a big simulation,” quipped Dr. Hinton, evoking a hearty round of laughter and applause from the audience.

Journalist: Tony Peng | Editor: Michael Sarazen



Written by Synced
AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global
