Geoffrey Hinton on Images, Words, Thoughts, and Neural Patterns

Synced · Published in SyncedReview · 3 min read · Jan 31, 2018

The “Godfather of AI” Professor Geoffrey Hinton told a packed room of professionals and students today that machine learning is reinventing our relationship with our own thoughts. The Professor mused on AI, deep learning, and the nature of intelligence in his 60-minute talk at the University of Toronto.

Professor Hinton has been contemplating how the brain works for decades. In the talk he asked the question “What is a thought?” His response: “It is a big neural activity pattern. We refer to it by using a symbol string that causes it, but the way we refer to it is quite different from what it is.”

The Professor comes from an illustrious family of pioneers and scientists. His great-great-grandfather George Boole came up with Boolean logic, which made modern computing possible; his great-grandfather was a mathematician; and his father’s cousin Joan Hinton was a nuclear physicist and one of the few women scientists who worked on the Manhattan Project at Los Alamos.

Professor Hinton rose to fame at the 2012 ImageNet object recognition challenge, whose training set comprised over one million high-resolution images spanning 1,000 different classes of objects. For the task, Hinton’s team designed AlexNet, an eight-layer network with five convolutional layers, whose error rate was an impressive ten percentage points lower than that of the University of Tokyo runners-up.
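The layer structure described above can be laid out concretely. The sketch below lists AlexNet’s eight weight-bearing layers as reported in the 2012 paper (filter counts and kernel sizes are from that paper, not from the talk):

```python
# AlexNet's eight weight layers: five convolutional, then three fully connected.
# Filter counts and kernel sizes follow Krizhevsky et al. (2012); this is a
# descriptive sketch, not a runnable network definition.
alexnet_layers = [
    ("conv1", "96 filters, 11x11, stride 4"),
    ("conv2", "256 filters, 5x5"),
    ("conv3", "384 filters, 3x3"),
    ("conv4", "384 filters, 3x3"),
    ("conv5", "256 filters, 3x3"),
    ("fc6",   "4096 units"),
    ("fc7",   "4096 units"),
    ("fc8",   "1000-way softmax over the ImageNet classes"),
]

n_conv = sum(name.startswith("conv") for name, _ in alexnet_layers)
print(n_conv, len(alexnet_layers))  # 5 convolutional layers out of 8 total
```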

Professor Hinton’s success applying convolutional neural networks to pixels was soon repeated with words, revolutionizing the field of machine translation, which had been a nagging problem for symbolic AI. The artificial neural network method “converts the input sentence into a big pattern of neural activities that is language independent. The pattern is a thought vector, which will be converted into a sentence in the target language,” explained Professor Hinton.

In order to deal with sequential language, the source sentence is fed word by word into an encoder recurrent neural network (RNN), whose final state is handed to a decoder RNN that generates the translation. This is the method Google Translate uses to translate some 100 billion words daily. The rise of neural networks has been good news for machine translation, but, as Professor Hinton quipped, “very bad news for linguists like Chomsky who insist that language is innate and you can’t learn it through data.”
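The encoder–decoder idea Professor Hinton describes can be sketched in a few lines. The toy below, in plain numpy with randomly initialized weights standing in for trained parameters (all sizes and names are hypothetical), encodes a sequence of word ids into a single hidden vector — the “thought vector” — and then unrolls a decoder RNN from that vector to emit target-language word ids:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary sizes and dimensions, for illustration only.
src_vocab, tgt_vocab, emb_dim, hid_dim = 10, 12, 8, 16

# Randomly initialized parameters stand in for trained weights.
E_src = rng.normal(size=(src_vocab, emb_dim))             # source embeddings
W_enc = rng.normal(size=(hid_dim, hid_dim + emb_dim)) * 0.1
E_tgt = rng.normal(size=(tgt_vocab, emb_dim))             # target embeddings
W_dec = rng.normal(size=(hid_dim, hid_dim + emb_dim)) * 0.1
W_out = rng.normal(size=(tgt_vocab, hid_dim)) * 0.1       # hidden -> vocab scores

def encode(src_ids):
    """Run the encoder RNN; the final hidden state is the 'thought vector'."""
    h = np.zeros(hid_dim)
    for i in src_ids:
        h = np.tanh(W_enc @ np.concatenate([h, E_src[i]]))
    return h

def decode(thought, steps=5):
    """Greedily unroll the decoder RNN from the thought vector."""
    h, prev, out = thought, 0, []
    for _ in range(steps):
        h = np.tanh(W_dec @ np.concatenate([h, E_tgt[prev]]))
        prev = int(np.argmax(W_out @ h))  # most likely next target word id
        out.append(prev)
    return out

thought = encode([3, 1, 4, 1, 5])   # a "sentence" of source word ids
translation = decode(thought)
print(thought.shape, translation)
```

With trained weights, the same shape of computation underlies production systems; here the point is only that the intermediate representation is a dense, language-independent vector rather than a symbol string.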

Professor Hinton also spoke on the human thinking process and analogical reasoning, and philosophized on the nature of our sensory data — whether all we see, hear and even think could be represented in ways different from how we understand them to be. “Sentences can evoke thoughts but they are nothing like thoughts. An array of pixels can evoke a scene, but our representation of a scene is not an array of pixels.”

Journalist: Meghan Han | Editor: Michael Sarazen

Sync AI @ Synced | 机器之心 | Twitter

Dear Synced reader, in our latest “Trends of AI Technology Development Report”, our tech analysts cut through the technical jargon and misleading media coverage to break AI down into easily understandable parts. Whether you are a researcher, tech enthusiast, entrepreneur, investor, or student, the report will help you sort out the basics and provide you with the necessary background to move forward.

Click here to get the full report and subscribe to our upcoming AI newsletter.



AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global