Jürgen Schmidhuber: True Artificial Intelligence Will Change Everything

Synced · Published in SyncedReview · Jul 10, 2017

From May 27–28, Synced, a world-leading AI information and service platform, hosted the first Global Machine Intelligence Summit (GMIS). Professionals from all over the world gathered in Beijing to present their insights into the AI industry. Spanning two days, the conference was jam-packed with events, including 32 presentations by 47 expert presenters, five breakout sessions, four panel discussions, and one human vs. machine competition.

Renowned figures, including “The Father of LSTM,” Jürgen Schmidhuber; the author of Artificial Intelligence: A Modern Approach, Stuart J. Russell; and Chief AI Officer at Citadel, Li Deng, were part of a host of AI experts who spoke at the inaugural GMIS summit in Beijing.

On the second day of GMIS, Jürgen Schmidhuber was the first to take the stage. Dressed like a scientist-cyborg, in a white outfit with a smartphone camera peeking out over the crowd from his breast pocket, he took the audience on a quick journey through artificial intelligence in the 20th century before making his own predictions in a presentation entitled “True Artificial Intelligence Will Change Everything.”

Jürgen Schmidhuber’s foray into artificial intelligence started in the 1970s with a lofty vision, “create an artificial intelligence that is smarter than me and retire.” He wanted to build machines that can teach themselves. His decades of research have now led to the technology that powers the voice assistants and translators in our smartphones.

Schmidhuber is Co-Director of the Dalle Molle Institute for Artificial Intelligence Research (IDSIA) in Manno, Ticino, in southern Switzerland. He also teaches at the University of Lugano and the Lucerne University of Applied Sciences and Arts. He completed his undergraduate studies at Technische Universität München in Munich, Germany, in 1987, and went on to finish his doctoral studies there in 1991.

Once his studies were complete, he dived into the field of deep learning neural networks and emerged as a pioneering scientist. His research team at IDSIA and Technische Universität München invented a recurrent neural network model that has won international awards. He received the 2013 Helmholtz Award from the International Neural Network Society and the 2016 Neural Networks Pioneer Award from the Institute of Electrical and Electronics Engineers.

Schmidhuber’s interest in artificial intelligence began during his years as an undergraduate student. His thesis presented a robot that could teach itself and improve its own systems. The project introduced the idea of meta-programs, which operate behind the computer system and automatically revise its program code. Meta-programs can improve specific functions and revise, or even replace, their own learning algorithms. Building this type of self-improving artificial intelligence was Schmidhuber’s goal, and it paved the way for his later research on recursive, self-optimizing algorithms.

The vision for this model came to Schmidhuber in the 1970s, and it motivated him to study math and computer science at college. In his talk, he mentioned that his role model was Albert Einstein. There was a moment when he realized that if he could build something smarter than himself, or even Einstein, he could make a serious impact. In 1987, this formed the basis for his thesis and led him into a myriad of subjects. Nowadays he sees his vision approaching reality.

As his work progressed, Schmidhuber developed famous deep learning techniques like Long Short-Term Memory. In 1997, he and Sepp Hochreiter published a paper that introduced a method for adding memory functionality to artificial neural networks. The resulting system imitates the human brain in that it remembers previous information to help it understand something new: the network loops information back on itself, bringing previously seen text or images into the new context to improve its interpretation. They called this method Long Short-Term Memory, or LSTM.
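
For readers who want to see that memory mechanism concretely, below is a minimal sketch of a single LSTM step written from scratch in NumPy. It is an illustration of the general gating idea, not Hochreiter and Schmidhuber’s original code or notation, and all sizes are made up.

```python
# A minimal, illustrative LSTM step in NumPy (not the original 1997 paper's code).
# The forget, input, and output gates decide what the cell state keeps, writes, and reveals.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One time step: x is the input, (h_prev, c_prev) the previous hidden and cell states."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b      # pre-activations for all four gates
    i = sigmoid(z[0 * hidden:1 * hidden])        # input gate: how much new content to write
    f = sigmoid(z[1 * hidden:2 * hidden])        # forget gate: how much old memory to keep
    o = sigmoid(z[2 * hidden:3 * hidden])        # output gate: how much memory to reveal
    g = np.tanh(z[3 * hidden:4 * hidden])        # candidate content to store
    c = f * c_prev + i * g                       # updated long-term cell state ("memory")
    h = o * np.tanh(c)                           # updated short-term hidden state (output)
    return h, c

# Tiny demo: push a random 5-step sequence through one 8-unit LSTM cell.
rng = np.random.default_rng(0)
inp, hid = 3, 8
W = rng.standard_normal((4 * hid, inp + hid)) * 0.1
b = np.zeros(4 * hid)
h, c = np.zeros(hid), np.zeros(hid)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(inp), h, c, W, b)
print("final hidden state:", np.round(h, 3))
```

The key line is `c = f * c_prev + i * g`: old information survives from step to step unless the forget gate actively discards it, which is what lets the network carry context across long sequences.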

Schmidhuber believes that LSTM functions similarly to the human brain. There are billions of neurons in our cerebral cortex, and each one works like a small processor. Some take care of inputs, some handle image capture, and others process thoughts. Humans also have nerves that register pain and control muscles. These units are connected and communicate with each other when a task is executed, and the strength of each connection changes as the human learns. This persistent, adaptive connectivity inspired Schmidhuber’s idea for LSTM.

Nowadays, LSTM is used in many technological applications. Computer systems that use LSTM can learn complicated tasks such as language translation, image analysis, file extraction, speech recognition, image recognition, handwriting recognition, chatbot control, and music synthesis, as well as disease, click-rate, and stock prediction. Schmidhuber notes that Google drastically improved the speech recognition in Android phones and other devices by using an LSTM trained with Connectionist Temporal Classification (CTC), a method published by Schmidhuber’s lab in 2006.
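
As a rough illustration of that recipe, the sketch below pairs a bidirectional LSTM with a CTC loss in PyTorch, so the network can be trained to transcribe unsegmented label sequences (for example, characters from audio frames) without any frame-level alignment. The tensors and sizes are made up for the example; this is not Google’s or the lab’s actual code.

```python
# Hedged sketch: an LSTM acoustic model trained with CTC loss (PyTorch), illustrative sizes only.
import torch
import torch.nn as nn

T, N, F, H, C = 50, 4, 40, 128, 29       # frames, batch, features, hidden units, labels (blank = 0)

lstm = nn.LSTM(input_size=F, hidden_size=H, bidirectional=True)
proj = nn.Linear(2 * H, C)               # map LSTM outputs to per-frame label scores
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, F)                 # a batch of fake feature sequences
out, _ = lstm(x)                         # (T, N, 2H)
log_probs = proj(out).log_softmax(-1)    # (T, N, C) per-frame log-probabilities

targets = torch.randint(1, C, (N, 10))   # unaligned transcripts, 10 labels each (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                          # end-to-end gradients: no frame-level labels needed
print("CTC loss:", loss.item())
```

CTC works by summing over every frame-level alignment that collapses to the target sequence, which is why no hand-segmented training data is required.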

Baidu uses CTC for their speech recognition products as well. The Apple iPhone uses LSTM in QuickType and Siri. Microsoft not only uses LSTM for its speech recognition technology, but also for its Photo-Real Talking Head and code writing applications. Amazon’s Alexa also communicates with its users through bidirectional LSTM. Google uses the model in a much wider range of applications, such as image caption generation, automatic email answering, and its new personal assistant Allo. Since 2016, LSTM has also dramatically improved Google Translate. According to IDSIA, “a substantial fraction of the awesome computational power in Google’s datacenters is now used for LSTM.”

As he explained his invention, Schmidhuber addressed his listeners, “The Long Short-Term Memory, maybe some of you don’t know it, but you have it in your pockets. You have it in your pocket because now it’s doing all the speech recognition in [almost] all the smartphones. … Lots of other applications [also] exist.”

Schmidhuber’s research teams have won awards in various machine learning competitions, including in medical image recognition. In fact, machine learning methods have already outperformed human experts in many such tasks and are well on their way to becoming real-life applications.

“[The image above] is a slice of a female breast, and some of the cells you see there are dangerous. They are in a pre-cancer stage. Mitosis cells, as they are called.” Schmidhuber continued, “Normally you need a trained doctor to look at all these images to say, ‘this is dangerous, this is harmless’ and so on… We could train our neural networks to do the same thing. Of course, that was at a time when computers were ten times more expensive than they are now. Today for the same price, we can do ten times as much.”

This video shows an experiment with a humanoid robot that learns to build a repertoire of skills (topple, grasp, place) from raw pixel data, driven by its own curiosity and without prior knowledge. It is the first robot to do so. https://www.youtube.com/watch?v=OTqdXbTEZpE

After LSTM, Schmidhuber’s team moved on to general-purpose artificial intelligence projects. In 2015, they built a self-learning humanoid robot that uses its robotic arms to interact with its environment and learn concepts like gravity. The project has become a milestone in Schmidhuber’s pursuit of self-learning machines. He predicts that in the coming years, humans will be able to create systems that are as intelligent as primates. Considering that artificial intelligence has only been under development for about 70 years, this is an incredible pace compared to the billions of years over which life as we know it evolved.
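
The “curiosity” that drives such a robot has a simple formal core in Schmidhuber’s work: the agent rewards itself for improving its own predictive model of the world. The toy sketch below shows that idea with a tiny linear model; the names and numbers are illustrative placeholders, not the robot’s implementation.

```python
# Toy sketch of curiosity as learning progress: intrinsic reward = drop in prediction error
# after the agent's world model learns from a new experience. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                                  # tiny linear "world model"

def prediction_error(w, x, y):
    return float((w @ x - y) ** 2)

for step in range(5):
    x = rng.standard_normal(3)                   # what the agent observed
    y = x.sum() + 0.1 * rng.standard_normal()    # what actually happened next
    before = prediction_error(w, x, y)
    w += 0.1 * (y - w @ x) * x                   # learn from the experience (one SGD step)
    after = prediction_error(w, x, y)
    curiosity_reward = before - after            # reward = how much the model improved
    print(f"step {step}: intrinsic reward = {curiosity_reward:.3f}")
```

An agent maximizing this kind of reward seeks out experiences it can still learn from, and loses interest in both the fully predictable and the hopelessly random.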

Schmidhuber also talked about his artificial intelligence company, Nnaisense, the product of Schmidhuber’s labs in Munich and Switzerland. The name plays on the word “nascence” and on “NNAI,” neural network-based artificial intelligence. Nnaisense is made up of five co-founders (CEO Faustino Gomez, Jan Koutnik, Jonathan Masci, Bas Steunebrink, and Schmidhuber), consultants (Sepp Hochreiter, Marcus Hutter, and Jaan Tallinn), and first-class engineers and scientists.

The company has already launched profitable projects in the manufacturing and finance sectors. Nnaisense views its present success as only a small beginning: in the long run, these achievements can still be surpassed through meta-learning and machine curiosity. These pioneering methods can continue to be used to optimize the efficiency of search engines and of large-scale reinforcement learning neural networks.

At the end of his speech, Schmidhuber gave his thoughts on the future. In 2014, he calculated that the time spans between important events in the history of the universe have been shrinking exponentially: each new interval is roughly a quarter of the length of the previous one. Extrapolating this pattern, the next big event would appear to be set for around 2030.
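
The arithmetic behind such a convergence claim is a geometric series: if each interval is a quarter of the previous one, the remaining intervals sum to a finite amount of time, so the dates pile up at a limit point. The numbers below are purely illustrative, chosen to land near the year mentioned in the talk; they are not Schmidhuber’s actual data.

```python
# Illustrative only: intervals shrinking by a factor of 4 sum to a finite limit.
first_interval = 12.0                      # hypothetical length of the current interval, in years
start_year = 2014                          # year of the calculation mentioned above

remaining = first_interval / (1 - 0.25)    # T + T/4 + T/16 + ... = T / (1 - 1/4)
print(f"events converge around the year {start_year + remaining:.0f}")   # ~2030
```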

What will follow? It’s hard to predict. Schmidhuber believes that eventually artificial intelligence will replace humans in space exploration. That may be far off; nonetheless, this latest technological revolution is sure to transform our lives.

Author: Jiaxin Su | Editor: Nicholas Richards


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global