Berlin: Breaking the Wall Between Human and Artificial Intelligence

Synced · Published in SyncedReview · 3 min read · Nov 10, 2017

Artificial intelligence is on every country’s development agenda, alongside technologies such as robotics, neuroscience, and quantum computing. At the International Conference on Future Breakthroughs in Science and Society in Berlin, leading AI researcher Dr. Jürgen Schmidhuber spoke on November 9, the anniversary of the fall of the Berlin Wall. Much like that epoch-making event 28 years ago, AI technology is breaking down limitations and opening up new opportunities.

Dr. Schmidhuber speaking at the 2017 Berlin Falling Walls Conference

A long-time advocate of artificial general intelligence (AGI), Dr. Schmidhuber embarked on his research career in 1987 with a diploma thesis describing “meta-learning” programs, which can revise or change their own learning algorithms. In 1997, he co-authored a paper with Sepp Hochreiter that added a gated memory cell to recurrent neural networks, pioneering the methodology that would become known as “Long Short-Term Memory,” or LSTM. Dr. Schmidhuber and his PhD students Felix Gers and Fred Cummins further developed the method in 2000, improving speech and handwriting recognition capabilities. To date, this recurrent neural network architecture has proved extremely effective.
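The core LSTM idea — a memory cell guarded by learned gates that decide what to forget, what to write, and what to emit — can be sketched in a few lines. This is a simplified NumPy illustration, not the original 1997 formulation; the weight layout and variable names here are arbitrary choices for the demo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: gates control the memory cell c."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.size
    f = sigmoid(z[0:n])          # forget gate: what to erase from memory
    i = sigmoid(z[n:2*n])        # input gate: what to write to memory
    o = sigmoid(z[2*n:3*n])      # output gate: what to reveal
    g = np.tanh(z[3*n:4*n])      # candidate update
    c = f * c_prev + i * g       # cell state carries long-term information
    h = o * np.tanh(c)           # hidden state is the step's output
    return h, c

# Tiny demo: run a random 5-step sequence through one cell.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
```

Because the cell state is updated additively rather than overwritten, gradients can flow across many time steps — the property that lets LSTM remember over long sequences.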

Dr. Schmidhuber says LSTM functions similarly to the human cortex. There are tens of billions of neurons in our cerebral cortex, and each works like a small processor. Some handle sensory input, some process images, and others support thought. Humans also have neurons that register pain and neurons that control muscles. These units are connected and intercommunicate when a task is executed, and the strengths of their connections change as the human learns. These persistent, adaptable connections were the inspiration behind Dr. Schmidhuber’s early LSTM research.

Dr. Schmidhuber notes that Google drastically improved speech recognition on Android phones and other devices by using an LSTM program trained with Connectionist Temporal Classification (CTC), a method first published by Dr. Schmidhuber’s lab in 2006. Today, LSTM also powers QuickType and Siri on almost one billion iPhones. And Facebook, which performs over 4.5 billion translations every day, likewise relies on LSTM.
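CTC lets a network label unsegmented audio by summing the probability of every alignment (including blanks and repeats) that collapses to the target transcript. A minimal sketch of the CTC forward algorithm is below — an illustration of the idea, not the lab’s published implementation; `ctc_loss` and its arguments are names chosen for this demo.

```python
import numpy as np

def logsumexp(*xs):
    """Numerically stable log(sum(exp(x))) over scalar arguments."""
    xs = [x for x in xs if x > -np.inf]
    if not xs:
        return -np.inf
    m = max(xs)
    return m + np.log(sum(np.exp(x - m) for x in xs))

def ctc_loss(log_probs, target, blank=0):
    """Negative log-likelihood of `target` under CTC.

    log_probs: (T, C) per-frame log class probabilities.
    target: list of label indices (without blanks).
    """
    # Extended label sequence with blanks interleaved: ^ a ^ b ^ ...
    ext = [blank]
    for s in target:
        ext += [s, blank]
    T, S = log_probs.shape[0], len(ext)

    alpha = np.full((T, S), -np.inf)     # alpha[t, s]: log prob of prefixes
    alpha[0, 0] = log_probs[0, blank]
    if S > 1:
        alpha[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                       # stay on same symbol
            if s > 0:
                a = logsumexp(a, alpha[t - 1, s - 1])  # advance one symbol
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logsumexp(a, alpha[t - 1, s - 2])  # skip a blank
            alpha[t, s] = a + log_probs[t, ext[s]]

    tail = alpha[T - 1, S - 1]
    if S > 1:
        tail = logsumexp(tail, alpha[T - 1, S - 2])
    return -tail

# Demo: 3 frames, 2 classes (blank=0, label=1), uniform predictions.
lp = np.log(np.full((3, 2), 0.5))
loss = ctc_loss(lp, [1])
# 6 of the 8 equally likely frame paths collapse to [1], so loss = -log(6/8).
```

Summing over alignments is what removes the need for pre-segmented training data — the property that made CTC-trained LSTMs practical for large-scale speech recognition.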

Post-LSTM, Dr. Schmidhuber and his team moved on to all-purpose AI projects. As the industry has begun to reap the benefits of deep learning, ambitious researchers are shifting focus to more fundamental problems, and also zooming out to the big picture and the big prize in all of this: human-level artificial general intelligence.

Dr. Schmidhuber’s view of artificial general intelligence rests on a much broader vision, one in which artificial intelligence soars into space and founds its own species. Is it too early for such optimism, or is it time for us to contemplate grander themes? For more on the conference, visit: http://falling-walls.com/conference

Journalist: Meghan Han | Editor: Michael Sarazen


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global