Yann LeCun: Obstacles on the path to AI

Few names are more closely tied to deep learning than Yann LeCun's. He is a renowned computer scientist whose primary research spans machine learning, computer vision, robotics, and computational neuroscience. LeCun is best known for his work on optical character recognition and convolutional neural networks (CNNs), which has earned him the title of father of convolutional nets.

In the late 1980s, he joined Bell Labs, where he developed the convolutional network method, which proved extremely useful for handwriting recognition: at its peak, a large share of the checks processed in the United States were read by this technology. In the mid-1990s, even as neural network research went into recession, LeCun was one of the few scientists who remained confident in it. In 2003, he became a professor at New York University, where he went on to found the Center for Data Science and contributed a great deal to deep learning. In 2013, he joined Facebook's newly founded lab, Facebook AI Research (FAIR), leading the company's AI research with a focus on natural language processing (NLP), computer vision, and pattern recognition. Synced has edited and translated an exclusive IEEE interview with LeCun; click here to read it.

In October 2015, LeCun attended the Bay Area Machine Learning Symposium in California and delivered a talk titled "Obstacles on the path to AI."

The Bay Area Machine Learning Symposium is a conference for machine learning researchers in the San Francisco Bay Area, aimed at building community between academic and industrial institutions. The organizing committee includes Samy Bengio from Google Research, Andrew Ng from Baidu, and Joaquin Quiñonero Candela from Facebook, among others.
 
LeCun gave his talk the subtitle "How I learned to stop worrying and love unsupervised learning," signaling an optimistic attitude toward unsupervised learning, which he believes will be a powerful tool for building strong AI.

Here we highlight the key points of the talk:

1. The cooperation between representation learning, reasoning, and decision-making. The goal of representation learning is to find better ways to build appropriate models from massive amounts of unlabeled data.
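
As a toy illustration of that goal, the sketch below learns a one-dimensional representation of unlabeled 2-D points by extracting their top principal component with power iteration. This is a minimal, hypothetical example for intuition only, not anything presented in the talk:

```python
# Illustrative sketch (not LeCun's method): representation learning in its
# simplest form -- summarize unlabeled 2-D points by a single direction,
# the top principal component, found via power iteration in plain Python.

def top_component(points, iters=100):
    """Power iteration on the 2x2 covariance of `points` (list of (x, y))."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Covariance matrix entries
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    vx, vy = 1.0, 0.0  # arbitrary initial direction
    for _ in range(iters):
        nx = cxx * vx + cxy * vy   # multiply by covariance matrix
        ny = cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return vx, vy

# Unlabeled points lying roughly along the line y = x
data = [(i + 0.1 * (-1) ** i, i) for i in range(10)]
vx, vy = top_component(data)
# The learned direction is close to (1/sqrt(2), 1/sqrt(2)).
```

No labels appear anywhere: the structure is recovered from the data alone, which is the essence of the representation-learning goal described above.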

2. The limitation of reinforcement learning: with representation learning involving billions of parameters, learning from reward alone becomes prohibitively slow. Reinforcement learning, a branch of machine learning, seeks to maximize cumulative reward over time. It is inspired by behaviorist learning theory in psychology, which holds that learning is a relatively permanent change in behavior brought about by experience or practice; the behavioral approach focuses on how the environment shapes behavior. LeCun metaphorically calls reinforcement learning "the cherry on the cake," which may sound like trolling, but the point is serious: there is no way to learn billions of parameters from so sparse a signal in a reasonable amount of time.

3. Applications of unsupervised learning in natural language processing (e.g., phrase embedding, semantic composition, and question-answering systems). There are two main learning paradigms in machine learning: supervised and unsupervised. Supervised learning is applied to labeled data, for example an image database with specific annotations and descriptions; in unsupervised learning, by contrast, the computer must discover the subsets and subgroups in unlabeled data on its own.
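
A minimal sketch of the unsupervised side of that split, assuming nothing beyond plain Python: the 1-D k-means routine below is handed unlabeled numbers and must discover the two groups by itself.

```python
# Minimal illustration of unsupervised learning: 1-D k-means with k=2.
# No labels are provided; the algorithm finds the two clumps on its own.

def kmeans_1d(values, iters=20):
    c0, c1 = min(values), max(values)   # initialize centers at the extremes
    for _ in range(iters):
        # assign each value to its nearest center
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        # move each center to the mean of its group
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return sorted((c0, c1))

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]   # two obvious clumps, no labels
centers = kmeans_1d(data)
# centers is approximately [1.0, 9.0]
```

A supervised learner would instead be handed (value, group) pairs; the entire difference between the two paradigms is whether that second column exists.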

4. Memory networks: how do we simulate a hippocampus-like memory module? Humans have two hippocampi, one in the temporal lobe of each hemisphere. The hippocampus, part of the limbic system, serves the functions of memory and spatial orientation. LeCun states that current networks cannot remember things for long (the cortex holds a memory for only about 20 seconds), so we need a hippocampus-like module as a separate memory. Jason Weston's memory networks are suggested as further reading.
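
The core retrieval idea behind such a memory module can be caricatured in a few lines: store facts in addressable slots, then answer a question by returning the best-matching slot under a similarity score. This bag-of-words toy is a hypothetical sketch, not Weston et al.'s trained model:

```python
# Toy sketch of the memory-network idea: an external, addressable memory
# that is read by similarity, standing in for a "hippocampus" alongside
# the short-lived activations of the "cortex". A caricature, not the
# published Memory Networks model.

def to_bag(sentence):
    """Lowercased set of words, trailing punctuation stripped."""
    return set(sentence.lower().rstrip(".?").split())

class ToyMemory:
    def __init__(self):
        self.slots = []                 # long-term store of facts

    def write(self, fact):
        self.slots.append(fact)

    def read(self, question):
        qb = to_bag(question)
        # retrieve the stored fact sharing the most words with the question
        return max(self.slots, key=lambda f: len(qb & to_bag(f)))

mem = ToyMemory()
mem.write("Sam walked into the kitchen.")
mem.write("Sam picked up the milk.")
mem.write("Sam went to the garden.")
answer = mem.read("Where is the milk?")   # -> "Sam picked up the milk."
```

The real model replaces the word-overlap score with learned embeddings and trains the read operation end to end, but the architectural point is the same: memory lives outside the network's transient activations.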

From LeCun's perspective, unsupervised learning has many advantages because it is modeled on the real world, a world that is uncertain and ever-changing rather than formalized and invariant; supervised machine learning therefore cannot be applied to every objective. How to build unsupervised learning that does not rely on labeled samples is the question that holds LeCun's attention. In Greek mythology, Sisyphus returns to his rock, and his sorrow begins anew. Now, confronting learning without labeled samples, LeCun is the Sisyphus of the unsupervised learning field. Can he roll the stone up the hill of his expectations? Only time will tell.

Report Members: Jiaxin Su, Rita Chen, Chain Zhang | Editor: Synced