Language Understanding with Memory Networks
Despite recent advances in AI, deep understanding of natural language by machines remains highly challenging. Antoine Bordes, Research Scientist at Facebook Artificial Intelligence Research (FAIR), is working to change this with “memory networks”.
Memory Networks is an attention-based neural network architecture that reads from an external symbolic memory component to perform reasoning. It achieves strong performance on a range of question answering and dialogue management tasks, and appears to be a promising avenue towards better machine comprehension of language.
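The core operation of this architecture, an attention-weighted read over external memory slots, can be sketched in a few lines. The sketch below is an illustrative simplification, not the actual Memory Networks implementation; the function and variable names are my own, and the embeddings are random placeholders standing in for learned ones.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, memory_keys, memory_values):
    """One attention-based read over an external memory.

    query: (d,) embedded question vector
    memory_keys, memory_values: (n, d) embedded memory slots
    Returns the attention-weighted combination of memory values.
    """
    scores = memory_keys @ query      # match the query against each slot
    weights = softmax(scores)         # soft attention distribution over slots
    return weights @ memory_values    # weighted read-out from memory

# Toy example: 3 memory slots with 4-dimensional embeddings
# (random vectors here; in a trained model these would be learned)
rng = np.random.default_rng(0)
keys = rng.normal(size=(3, 4))
values = rng.normal(size=(3, 4))
q = rng.normal(size=4)

out = memory_read(q, keys, values)
print(out.shape)  # (4,)
```

In the full architecture this read step is typically stacked over multiple "hops", feeding each read-out back into the next query so the network can chain facts together when reasoning.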
At the Machine Intelligence Summit in New York on 2–3 November, Antoine will give a presentation on Language Understanding with Memory Networks. I asked him some questions ahead of the summit to learn more about the recent advancements in AI and what we can expect next.
What are you working on at Facebook AI Research?
I work at FAIR, the AI research lab of Facebook. Our mission is to advance the development of AI-based technologies. We have the freedom to work on ambitious long-term research programs, much like academic labs do, but with Facebook’s unique infrastructure and scale. For technology transfer, we work closely with Facebook’s Applied Machine Learning group, which is in charge of applying AI at scale within Facebook products. My line of work centers on designing neural networks that can learn to understand language, for building question answering and dialogue systems.
What do you feel are the leading factors enabling recent advancements in AI?
Many of the recent breakthroughs are related to Deep Learning and use algorithms that have been known for a while. Their excellent performance has been made possible jointly by the huge increase in computing power (with the development of GPU computing, for instance) and by the creation of large labeled datasets (such as ImageNet for image recognition). Both factors made the training of very large deep neural networks possible and successful. The focus has now shifted to developing new neural network architectures and training mechanisms to tackle the many remaining challenges, such as language understanding and unsupervised learning.
What present or potential future applications of machine intelligence excite you most?
The new frontier of AI right now lies in machines understanding the world and natural language. We have yet to discover how to teach machines to communicate with humans. This is extremely hard, and seems possible only if machines can understand how the surrounding world works. This involves many factors, from grasping the laws that drive the motion of objects to sensing why teenagers and baby boomers react differently, for instance.
What developments can we expect to see in machine intelligence in the next 5 years?
I think that dialogue systems and more generally language-based interfaces will get much better. I also expect that those interfaces will be personalized for each user and will act as trusted assistants to navigate the digital world.
Other speakers at the summit include Kamelia Aryafar, Senior Data Scientist, Etsy; Avneesh Saluja, Machine Learning Scientist, Airbnb; Tara Sainath, Senior Research Scientist, Google; Kathryn Hume, President, Fast Forward Labs; and Siddartha Dalal, Chief Data Scientist, AIG.