Summary of the 2nd “Continual Learning” Workshop at NeurIPS 2018
Highlights, Open Issues and Future Directions
Continual Learning is an exciting topic that has been getting more and more attention lately in the AI community. While the “Continual Learning” workshop was born at NeurIPS in 2016, it did not take place in 2017; its second and most successful edition was held on December 7th, 2018, just a few days ago, with more than 400 attendees and more than 80 submissions! 😮 🔥
It is worth mentioning that while this is undoubtedly one of the most important and successful workshops on the topic to date, many others have been proposed in the last few years on similar themes (sorry if I missed some of them), making clear the strong and growing interest in these ideas:
- 2nd Workshop on Lifelong Learning: A Reinforcement Learning Approach @ ICML/IJCAI 2018
- 1st Workshop on Continual Unsupervised Sensorimotor Learning @ ICDL-EpiRob 2018
- 1st Workshop on Lifelong Learning: A Reinforcement Learning Approach @ ICML 2017
- Lifelong Machine Learning and Computer Reading the Web @ KDD-2016
- Tutorial: Lifelong Machine Learning in the Big Data Era @ IJCAI 2015
In this brief blog post, I’ll try to summarize, from my own point of view and with no guarantee of completeness, the highlights of the 2nd edition of the CL workshop, especially for those who could not attend this year! 😃 🚀
The Focus of the Workshop
In the 1st edition of the workshop, the focus was on defining a complete list of desiderata: what a continual learning (CL) enabled system should be able to do. The 2nd edition focused on:
- how to evaluate CL methods
- how CL compares with related ideas (e.g., life-long learning, never-ending learning, transfer learning, meta-learning) and how advances in these areas could be useful for continual learning.
Talks / Invited talks
The workshop started with an amazing introduction by Mark Ring on his definition of Continual Learning, which he proposed back in the 90s with his pioneering dissertation “Continual Learning in Reinforcement Environments” and continues to develop today in the context of his Silicon Valley startup Cogitai!
He was followed by three spotlight talks by Razvan Pascanu, Natalia Díaz Rodríguez and Matthew Riemer. The first two focused on benchmarks and evaluation of CL, and the third on the often-ignored trade-off between transfer and interference of knowledge in systems that learn continually.
In the first invited talk, Chelsea Finn provided an extensive overview of the “meta-learning” perspective on Continual Learning, quickly followed by inspiring talks by Raia Hadsell and Marc’Aurelio Ranzato. I was impressed by Raia’s determined bet on Continual Learning as the true path towards AGI (I agree 😃). Marc’Aurelio, instead, focused on the great importance of efficiency as a fundamental algorithmic feature we cannot ignore in general, but especially in CL.
After lunch and an amazing poster session, the workshop resumed with the invited talks by Juergen Schmidhuber and Yarin Gal. I was impressed by the presence of Geoffrey Hinton during Juergen’s talk, which, as you can imagine, was amazing! Yarin focused again on the robust evaluation of CL, with great insights also contained in his recent paper “Towards Robust Evaluations of Continual Learning”.
The workshop proceeded with three spotlight talks by Siddharth Swaroop, Rahaf Aljundi and Shagun Sodhani (who highlighted several interesting aspects in Continual Learning when working with variational, regularization and sequence learning methods) and a final presentation by Martha White with an incredible delivery!
After the talks, and at the end of the workshop, an extensive (more than 2 hrs 😮) panel was held. The panel included Yee Whye Teh, Rich Sutton, Martha White, Juergen Schmidhuber, Marc’Aurelio Ranzato and Chelsea Finn.
Many questions were answered in an informal (we had a lot of fun 😆) but exciting and super informative Q&A session! Among the most important themes emerging from the talks and during the panel, I would highlight two:
- What is Continual Learning and how is it related to RL?
- How to evaluate Continual Learning algorithms?
These pretty much followed the focus of this 2nd workshop edition, and they still represent two major issues on which the community has yet to reach a common agreement.
Nomenclature and definition debate
As I already discussed in my previous blog post “Why Continual Learning is the key towards Machine Intelligence” in 2017 (see nomenclature section), the term “Continual Learning” may assume different meanings for different researchers. This was essentially due to the lack of a formal definition for many years and very similar concepts floating around under different names and aiming at different goals.
For Richard Sutton, Mark Ring and Juergen Schmidhuber, CL is essentially a sub-part of Reinforcement Learning (see formal definition proposed by Mark Ring in the paper “Toward a Formal Framework for Continual Learning”, 2005).
For others, like Raia Hadsell, Razvan Pascanu and Marc’Aurelio Ranzato, it is essentially a synonym for Lifelong Learning and can be seen as orthogonal to the supervised, unsupervised and reinforcement learning paradigms: simply a set of properties and desiderata for systems that never stop learning. Convergence still seems far from being reached.
Given the novelty of the topic, especially in the deep learning community, many CL papers in the supervised and reinforcement learning context evaluate newly proposed techniques with different benchmarks/environments, metrics and evaluation schemes.
Even though a common setting has yet to emerge in this context as well, many of the panelists and workshop attendees seemed to agree on the need for a very large set of benchmarks and more varied evaluation metrics for assessing Continual Learning under different scenarios, taking into account several factors worth considering for real systems that learn continually (that’s why we proposed CORe50 and new metrics that go beyond assessing catastrophic forgetting).
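To make the “beyond catastrophic forgetting” point a bit more concrete, here is a minimal sketch of two metrics commonly used in the CL literature, average accuracy (ACC) and backward transfer (BWT), as proposed by Lopez-Paz and Ranzato in the GEM paper. The accuracy matrix `R` below is a hypothetical example, where `R[i][j]` is the test accuracy on task `j` after the model has finished training on task `i`:

```python
# Sketch (not any specific library's API): continual-learning metrics
# computed from a task-accuracy matrix R, where R[i][j] is the test
# accuracy on task j after training on tasks 0..i in sequence.

def acc(R):
    """Average accuracy over all tasks after training on the last task."""
    T = len(R)
    return sum(R[T - 1][j] for j in range(T)) / T

def bwt(R):
    """Backward transfer: how training on later tasks changed accuracy
    on earlier ones. Negative values indicate catastrophic forgetting."""
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

# Hypothetical 3-task run: accuracy on earlier tasks degrades a bit
# as new tasks are learned.
R = [
    [0.90, 0.10, 0.10],
    [0.70, 0.85, 0.10],
    [0.60, 0.70, 0.80],
]
print(acc(R))  # ≈ 0.70
print(bwt(R))  # ≈ -0.225 (some forgetting)
```

A single ACC number can hide forgetting entirely, which is exactly why reporting complementary metrics like BWT (and others covering memory, compute and sample efficiency) matters for real continually-learning systems.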
The 2nd edition of the “Continual Learning” workshop was an amazing opportunity to connect with people working on this fast-emerging topic that many of us consider fundamental for the future of AI.
I think events like this are of paramount importance to exchange ideas, fuel progress and foster a shared understanding of the open issues and possible solutions in this field.
I’d like to deeply thank Mark Ring, Razvan Pascanu, Yee Whye Teh, Marc Pickett, the organizers of the workshop, and all the program committee members who made this possible.
I really hope to participate in more events like this in the future. In the meantime, I invite you to join ContinualAI.org: an open community (soon to become an official non-profit research organization) of more than 230 researchers and enthusiasts working together on the topic of Continual Learning and on many open-source projects! Join us on Slack today! 😄 🍻
If you’d like to see more posts about my work on AI and Continual/Lifelong Deep Learning, follow me on Medium and on social media: LinkedIn, Twitter and Facebook! If you want to get in touch, or you just want to know more about me and my research, visit my website vincenzolomonaco.com or leave a comment below! 😃