An End-to-End Library for Continual Learning Introduced By ContinualAI

Gayan Samuditha · Expo-MAS · Dec 19, 2021

A research and development team from ContinualAI, including researchers from KU Leuven, ByteDance AI Lab, the University of California, New York University, and other institutions, has proposed Avalanche, an end-to-end library for continual learning built on PyTorch.

Avalanche was born within ContinualAI with a clear goal in mind:

Pushing Continual Learning to the next level, providing a shared and collaborative library for fast prototyping, training and reproducible evaluation of continual learning algorithms.

Like a powerful avalanche, a continual learning agent incrementally improves its knowledge and skills over time, building on those previously acquired and learning how to interact with the external world.

We hope Avalanche may trigger the same positive reinforcement loop within our community, moving towards a more collaborative and inclusive way of doing research and helping us tackle bigger problems faster and better, together!

Albert Einstein once said that “wisdom is not a product of schooling, but the lifelong attempt to acquire it.” Centuries of human progress have been built on our brains’ ability to continually acquire, fine-tune and transfer knowledge and skills. Such continual learning however remains a long-standing challenge in machine learning (ML), where the ongoing acquisition of incrementally available information from non-stationary data often leads to catastrophic forgetting problems.
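Catastrophic forgetting is easy to reproduce in miniature. The following toy sketch (a hypothetical example of mine, not taken from the Avalanche paper) trains a one-parameter linear model on two tasks in sequence with plain gradient descent; after fine-tuning on the second task, the loss on the first task blows up again:

```python
# Toy illustration of catastrophic forgetting: a single-parameter
# linear model y = w * x, trained sequentially on two tasks with
# naive gradient descent, forgets the first task.

def sgd(w, data, lr=0.1, steps=200):
    """Minimise mean squared error of y = w * x via gradient descent."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # target slope  2
task_b = [(x, -1.0 * x) for x in (1.0, 2.0, 3.0)]  # target slope -1

w = 0.0
w = sgd(w, task_a)
loss_a_before = mse(w, task_a)   # near zero: task A has been learned

w = sgd(w, task_b)               # naive fine-tuning on task B
loss_a_after = mse(w, task_a)    # large again: task A is forgotten

print(loss_a_before, loss_a_after)
```

Continual learning methods aim to keep `loss_a_after` low while still learning task B; naive fine-tuning, as above, does not.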


Gradient-based deep architectures have spurred the development of continual learning in recent years, but continual learning algorithms are often designed and implemented from scratch with different assumptions, settings, and benchmarks, making them difficult to compare, port, or reproduce.

This motivated the ContinualAI team and its collaborators to propose Avalanche, an end-to-end library for continual learning based on PyTorch.

The Avalanche Advantage

Shared & Coherent Codebase: Aren’t you tired of reinventing the wheel in continual learning? We are. Reproducing paper results has always been daunting in machine learning, and it is even more so in continual learning. Avalanche lets you stop rewriting your (and other people's) code over and over, offering a coherent, shared codebase that already provides all the utilities, benchmarks, metrics, and baselines you may need for your next great continual learning research project!

Error Reduction: The more code we write, the more bugs we introduce. This is the rule, not the exception. Avalanche lets you focus on what really matters: defining your CL solution. Everything from benchmark preparation to training, evaluation, and comparison with other methods is already there for you. This, in turn, massively reduces the number of errors introduced and the time needed to debug your code.

Faster Prototyping: As researchers or data scientists, we have dozens of ideas every day, and never enough time to execute them. Yet most of the time spent bringing our ideas to life goes into installing software, preparing and cleaning data, setting up the experiment code infrastructure, and so on. Avalanche lets you focus on the original algorithmic proposal and takes care of most of the rest!

Improved Reproducibility & Portability: One of Avalanche’s great features is the ability to reproduce experimental results easily and on any OS. Researchers can simply plug their algorithm into the codebase and see how it fares against other researchers’ methods. Their algorithm, in turn, serves as a baseline for other methods, creating a virtuous circle. This is only possible thanks to the simple yet powerful idea of providing shared benchmarks, training, and evaluation in a single place.

Improved Modularity: Avalanche has been designed with modularity in mind. As you learn more about Avalanche, you will notice we have sometimes forgone simplicity in favor of modularity and reusability (we hate code replication as much as you do 🤪). We believe this will help us scale in the near future as we collaboratively bring the codebase to maturity.

Increased Efficiency & Scalability: Full-stack researchers and data scientists know this: making an algorithm memory- and compute-efficient is tough. Avalanche is already optimized for you, so you can run your ImageNet continual learning experiment on an 8 GB laptop (buy a cooling fan 💨) or even try it on the embedded devices of your latest product!


Research Paper:

The researchers summarize their main contributions as follows:

  1. Propose a general continual learning framework that provides the conceptual foundation for Avalanche.
  2. Discuss the general design of the library, based on five main modules: Benchmarks, Training, Evaluation, Models, and Logging.
  3. Release the open-source, collaboratively maintained project on GitHub, the result of a collaboration involving over 15 organizations across Europe, the United States, and China.
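To make the five-module split concrete, here is a minimal, self-contained sketch in plain Python. Every name in it (Experience, Benchmark, NaiveStrategy, PrintLogger) is illustrative only and is NOT Avalanche's real API; it simply shows how a benchmark stream, a training strategy, an evaluation metric, and a logger plug together, and why naive fine-tuning forgets earlier tasks:

```python
class Experience:
    """One step of the stream: a task id plus its (input, label) pairs."""
    def __init__(self, task_id, data):
        self.task_id = task_id
        self.data = data

class Benchmark:
    """Benchmarks module (illustrative): wraps tasks as a train stream."""
    def __init__(self, tasks):
        self.train_stream = [Experience(i, t) for i, t in enumerate(tasks)]

class PrintLogger:
    """Logging module (illustrative): reports metric values."""
    def log(self, name, value):
        print(f"{name}: {value:.2f}")

class NaiveStrategy:
    """Training module (illustrative): 'naive' fine-tuning that simply
    memorises the latest labels, so later tasks overwrite earlier ones."""
    def __init__(self, logger):
        self.table = {}          # stand-in for model parameters
        self.logger = logger

    def train(self, experience):
        for x, y in experience.data:
            self.table[x] = y    # later tasks overwrite earlier ones

    def eval(self, experience):  # Evaluation module, inlined for brevity
        hits = sum(self.table.get(x) == y for x, y in experience.data)
        acc = hits / len(experience.data)
        self.logger.log(f"accuracy/task{experience.task_id}", acc)
        return acc

# Two tasks that disagree on input "a": task 1 relabels it.
benchmark = Benchmark([[("a", 0), ("b", 0)], [("a", 1), ("c", 1)]])
strategy = NaiveStrategy(PrintLogger())
for exp in benchmark.train_stream:
    strategy.train(exp)
accs = [strategy.eval(exp) for exp in benchmark.train_stream]
```

After training on both tasks in sequence, accuracy on task 0 has dropped (the relabelled input is now wrong) while task 1 is perfect, which is exactly the gap a continual learning strategy, plugged in where NaiveStrategy sits, is meant to close.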

Avalanche’s design is based on five principles:

1) Comprehensiveness and Consistency
2) Ease-of-Use
3) Reproducibility and Portability
4) Modularity and Independence
5) Efficiency and Scalability

Comprehensiveness means providing an exhaustive and unifying library with end-to-end support for continual learning. A comprehensive codebase gives researchers and practitioners a single, clear access point, enables coherent and easy interaction across modules and sub-modules, and promotes the consolidation of a large community able to support the library.


To improve Avalanche’s ease of use, the researchers provide an intuitive Application Programming Interface (API), an official website, and rich documentation with comprehensive explanations and executable notebook examples.

GitHub

API Documentation

Avalanche enables researchers to easily integrate their own research into a shared codebase, compare their solution with previous results, and speed up the development process, securing both reproducibility and portability. Regarding modularity and independence, Avalanche guarantees the stand-alone usability of individual module functionalities and makes it easy to learn a particular tool.

