Novel Hybrid Continual Learning Algorithm Counters Agent Forgetfulness

Synced · Published in SyncedReview · Mar 25, 2020

A person who can play the guitar can draw on some of those core skills when learning to play a zither. Continual learning is designed to give machines a similar ability: learning new tasks without forgetting previously learned ones. When a new task is encountered, general strategies supply the basic skills on which task-specific learning builds.

Researchers aim to train artificial learning agents to perform tasks sequentially under different conditions by developing both task-specific and task-invariant skills. Existing approaches, however, do not scale well to a large number of tasks because of the limited amount of memory that can be allotted to each task.

A team from Facebook AI Research and UC Berkeley recently introduced Adversarial Continual Learning (ACL), a novel hybrid continual learning algorithm that combines architectural growth with a small experience-replay memory rather than relying on the explicit or implicit replay of large numbers of stored original samples. ACL learns a task-specific or private latent space for each task and a task-invariant or shared feature space for all tasks, improving both knowledge transfer and recall of previous tasks. The model uses architectural growth to prevent the forgetting of task-specific skills, and an experience replay approach to preserve shared skills.
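
To make this concrete, below is a minimal PyTorch-style sketch of such a factorized model: one shared encoder reused across all tasks, plus a compact private encoder and classifier head added for each new task. The module names, layer sizes, and dimensions are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a factorized continual-learning model: one shared
# (task-invariant) encoder reused across tasks, plus a small private
# (task-specific) encoder and classifier head added per task.
# Sizes and layer choices are assumptions, not the paper's exact design.

class FactorizedModel(nn.Module):
    def __init__(self, input_dim=784, shared_dim=128, private_dim=32):
        super().__init__()
        self.shared = nn.Sequential(          # task-invariant feature space
            nn.Linear(input_dim, 256), nn.ReLU(), nn.Linear(256, shared_dim)
        )
        self.private = nn.ModuleList()        # one compact module per task
        self.heads = nn.ModuleList()          # one classifier head per task
        self.input_dim = input_dim
        self.shared_dim = shared_dim
        self.private_dim = private_dim

    def add_task(self, num_classes):
        # Architectural growth: a new private encoder and head for the new task.
        self.private.append(nn.Sequential(
            nn.Linear(self.input_dim, 64), nn.ReLU(),
            nn.Linear(64, self.private_dim)
        ))
        self.heads.append(
            nn.Linear(self.shared_dim + self.private_dim, num_classes)
        )

    def forward(self, x, task_id):
        z_shared = self.shared(x)               # features shared by all tasks
        z_private = self.private[task_id](x)    # features specific to this task
        return self.heads[task_id](torch.cat([z_shared, z_private], dim=1))
```

Because only the small private branch and head grow with each task, the per-task memory cost stays low, which is what allows this style of approach to scale to many tasks.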

Adversarial Continual Learning (ACL) algorithm

Catastrophic forgetting can occur when representations learned through a series of tasks change to facilitate the learning of the current task, which leads to performance degradation. The ACL method breaks the conventional single representation learned over a series of tasks into two parts: task-specific (private) features and a core structure of features shared across all tasks.

To prevent the catastrophic forgetting of task-specific features, ACL stores them in compact modules that can be kept in memory. If the factorization is successful, the shared core structure remains largely immune to forgetting. However, the researchers found empirically that the representations can never be fully disentangled, either because there is too little overlap between tasks or because the domain shift between them is too large. Using a tiny replay buffer containing a small number of old data samples therefore helps retain higher accuracy and reduce forgetting.
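
A rough sketch of such a tiny replay buffer is shown below. The per-task sample count and the uniform sampling scheme are assumptions for illustration, not the paper's exact replay strategy.

```python
import random

# Minimal sketch of a tiny replay buffer: keep only a handful of samples per
# past task and mix them into each new training batch so the shared module
# does not drift. Buffer size and sampling scheme are illustrative.

class TinyReplayBuffer:
    def __init__(self, samples_per_task=5):
        self.samples_per_task = samples_per_task
        self.storage = {}  # task_id -> list of (x, y) pairs

    def add_task_samples(self, task_id, dataset):
        # After finishing a task, store only a few of its examples.
        self.storage[task_id] = random.sample(list(dataset), self.samples_per_task)

    def sample(self, batch_size):
        # Draw a small batch of old examples across all stored tasks.
        pool = [item for samples in self.storage.values() for item in samples]
        if not pool:
            return []
        return random.sample(pool, min(batch_size, len(pool)))
```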

The method was evaluated on commonly used benchmark datasets for T-split class-incremental learning, and established a new state of the art on 20-Split miniImageNet, 5-Datasets, 20-Split CIFAR100, Permuted MNIST, and 5-Split MNIST. The results show that adversarial learning, together with orthogonality constraints, can disentangle shared and private latent representations, so that compact private modules can be stored in memory to effectively prevent forgetting.
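
Very roughly, the two ingredients mentioned here could be written as the losses below: a task discriminator that tries to guess which task the shared features came from (with the shared encoder trained adversarially to make that guess uninformative), and an orthogonality penalty that pushes shared and private features apart. The specific formulations, symbol names, and weightings are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

# Sketch of the two regularizers discussed above (illustrative only).
# A task discriminator predicts the task id from shared features; the shared
# encoder is trained to make its output uninformative about the task, which
# encourages task-invariant features. An orthogonality penalty discourages
# the shared and private features from encoding the same information.

def adversarial_losses(discriminator, z_shared, task_id, num_tasks):
    logits = discriminator(z_shared)                   # (batch, num_tasks)
    target = torch.full((z_shared.size(0),), task_id,
                        dtype=torch.long, device=z_shared.device)
    d_loss = F.cross_entropy(logits, target)           # discriminator: guess the task
    uniform = torch.full_like(logits, 1.0 / num_tasks)
    g_loss = F.kl_div(F.log_softmax(logits, dim=1),     # encoder: make shared features
                      uniform, reduction="batchmean")   # uninformative about the task
    return d_loss, g_loss

def orthogonality_loss(z_shared, z_private):
    # Penalize overlap between shared and private features via the squared
    # Frobenius norm of their cross-correlation, pushing the two subspaces
    # toward orthogonality.
    zs = F.normalize(z_shared, dim=1)
    zp = F.normalize(z_private, dim=1)
    return (zs.t() @ zp).pow(2).sum()
```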

The paper Adversarial Continual Learning is on arXiv.

Author: Xuehan Wang | Editor: Michael Sarazen

To highlight the contributions of women in the AI industry, Synced introduces the Women in AI special project this month and invites female researchers from the field to share their recent research and the stories behind it. Join our conversation by clicking here.

Thinking of contributing to Synced Review? Synced’s new column Share My Research welcomes scholars to share their own research breakthroughs with global AI enthusiasts.

We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.

Need a comprehensive review of the past, present and future of modern AI research development? Trends of AI Technology Development Report is out!

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.
