Unsupervised Meta-Learning Is All You Need

How to design meta-learning approaches for (1) unlabeled tasks or (2) unsupervised learning algorithms?

James Le
22 min read · Sep 18, 2020


Update: This post is part of a blog series on Meta-Learning that I’m working on. Check out part 1, part 2, and part 3.

Introduction

In my previous posts, “Meta-Learning Is All You Need” and “Bayesian Meta-Learning Is All You Need,” we discussed meta-learning only in the supervised setting, where we have access to labeled data and hand-specified task distributions. However, acquiring labeled data for many tasks and manually constructing task distributions are both challenging and time-consuming. Such dependencies place conceptual limits on the types of problems that meta-learning can solve.

Can we design a meta-learning algorithm that handles unlabeled data, one that comes up with its own tasks to prepare for future downstream tasks?

Unsupervised meta-learning algorithms use unlabeled data to tune their learning procedures by proposing their own task distributions. Once trained, a robust unsupervised meta-learner should be able to take labeled data from a new, unseen task, acquire task-specific knowledge from its training set, and generalize well on its test set at inference time.
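
To make this recipe concrete, here is a minimal, hypothetical sketch of the pipeline: cluster unlabeled embeddings into pseudo-classes, sample few-shot tasks from those clusters, and hand those tasks to any few-shot meta-learner. The toy data, cluster count, and the `sample_task` helper are illustrative assumptions on my part, not code from any specific paper; the clustering-based task proposal is merely in the spirit of approaches like CACTUs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for unlabeled embeddings (e.g., from a pretrained encoder).
rng = np.random.default_rng(0)
unlabeled_x = rng.normal(size=(1000, 16))

# Step 1: propose pseudo-classes by clustering the unlabeled data.
pseudo_labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(unlabeled_x)

def sample_task(x, labels, n_way=5, k_shot=1, k_query=5):
    """Sample an N-way, K-shot task by treating clusters as classes."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support += [(x[i], new_label) for i in idx[:k_shot]]
        query += [(x[i], new_label) for i in idx[k_shot:k_shot + k_query]]
    return support, query

# Steps 2-3: meta-train any few-shot learner (MAML, prototypical networks, ...)
# on the proposed tasks; the adapt/update calls below are placeholders.
for step in range(100):
    support, query = sample_task(unlabeled_x, pseudo_labels)
    # meta_learner.adapt(support)   # inner loop on the support set
    # meta_learner.update(query)    # outer loop on the query set

# Step 4 (meta-test): evaluate the trained meta-learner on a real, labeled
# downstream task using the same support/query protocol.
```

Note that k-means here is just one way to assign pseudo-labels; any mechanism that partitions the unlabeled data into discriminable groups could stand in for it.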

This blog post is my attempt to explore the unsupervised side of meta-learning and tackle the most prominent papers that address these two questions.
