Generalizing from Few Examples with Meta-Learning

Interview with Hugo Larochelle

Hugo Larochelle is a Research Scientist on Google Brain’s Montreal team, an Adjunct Professor at Université de Sherbrooke and Université de Montréal, and an Associate Director at the Canadian Institute for Advanced Research (CIFAR). Larochelle co-founded Whetlab (acquired by Twitter) and later worked as a Research Scientist in Twitter’s Cortex group. He received his Ph.D. from Université de Montréal under the supervision of Yoshua Bengio. He is most interested in applications of deep learning to generative modeling, meta-learning, natural language processing, and computer vision.

With The Best had the opportunity to ask him a few questions about his work and expertise in deep learning and neural networks.

Q: How has deep learning changed since you first started as a student?

Well, when I first started as a PhD student, “deep learning” wasn’t even a common expression, so it has changed a lot! We weren’t yet using GPUs to train models and were mostly focused on developing unsupervised pre-training algorithms (RBMs, autoencoders, etc.). There weren’t a lot of mainstream tools for implementing neural networks (Theano didn’t even exist yet) and arXiv wasn’t as popular a medium for science dissemination in deep learning. In fact, the largest machine learning conferences would feature only a handful of papers on neural networks.

Q: What are the biggest challenges with deep learning and neural networks?

One big challenge is developing valuable theories about the properties of neural networks. A lot of our understanding of neural networks comes from experimentation, as opposed to theorems. This makes it harder to reliably deepen our understanding of these methods, since merely glancing at the mathematical formulation of neural networks doesn’t provide as much insight as one might expect. Instead, we must design experiments on many datasets, and even these don’t provide solid guarantees that the resulting insights will generalize to datasets not covered by the experiments.

Q: Earlier this year you released a dataset called GuessWhat?! Do you see the dataset evolving from answering ‘Yes/No’ to answering with full sentences and descriptions?

GuessWhat?! is a great project, for which my collaborators Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin and Aaron Courville deserve a lot of credit as well.

There’s actually already a lot of work on visual question answering that goes beyond “Yes/No” questions. Devi Parikh, Dhruv Batra and collaborators have done a lot of very interesting work there, including on multi-turn visual dialog where responses go beyond “Yes/No”.

So the next challenge for this kind of work might instead be discovering how such datasets and tasks can be used to learn better image and text representations that improve computer vision and natural language understanding systems in general.

Q: What trends do you see happening in the future of deep learning?

I’m particularly excited about few-shot learning, i.e. the problem of designing methods that are able to learn new concepts from a handful of examples (as opposed to many thousands). Right now, deep learning methods based on the idea of meta-learning seem promising. In meta-learning (also referred to as “learning to learn”), the deep learning model is itself a learning algorithm that is trainable end to end. The hope is that by (meta-)training this model on a lot of different few-shot learning tasks, we will be able to learn a procedure that can “understand” new concepts from a handful of examples.
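To make the few-shot setup concrete, here is a minimal sketch of one popular meta-learning instantiation, a prototypical-networks-style classifier, in which each class is represented by the mean (“prototype”) of its few support examples and queries are labeled by the nearest prototype. This is a generic illustration, not a method from the interview; the function name is invented here, and for simplicity the learned embedding network is replaced by the identity map.

```python
import numpy as np

def nearest_prototype_classify(support_x, support_y, query_x):
    """Classify query points by their nearest class prototype.

    A prototype is the mean of a class's support embeddings
    (here the embedding is the identity map, for illustration).
    In a real meta-learning setup, a learned embedding network
    would be meta-trained across many such few-shot episodes.
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of that class's support examples.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from each query to each prototype.
    dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Label each query with the class of its closest prototype.
    return classes[dists.argmin(axis=1)]

# A 1-shot, 2-way episode in 2-D: one support example per class.
support_x = np.array([[0.0, 0.0], [10.0, 10.0]])
support_y = np.array([0, 1])
query_x = np.array([[1.0, 1.0], [9.0, 9.0]])
print(nearest_prototype_classify(support_x, support_y, query_x))
```

Meta-training would repeat episodes like this over many different tasks, updating the embedding so that nearest-prototype classification works well on held-out queries.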

Q: What advice do you have for students and newbies to the deep learning world?

I would recommend finding a good balance between coding and reading papers, as both are important to becoming a strong deep learning researcher. For coding, thankfully there are plenty of open source implementations of deep learning models that one can start from and tinker with. For reading papers, the pace of deep learning research right now makes it important to keep an eye on arXiv preprints as they are made available, instead of waiting for conference proceedings. Thankfully, tools like Andrej Karpathy’s arXiv-sanity (which can recommend papers based on your preferences) and social media (where you can follow researchers who work on the same topics as you) make the task of keeping up with work that’s relevant to your research a bit less daunting.

Finally, to start with, I would suggest perhaps focusing on only one topic in deep learning. The videos from the CIFAR Deep Learning summer schools in Montreal, which are available online, provide a good overview of recent research topics. I’d recommend going over those and picking the research topic that is most inspiring or interesting to you.

Larochelle offers an online deep learning and neural network course, which is free and accessible on YouTube. There’s plenty of time to study his videos before his talk with us on 14–15 October! With The Best is proud to have him as a speaker for AI WTB.
