Few Shot Learning - A Case Study (1)

Maitreya Patel · Analytics Vidhya · Jun 4, 2020 · 3 min read

Not long ago, ML research focused almost exclusively on dataset-specific objectives. For example, if Bob wants to build a classifier that detects cats and dogs, he will collect photos of both classes and train a classifier until it reaches good accuracy on this collected dataset. However, when such a classifier is evaluated (or deployed) on real, unseen images, its performance depends on the initially collected data, and its results degrade. Although transfer learning and pre-training based approaches make training easier, the outcome still depends on the training dataset. Moreover, if Bob wants to add another class (say, Tiger), he will have to retrain the classifier to adapt to the new class. One more thing to note is that collecting all this data is one of the most tedious tasks.

Therefore, we need to think differently. Hence, researchers have recently started working on Few Shot Learning-based approaches to tackle the problems below:

  1. Lack of training data
  2. Support for unseen tasks

What is Few Shot Learning?

In simple terms, few shot learning is the ability to learn a given task from limited data. Children, for example, need only a few images to recognize anything from a butterfly to an elephant.

Therefore, few shot learning is an emerging sub-field of ML research that aims to improve the generalization capability of deep learning models.
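To make the setup concrete, few-shot tasks are usually framed as N-way, K-shot "episodes": the model receives K labelled examples (the support set) for each of N classes and must label new query examples. Below is a minimal Python/NumPy sketch of such an episode; the shapes and the random stand-in "images" are purely illustrative assumptions, not from any particular dataset.

```python
import numpy as np

# A minimal sketch of an N-way, K-shot episode with random stand-in "images".
# All shapes and data here are illustrative assumptions.
N_WAY, K_SHOT, N_QUERY = 5, 3, 2   # 5 classes, 3 support + 2 query examples per class
IMG_DIM = 64 * 64                  # a flattened toy image

rng = np.random.default_rng(0)

def sample_episode():
    """Return a support set (N_WAY * K_SHOT examples) and a query set."""
    support_x = rng.normal(size=(N_WAY * K_SHOT, IMG_DIM))
    support_y = np.repeat(np.arange(N_WAY), K_SHOT)       # labels 0..4, K_SHOT each
    query_x = rng.normal(size=(N_WAY * N_QUERY, IMG_DIM))
    query_y = np.repeat(np.arange(N_WAY), N_QUERY)
    return (support_x, support_y), (query_x, query_y)

(support_x, support_y), (query_x, query_y) = sample_episode()
print(support_x.shape, query_x.shape)  # (15, 4096) (10, 4096)
```

A model is then trained (or adapted) on many such episodes so that it learns to classify the query examples from only the handful of support examples.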

What are its applications?

Few-shot learning has many applications, and there are also many problems where it cannot be applied yet. Currently, many researchers are working on various classification and conversion problems. Let's look at a few examples of such research problems across different areas of Machine Learning.

First is Computer Vision:

  1. Image classification: classifying unseen images from target classes that lie outside the training set (a rough sketch of one common approach follows after this list)
  2. Face recognition: extracting the underlying features of each person without retraining
  3. Image-to-Image Conversion: transferring the style of an unknown image to a seen/unseen content image
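For the image classification case above, one widely used family of methods builds a class prototype from the few support examples and labels each query image by its nearest prototype in an embedding space (in the spirit of Prototypical Networks). The sketch below assumes a stand-in embed function in place of a pretrained backbone; it illustrates the idea and is not the exact method of any specific paper.

```python
import numpy as np

def embed(x):
    """Stand-in for a pretrained feature extractor (e.g. a CNN backbone).
    Here it is just an identity mapping over toy feature vectors."""
    return x

def classify_by_prototype(support_x, support_y, query_x):
    """Label each query by the nearest class prototype (mean support embedding)."""
    classes = np.unique(support_y)
    prototypes = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                           for c in classes])
    dists = np.linalg.norm(embed(query_x)[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy example: 2 classes, 3 support vectors each, 2 queries
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                      [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.05, 0.05], [0.95, 0.95]])
print(classify_by_prototype(support_x, support_y, query_x))  # [0 1]
```

Because only the prototypes change when a new class arrives, such methods can handle classes that were never seen during training without retraining the backbone.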

Second is Speech Technology:

  1. Voice Conversion: changing the speaking style of any source speaker so that it sounds as if a seen/unseen target speaker is speaking
  2. Sound event classification: classifying unobserved sound events, since the number of possible sound events is very large
  3. Emotion Classification: classifying unobserved emotions in speech

Third and last is Natural Language Processing:

  1. Text classification: classifying unobserved text classes (which can be defined by users), since understanding a text class from only a few sentences is quite hard (see the sketch below)
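The same prototype idea carries over to few-shot text classification: embed the few example sentences per user-defined class, average them into a class prototype, and assign a query sentence to the most similar prototype. The sketch below uses a toy bag-of-words embedding as a stand-in for a pretrained text encoder; the class names and example sentences are made up for illustration.

```python
import numpy as np

def build_vocab(sentences):
    """Collect every token seen in the labelled examples."""
    return sorted({tok for s in sentences for tok in s.lower().split()})

def embed_sentence(text, vocab):
    """Toy bag-of-words embedding; in practice a pretrained text encoder would be used."""
    counts = np.array([text.lower().split().count(tok) for tok in vocab], dtype=float)
    norm = np.linalg.norm(counts)
    return counts / norm if norm > 0 else counts

def few_shot_text_classify(examples_per_class, query):
    """Assign the query to the class whose prototype (mean example embedding) is most similar."""
    vocab = build_vocab([s for sents in examples_per_class.values() for s in sents])
    prototypes = {label: np.mean([embed_sentence(s, vocab) for s in sents], axis=0)
                  for label, sents in examples_per_class.items()}
    q = embed_sentence(query, vocab)
    return max(prototypes, key=lambda label: float(q @ prototypes[label]))

# Hypothetical user-defined classes with only two example sentences each
examples = {
    "billing": ["I was charged twice", "refund my payment please"],
    "technical": ["the app crashes on startup", "login page will not load"],
}
print(few_shot_text_classify(examples, "please refund the duplicate charge"))  # billing
```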

In which scenarios is zero-shot learning possible?

First, zero-shot learning is the ability to perform a particular task without seeing any source/target-specific data. Although it sounds exciting, it has very limited applications so far; among the most popular research problems where zero-shot learning has been applied are Voice Conversion, Image-to-Image translation, and Face Recognition.

As few shot learning is one of the most important research topics, I will focus on implementing different methodologies over the next few weeks. Furthermore, I will conduct an extensive analysis to measure the effectiveness of already proposed methods, in terms of accuracy and computational complexity, under the following scenarios:

  1. Modifying the architectures
  2. Implementing them on different datasets, from images to audio

I will be publishing my findings and tutorials every week from now onward, so stay tuned! Join my monthly newsletter here to get updates on current research trends and my blogs.


Maitreya Patel
Analytics Vidhya

Research Enthusiast (Author @ ICASSP, EUSIPCO) | Deep Learning | Computer Vision | Speech Technology