A Primer on Semi-Supervised Learning — Part 2

Neeraj Varshney
Published in Analytics Vidhya · 5 min read · Jun 27, 2020


Semi-Supervised Learning (SSL) is a promising approach to the scarcity of labeled data in Machine Learning. SSL leverages unlabeled data alongside a labeled dataset to learn a task, with the objective of outperforming a supervised model trained on the labeled data alone. This is Part 2 of the article series on Semi-Supervised Learning and covers a few early SSL techniques in detail. Part 3 will cover recent SSL approaches and will be published soon. Part 1 gives an introduction to Semi-Supervised Learning and can be found here.

Photo by Franck V. on Unsplash

Outline Part 2

  1. Consistency Regularization, Entropy Minimization, and Pseudo Labeling
  2. Examples of realistic perturbations in Vision and Language
  3. Approaches for Semi-Supervised Learning
    — Π model
    — Temporal Ensembling
  4. References

Part 3 will cover more recent SSL techniques such as Mean Teacher, Unsupervised Data Augmentation, and Noisy Student, and will be published soon.

Part 1 of this series is available here.

Consistency Regularization, Entropy Minimization, and Pseudo Labeling:
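Before diving in, here is a minimal NumPy sketch of the three ideas as loss terms on unlabeled data. The function names, the MSE form of the consistency term, and the 0.95 confidence threshold are illustrative choices, not taken from the article; concrete methods (e.g., the Π model covered below) combine such terms with a supervised loss.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(p1, p2):
    # Consistency regularization: predictions on two perturbed views
    # of the same input should agree (mean squared difference here).
    return np.mean((p1 - p2) ** 2)

def entropy_loss(p, eps=1e-12):
    # Entropy minimization: low average prediction entropy pushes the
    # model toward confident (low-density-separation) decisions.
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))

def pseudo_label_loss(p, threshold=0.95, eps=1e-12):
    # Pseudo-labeling: treat confident argmax predictions as hard labels
    # and compute cross-entropy against them; ignore uncertain examples.
    conf = p.max(axis=1)
    mask = conf > threshold
    if not mask.any():
        return 0.0
    labels = p.argmax(axis=1)
    return -np.mean(np.log(p[mask, labels[mask]] + eps))

# Toy example: 8 unlabeled examples, 3 classes, two perturbed views.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(8, 3))                          # original view
logits_b = logits_a + rng.normal(scale=0.1, size=(8, 3))    # perturbed view
p_a, p_b = softmax(logits_a), softmax(logits_b)

print(consistency_loss(p_a, p_b))
print(entropy_loss(p_a))
print(pseudo_label_loss(p_a, threshold=0.5))
```

All three terms are computed from model predictions alone, which is what lets them exploit unlabeled data; in practice each is weighted and added to the usual supervised cross-entropy on the labeled subset.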
