Aakash Nain

1.6K Followers

Aug 6, 2020

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

Self-supervised learning, semi-supervised learning, pretraining, self-training, and robust representations are some of the hottest topics right now in Computer Vision and Deep Learning. The recent progress in self-supervised learning is astounding. …

Deep Learning

9 min read



Jun 28, 2020

Rethinking Pre-training and Self-training

In late 2018, researchers at FAIR published the paper “Rethinking ImageNet Pre-training”, which was subsequently presented at ICCV 2019. The paper presented some very interesting results regarding pre-training. I didn’t write a post about it then, but we had a long discussion about it on our KaggleNoobs Slack. Researchers at Google…

Machine Learning

10 min read



Apr 4, 2020

Meta Pseudo Labels

Have you heard of meta-learning? Do you remember the time you used pseudo labeling for a Kaggle competition? What if we combined the two techniques? Continuing the series of posts on semi-supervised learning, today we will discuss the latest research paper that aims to combine meta-learning and pseudo labeling…

Machine Learning

9 min read



Mar 6, 2020

SimCLR: Contrastive Learning of Visual Representations

Self-supervised learning is finally getting all the attention it deserves. From vision-based tasks to Language Modeling, self-supervised learning has paved a new way of learning (much) better representations. This paper, SimCLR, presents a new framework for contrastive learning of visual representations.

Contrastive Learning

Before getting into the details of SimCLR, let’s take…

6 min read



Dec 20, 2019

What we learned from KaggleNoobs!

2019 is coming to an end. The landscape of Data Science and Machine Learning has made great progress. From an overwhelming number of papers to a growing focus on reproducibility and interpretability, this has been an incredible year overall. But today, I am not going to talk about another research paper or…

Machine Learning

7 min read



Nov 28, 2019

EfficientDet: Scalable and Efficient Object Detection

Object Detection has come a long way. From trivial computer vision techniques for object detection to advanced object detectors, the improvements have been amazing. Convolutional Neural Networks (CNNs) have played a huge role in this revolution. We want our detector to be as accurate as possible, as well as fast…

Machine Learning

12 min read



Nov 16, 2019

Self-training with Noisy Student

2019 has been a year in which a lot of research has focused on designing efficient deep learning models, self-supervised learning, learning with a limited amount of data, new pruning strategies, etc. …

Machine Learning

7 min read



Nov 7, 2019

Gate Decorator: Global Filter Pruning

In recent years, we have witnessed the remarkable achievements of CNNs. Iterative improvements on a task require bigger models and more computation. However, the huge memory footprints and computation requirements of these bigger models prevent their deployment on mobile and edge devices. …

Machine Learning

9 min read



Jul 25, 2019

When Does Label Smoothing Help?

In late 2015, a team at Google came up with the paper “Rethinking the Inception Architecture for Computer Vision”, in which they introduced a new technique for robust modelling, termed “Label Smoothing”. Since then, this technique has been used in many state-of-the-art models, including image classification, language…

Machine Learning

6 min read



Jun 7, 2019

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

Since AlexNet won the 2012 ImageNet competition, CNNs (short for Convolutional Neural Networks) have become the de facto algorithms for a wide variety of tasks in deep learning, especially for computer vision. From 2012 to date, researchers have been experimenting and trying to come up with better and better architectures…

Machine Learning

7 min read


Aakash Nain

Research Engineer, Machine Learning