Before starting, let’s get some background on estimators. They fall into two classes:

  1. Parametric
  2. Non-Parametric

Parametric estimators make assumptions about the population from which a sample of data is drawn. Often the assumption is that the population is normally distributed, i.e. bell-shaped. This assumption allows the development of a theory that lets us draw inferences about the population based on a sample taken from it.

The other family is non-parametric estimators. These make no distributional assumptions and impose no fixed structure; the estimate depends on all the data points.
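A rough sketch of the difference, using only NumPy (the sample size and kernel bandwidth here are arbitrary choices for illustration): the parametric estimator reduces the data to two fitted parameters, while the non-parametric kernel density estimate consults every data point to evaluate the density anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=500)  # a sample drawn from the population

# Parametric: assume the population is normal, estimate just its two parameters.
mu_hat, sigma_hat = data.mean(), data.std(ddof=1)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Non-parametric: a Gaussian kernel density estimate — no assumed form;
# the density at each x depends on *all* the data points.
def kde(x, sample, bandwidth=0.5):
    return normal_pdf(x[:, None], sample[None, :], bandwidth).mean(axis=1)

xs = np.linspace(0, 10, 101)
parametric = normal_pdf(xs, mu_hat, sigma_hat)
nonparametric = kde(xs, data)
```

Because this sample really is normal, the two curves nearly coincide; on skewed data the parametric fit would be biased while the KDE would still follow the data.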

Real-world data is not always distributed the way we want, i.e. normally. It usually follows some distribution we know nothing about: sometimes it is skewed to the right, other times it has a long tail. Either way, we miss the normal distribution. Why do we miss the normal distribution, you ask?

The normal distribution is the most important probability distribution in statistics because it fits many natural phenomena. …
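To see what "skewed" means concretely, here is a toy NumPy sketch (the log-normal draw is just an illustrative stand-in for skewed real-world data): a right-skewed sample has its mean pulled above its median and a clearly positive third moment, while a normal sample does not.

```python
import numpy as np

rng = np.random.default_rng(42)

normal_sample = rng.normal(loc=0.0, scale=1.0, size=10_000)      # symmetric, bell-shaped
skewed_sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # long right tail

def skewness(x):
    """Sample skewness: the third standardized moment (about 0 for symmetric data)."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

print(f"normal skewness ≈ {skewness(normal_sample):.2f}")   # close to 0
print(f"skewed skewness ≈ {skewness(skewed_sample):.2f}")   # clearly positive
```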

Autoencoders fall under the class of unsupervised learning. The network tries to reproduce its own input under some constraint, such as squeezing the input through a bottleneck, so that it learns just enough significant features of the input data to reconstruct it with minimal loss.
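A minimal sketch of that idea in plain NumPy (one linear encoder layer and one linear decoder layer; the sizes, learning rate, and toy data are arbitrary choices for illustration): the network is trained to mimic its own input through a 2-unit bottleneck.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional inputs that really live on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them well.
latent_true = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent_true @ mixing

d_in, d_latent = 8, 2                      # the constraint: 8 -> 2 -> 8
W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))
lr, losses = 0.01, []

for _ in range(500):
    Z = X @ W_enc                          # encode: squeeze through the bottleneck
    X_hat = Z @ W_dec                      # decode: try to reconstruct the input
    err = X_hat - X
    losses.append((err ** 2).mean())       # mean squared reconstruction loss
    # Gradient descent on the reconstruction loss.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")  # reconstruction improves
```

Real autoencoders add nonlinearities and deeper stacks, but the training signal is the same: the input is also the target.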


This article is aimed at people who are new to the space of deep learning; only a basic understanding of convolutional neural networks is assumed.

Auto-encoders have three components:

  1. The encoder unit
  2. The latent space
  3. The decoder unit

In the encoder part, the image loses its spatial dimensions while the network tries to learn the significant features of the underlying data. …
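The shrinking of spatial dimensions can be sketched with NumPy, using 2×2 average pooling as a stand-in for the strided convolutions an encoder typically uses (the 28×28 input size is an arbitrary MNIST-like example):

```python
import numpy as np

def downsample(img):
    """Halve each spatial dimension with 2x2 average pooling (a stand-in for
    the strided convolutions of a real convolutional encoder)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.random.default_rng(0).random((28, 28))   # e.g. an MNIST-sized image
shapes = [img.shape]
for _ in range(2):                                # two encoder stages
    img = downsample(img)
    shapes.append(img.shape)

print(shapes)  # [(28, 28), (14, 14), (7, 7)] — shrinking toward the latent space
```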
