Before starting, let’s get some background on **estimators**. They are classified into two classes:

**Parametric** and **Non-Parametric**

**Parametric** estimators make assumptions about the population from which a sample of data is drawn. Often the assumption is that the population is normally distributed, i.e. bell-shaped. This assumption allows the development of a theory that lets us draw inferences about the population based on a sample taken from it.

The other family is **Non-Parametric** estimators: they make no distributional assumptions, impose no fixed structure, and rely on all the data points to reach an estimate. …
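The contrast above can be sketched on a small sample: a parametric estimate reduces the data to the two parameters of an assumed normal, while a non-parametric estimate (here a histogram density) keeps no fixed form and uses every point. This is a minimal illustration; the sample size, parameters, and bin count are assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=1000)  # assumed toy data

# Parametric: assume normality, so two numbers summarise the population.
mu_hat = sample.mean()
sigma_hat = sample.std(ddof=1)

def normal_pdf(x, mu, sigma):
    # Density of the fitted normal at x.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Non-parametric: a histogram density makes no shape assumption and
# depends on all the data points.
counts, edges = np.histogram(sample, bins=30, density=True)

print(round(mu_hat, 1), round(sigma_hat, 1))
```

With enough data the two estimates agree near the centre of the sample; they diverge when the normality assumption is wrong, which is exactly the trade-off described above.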

Real-world data is not always distributed the way we want, that is, as a **Normal Distribution**. It usually follows some distribution we know little about: sometimes it is skewed to the right, other times it has a long tail. Either way, we miss the normal distribution. Why do we miss it, you ask?

The normal distribution is the most important probability distribution in statistics because it fits many natural phenomena. …

Autoencoders fall under the class of unsupervised learning, where a function tries to mimic its own input under constraints: the input is pushed through a bottleneck so that the network learns just enough significant features of the data to reconstruct it with minimal loss.

Audience

People who are new to the space of deep learning

Prerequisite

Basic understanding of convolutional neural networks

Auto-encoders have three components:

1. The encoding unit
2. The latent space
3. The decoder unit
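The three components can be sketched with a toy linear auto-encoder; the layer sizes (784 → 32 → 784) and random weights are illustrative assumptions, not values from the article, and no training is performed here.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 784))                         # a flattened 28x28 "image"

W_enc = rng.standard_normal((784, 32)) * 0.01    # encoder weights
W_dec = rng.standard_normal((32, 784)) * 0.01    # decoder weights

latent = x @ W_enc       # 1) encoding unit: compress the input
z = latent               # 2) latent space: the 32-dim bottleneck code
x_hat = z @ W_dec        # 3) decoder unit: reconstruct the input

print(x.shape, latent.shape, x_hat.shape)
```

In practice each arrow is a stack of (convolutional) layers with nonlinearities, and the weights are trained to minimise the reconstruction loss between `x` and `x_hat`.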

In the encoder part, the image is losing its spatial dimensions while the network tries to learn the significant parts of the underlying data. …
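The shrinking of spatial dimensions in the encoder can be sketched with a stride-2 max-pool, one common downsampling step in convolutional encoders; the 28×28 input size and two pooling stages are assumptions for illustration.

```python
import numpy as np

def max_pool_2x2(img):
    # Stride-2 2x2 max-pooling: halves height and width.
    h, w = img.shape
    trimmed = img[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(28 * 28, dtype=float).reshape(28, 28)
for stage in range(2):
    img = max_pool_2x2(img)
    print(img.shape)  # (14, 14) after stage 1, (7, 7) after stage 2
```

Each stage keeps only the strongest activation in every 2×2 window, which is one way the encoder discards spatial detail while retaining salient features.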