How Number of Hidden Layers Affects the Quality of Autoencoder Latent Representation
Hyperparameter tuning in autoencoders — Part 1
Introduction
You may already know that an autoencoder's latent representation captures the most important features of the input data when its dimension is significantly lower than that of the input.
The quality of the autoencoder latent representation depends on many factors: the number of hidden layers, the number of nodes in each layer, the dimension of the latent vector, the type of activation function in the hidden layers, the type of optimizer, the learning rate, the number of epochs, the batch size, and so on. Technically, these factors are called the autoencoder model's hyperparameters.
Finding the best values for these hyperparameters is called hyperparameter tuning. There are different hyperparameter tuning techniques available in machine learning. One simple technique is to manually tune a single hyperparameter (here, the number of hidden layers) while keeping all other hyperparameter values fixed, as in the sketch below.
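To make this concrete, here is a minimal Keras sketch of that one-at-a-time strategy. The input dimension (784), latent dimension (16), layer widths, and the helper name `build_autoencoder` are illustrative assumptions, not fixed by this article; only the number of hidden layers varies between experiments.

```python
from tensorflow.keras import layers, Model

def build_autoencoder(n_hidden_layers, input_dim=784, latent_dim=16):
    """Symmetric autoencoder; only `n_hidden_layers` varies per experiment."""
    # Hidden-layer widths shrink from the input toward the latent vector,
    # e.g. n_hidden_layers=2 -> [392, 196] on the encoder side (an
    # assumed sizing scheme, chosen here only for illustration).
    widths = [input_dim // (2 ** (i + 1)) for i in range(n_hidden_layers)]

    inputs = layers.Input(shape=(input_dim,))
    x = inputs
    for w in widths:                       # encoder hidden layers
        x = layers.Dense(w, activation="relu")(x)
    latent = layers.Dense(latent_dim, activation="relu", name="latent")(x)
    x = latent
    for w in reversed(widths):             # mirrored decoder hidden layers
        x = layers.Dense(w, activation="relu")(x)
    outputs = layers.Dense(input_dim, activation="sigmoid")(x)

    model = Model(inputs, outputs)
    # Optimizer, loss, epochs, batch size, etc. are held constant across
    # experiments so that only the network depth changes.
    model.compile(optimizer="adam", loss="mse")
    return model

# One model per setting of the hyperparameter under study:
for n in (1, 2, 3):
    model = build_autoencoder(n)
    print(n, "hidden layer(s):", model.count_params(), "parameters")
```

Because everything else is held fixed, any difference in reconstruction quality across these models can be attributed to the depth alone.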
Today, in this special episode, I will show you how the number of hidden layers affects the quality of the autoencoder latent representation.