
How Number of Hidden Layers Affects the Quality of Autoencoder Latent Representation

6 min read · Aug 23, 2022


Photo by Clark Van Der Beken on Unsplash

Introduction

You may already know that an autoencoder's latent representation captures the most important features of the input data when its dimension is significantly lower than that of the input data.

The quality of the autoencoder latent representation depends on many factors, such as the number of hidden layers, the number of nodes in each layer, the dimension of the latent vector, the type of activation function in the hidden layers, the type of optimizer, the learning rate, the number of epochs and the batch size. Collectively, these factors are called autoencoder model hyperparameters.
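To make that list concrete, here is a minimal sketch of where each hyperparameter appears in code. It assumes a simple dense autoencoder built with Keras on flattened 28×28 images; all of the values below are illustrative placeholders, not the settings used later in this article.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative hyperparameter values (assumptions, not this
# article's final settings)
input_dim = 784       # e.g. a flattened 28x28 grayscale image
latent_dim = 32       # dimension of the latent vector
hidden_units = 128    # number of nodes in each hidden layer
activation = "relu"   # activation function in the hidden layers

# Encoder: input -> hidden layer(s) -> latent vector
encoder = models.Sequential([
    layers.Dense(hidden_units, activation=activation,
                 input_shape=(input_dim,)),
    layers.Dense(latent_dim, activation=activation),
])

# Decoder: latent vector -> hidden layer(s) -> reconstruction
decoder = models.Sequential([
    layers.Dense(hidden_units, activation=activation,
                 input_shape=(latent_dim,)),
    layers.Dense(input_dim, activation="sigmoid"),
])

autoencoder = models.Sequential([encoder, decoder])

# The optimizer, learning rate, epochs and batch size are set at
# compile/fit time
autoencoder.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="mse",
)
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=256)
```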

Obtaining the best values for these hyperparameters is called hyperparameter tuning, and there are several hyperparameter tuning techniques available in machine learning. One simple technique is to manually tune one hyperparameter (here, the number of hidden layers) while keeping all the other hyperparameter values unchanged, as sketched below.
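As a rough illustration of that one-at-a-time approach, the sketch below wraps the model definition in a hypothetical build_autoencoder helper and varies only n_hidden_layers while holding every other hyperparameter fixed. The helper name and all values are my own placeholders, not code from this article.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(n_hidden_layers, input_dim=784, latent_dim=32,
                      hidden_units=128, activation="relu"):
    """Dense autoencoder with n_hidden_layers hidden layers on each
    side of the latent vector; all other hyperparameters stay fixed."""
    model = models.Sequential()
    model.add(layers.Dense(hidden_units, activation=activation,
                           input_shape=(input_dim,)))
    # Remaining encoder hidden layers
    for _ in range(n_hidden_layers - 1):
        model.add(layers.Dense(hidden_units, activation=activation))
    model.add(layers.Dense(latent_dim, activation=activation))
    # Decoder hidden layers mirror the encoder
    for _ in range(n_hidden_layers):
        model.add(layers.Dense(hidden_units, activation=activation))
    model.add(layers.Dense(input_dim, activation="sigmoid"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="mse",
    )
    return model

# Tune only the number of hidden layers and compare reconstruction
# loss (x_train/x_test are assumed to be flattened, scaled images)
# for n in (1, 2, 3):
#     model = build_autoencoder(n_hidden_layers=n)
#     model.fit(x_train, x_train, epochs=20, batch_size=256,
#               validation_data=(x_test, x_test))
```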

Today, in this special episode, I will show you how the number of hidden layers affects the quality of autoencoder latent representation.

The dataset we use

Written by Rukshan Pramoditha