Variational Autoencoder with Pytorch

Eugenia Anello
Published in DataSeries
5 min read · Jul 15, 2021

Illustration by Author

This post is the ninth in a series of guides to building deep learning models with Pytorch. The full series so far:

  1. Pytorch Tutorial for Beginners
  2. Manipulating Pytorch Datasets
  3. Understand Tensor Dimensions in DL models
  4. CNN & Feature visualizations
  5. Hyperparameter tuning with Optuna
  6. K Fold Cross Validation
  7. Convolutional Autoencoder
  8. Denoising Autoencoder
  9. Variational Autoencoder (this post)

The goal of the series is to make Pytorch as intuitive and accessible as possible through worked examples. There are many tutorials on the Internet for building challenging models with Pytorch, but they can be confusing, because there are always slight differences when you move from one tutorial to another. In this series, I start from the simplest topics and work toward more advanced ones.

Variational autoencoder

A standard autoencoder can suffer from an irregular latent space [1]: points that are close in the latent space can decode to very different, meaningless patterns over the visible units. A variational autoencoder addresses this by encoding each input as a distribution over the latent space (a mean and a log-variance) rather than a single point, and by regularizing that distribution toward a standard normal prior with a KL-divergence term in the loss.
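The idea above can be sketched in Pytorch as follows. This is a minimal illustration, not the exact architecture used later in the post; the layer sizes (784 inputs for flattened MNIST-like images, a hidden layer of 256 units, a 2-dimensional latent space) are assumptions chosen for readability:

```python
import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=2):
        super().__init__()
        # encoder maps the input to a hidden representation
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # two heads: mean and log-variance of the latent distribution
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # decoder maps a latent sample back to the input space
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
        # so gradients can flow through the sampling step
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term plus KL divergence to the standard normal prior
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

The KL term is what keeps the latent space regular: it pulls every encoded distribution toward N(0, I), so nearby latent points decode to similar outputs.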
