Autoencoders — what are they good for?

Synaptech
Published in Synaptech · 3 min read · Apr 19, 2017

Hey there, AI enthusiasts! As you probably know already, we take each word from the glossary and define it. The definitions are written so that readers who are not software developers, but are interested in AI and Machine Learning, can understand a bit of what's going on out there. Our previous article, about Artificial Superintelligence, can be found here, and today's word is Autoencoders.

Have you ever noticed that when you begin to learn something new, be it a new language or how to play an instrument, after you have practiced enough it becomes almost like a reflex? It's as if your muscles have memorized every move you want to make! Well, this is not quite what autoencoding is, but it works on similar principles.

What is an Autoencoder?

An autoencoder is a neural network that tries to reconstruct its own input as faithfully as it can. More precisely, if you feed an autoencoder the vector (1, 0, 1, 0), it will try to output (1, 0, 1, 0) again. But how does it work?

The answer is simple: hidden layers.
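To make this concrete, here is a minimal toy sketch in plain NumPy: a network that squeezes a 4-value input through 2 hidden neurons and learns to rebuild the input on the other side. All names and numbers here are illustrative; real autoencoders are built with frameworks like Keras or PyTorch and trained on many examples, not one.

```python
import numpy as np

# Toy autoencoder: 4 inputs -> 2 hidden neurons -> 4 outputs.
rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.5, size=(4, 2))  # encoder weights
W_dec = rng.normal(scale=0.5, size=(2, 4))  # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0, 1.0, 0.0])  # the input we want reconstructed

lr = 1.0
for _ in range(10000):
    h = sigmoid(x @ W_enc)   # encode: compress 4 values into 2
    y = sigmoid(h @ W_dec)   # decode: expand the 2 values back to 4
    err = y - x              # reconstruction error
    # Backpropagate the squared error through both layers
    grad_y = err * y * (1 - y)
    grad_h = (grad_y @ W_dec.T) * h * (1 - h)
    W_dec -= lr * np.outer(h, grad_y)
    W_enc -= lr * np.outer(x, grad_h)

print(np.round(y, 2))  # close to the original [1, 0, 1, 0]
```

The key detail is the bottleneck: the hidden layer has only 2 neurons, so the network cannot simply copy the input through. It has to find a compressed code that still carries enough information to rebuild the original.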

Photo via researchgate.net

In this picture, the information runs from left to right. The black squares form the input layer: they take the data the programmer feeds in, as numbers, and try to encode it in a convenient, compact form. The encoded information then travels to the second layer of neurons, the circles. These in turn send it on to the effector neurons (not shown in the photo), which deliver the information to the output layer, where the network tries to rebuild the original input.

How does it look?

Because the hidden layers do not have contact with the outside world (they only interact with other neurons), the final result is a bit different from what you might expect. For example:

Photo by Terrybroad

Here is an autoencoded image from the famous sci-fi movie Blade Runner. As you can see, the reconstructed frames are blurrier and paler than the originals. You can read more about this project here.

Types of Autoencoders

  1. Denoising autoencoder is an extension of the basic one that adds randomness. Noise is deliberately introduced into the input, and the autoencoder must reconstruct the clean original, or denoise it — hence the name. This keeps the network from simply memorizing the identity function.
  2. Sparse autoencoder is a type of autoencoder in which only a small number of hidden neurons are allowed to activate at once, rather than all of them, which pushes the network to learn more distinct features.
  3. Variational autoencoder (VAE) does its own autoencoding to reduce and simplify data into a probabilistic representation, and it doesn't need to be supervised.
  4. Contractive autoencoder (CAE) adds a penalty that makes its learned representation robust to small changes in the input, so it effectively learns only the kinds of variation present in the training dataset. For instance, if your training input is side views of a house, a CAE will learn to reconstruct those. If, after training, you feed it a frontal picture of a house, its representation will barely react to that unfamiliar change, and it will still try to reconstruct something like the side views it knows.
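The denoising variant from item 1 is easy to picture in code. Below is a minimal sketch of just the data setup, not a full model: the network would be shown the corrupted vector but trained to output the clean one. The masking-noise choice and all variable names here are illustrative.

```python
import numpy as np

# Denoising setup: corrupt the input, but keep the clean version
# as the training target. Masking noise (randomly zeroing values)
# is one common way to corrupt the input.
rng = np.random.default_rng(42)

clean = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 1.0])
mask = rng.random(clean.shape) > 0.3  # randomly keep ~70% of the values
noisy = clean * mask                  # the autoencoder sees this...
target = clean                        # ...but is trained to output this
```

Because the clean target can never be produced by copying the noisy input, the identity function is no longer a valid solution, which is exactly the risk this variant addresses.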

If you have more information you want to share about autoencoders, please don't forget to leave it in the comments. The next article in the series will be about Computer Vision. Meanwhile, Synaptech, the AI event, is growing and presenting new speakers, and if you'd like to find out more about the event — the competition, workshops, conference or our new articles — you can subscribe to our newsletter here.

Synaptech

An Artificial Intelligence event based in Berlin, focused on the practical aspects: Machine Learning workshops, #AI Conference & International Startup Competition.