Deep Learning Autoencoders
Interested in knowing how retailers like Amazon give you recommendations ("customers who bought this item also bought…"), or how Netflix recommends movies? Then read on…
All of this can be achieved using an unsupervised deep learning algorithm called an Autoencoder.
It always helps to relate a complex concept to something familiar. Let's try to relate Autoencoders to something we know.
What is an Autoencoder?
When we buy a service or an item on the internet, we check that the site is secure by verifying it uses the https protocol, and then enter our credit card details for the purchase. Those details are encoded over the network using some encoding algorithm, and the encoded details are later decoded to recover the original credit card number for validation.
In this example, we took the credit card details and encoded them using one function, then decoded them using another function to reproduce output identical to the input. This, in essence, is how autoencoders work.
An autoencoder encodes the input values x using a function f. It then decodes the encoded values f(x) using a function g, to create output values identical to the input values.
An autoencoder's objective is to minimize the reconstruction error between the input and the output. This is what forces it to learn the important features present in the data: when a representation allows a good reconstruction of its input, it has retained much of the information present in that input.
How do Autoencoders work?
We take the input and encode it into a latent feature representation, then decode that representation to recreate the input. We calculate the loss by comparing the input and the output. To reduce the reconstruction error, we back-propagate and update the weights; each weight is updated based on how much it is responsible for the error.
Let’s break it down step by step.
In our example, we use a dataset of products bought by customers.
Step 1: Take the first row of the customer data, the array of all products for that customer, as the input. A 1 means the customer bought the product; a 0 means they did not.
Step 2: Encode the input into another vector h, which has a lower dimension than the input: h = sigmoid(Wx + b). We can use the sigmoid activation function here, since it ranges from 0 to 1. W is the weight matrix applied to the input and b is the bias term.
Step 3: Decode the vector h to recreate the input. The output will have the same dimension as the input.
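The encode and decode steps can be sketched in NumPy. The hidden size of 3, the six-product input vector, and the random weights are illustrative choices, not values from the article:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

n_products, n_hidden = 6, 3                     # illustrative sizes
x = np.array([1, 0, 1, 1, 0, 0], dtype=float)   # 1 = bought, 0 = not bought

# Encoder parameters (W, b) and decoder parameters (W2, b2), randomly initialized
W  = rng.normal(0, 0.1, (n_hidden, n_products)); b  = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_products, n_hidden)); b2 = np.zeros(n_products)

h     = sigmoid(W @ x + b)     # Step 2: encode the input into a lower-dimensional h
x_hat = sigmoid(W2 @ h + b2)   # Step 3: decode h back to the input dimension

print(h.shape, x_hat.shape)    # (3,) (6,)
```

Note that h has only 3 components while the input and output both have 6, which is exactly the compression the encoding step describes.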
Step 4: Calculate the reconstruction error L, which measures the difference between the input vector and the output vector. Our goal is to minimize this error so that the output is as similar to the input as possible.
Reconstruction error = difference between input vector and output vector (in practice, a squared-error or cross-entropy loss over this difference is minimized)
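As a concrete example of computing the reconstruction error, here is the mean squared error between an input and a hypothetical reconstruction (the specific loss and the numbers are assumptions for illustration; the article does not fix a particular loss function):

```python
import numpy as np

x     = np.array([1, 0, 1, 1, 0, 0], dtype=float)   # input vector
x_hat = np.array([0.8, 0.2, 0.6, 0.7, 0.3, 0.1])    # hypothetical reconstruction

loss = np.mean((x - x_hat) ** 2)   # mean squared reconstruction error
print(round(loss, 4))              # → 0.0717
```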
Step 5: Back-propagate the error from the output layer to the input layer to update the weights. Weights are updated based on how much they were responsible for the error.
The learning rate decides by how much we update the weights.
Step 6: Repeat steps 1 through 5 for each observation in the dataset. Weights are updated after each observation (stochastic gradient descent).
Step 7: Repeat for more epochs. An epoch is complete when all the rows in the dataset have passed through the neural network.
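Putting steps 1 through 7 together, here is a minimal NumPy training loop for a one-hidden-layer autoencoder trained with per-observation (stochastic) gradient descent on a toy purchase matrix. The data, layer sizes, learning rate, epoch count, and the choice of a squared-error loss are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy purchase matrix: rows = customers, columns = products (1 = bought)
X = np.array([[1, 0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1]], dtype=float)

n_products, n_hidden, lr = X.shape[1], 3, 0.5   # illustrative sizes / learning rate
W  = rng.normal(0, 0.1, (n_hidden, n_products)); b  = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_products, n_hidden)); b2 = np.zeros(n_products)

for epoch in range(2000):                        # Step 7: repeat for more epochs
    for x in X:                                  # Step 6: one observation at a time (SGD)
        h     = sigmoid(W @ x + b)               # Step 2: encode
        x_hat = sigmoid(W2 @ h + b2)             # Step 3: decode
        # Steps 4-5: squared reconstruction error, back-propagated by hand
        dz2 = 2 * (x_hat - x) * x_hat * (1 - x_hat)   # gradient at decoder output
        dz1 = (W2.T @ dz2) * h * (1 - h)              # gradient at hidden layer
        W2 -= lr * np.outer(dz2, h);  b2 -= lr * dz2  # Step 5: update decoder weights
        W  -= lr * np.outer(dz1, x);  b  -= lr * dz1  # Step 5: update encoder weights

# Mean squared reconstruction error over the whole dataset after training
final_loss = np.mean((X - sigmoid(sigmoid(X @ W.T + b) @ W2.T + b2)) ** 2)
print(final_loss)
```

Before training, every output sits near 0.5, giving a loss of about 0.25; after training, the loss should fall well below that as the network learns to reconstruct each customer's purchase pattern from the 3-dimensional code.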
- An unsupervised deep learning algorithm: autoencoders don't use any labelled data.
- A directed neural network
- Learns a lower-dimensional representation of the input features
Where are Autoencoders used?
- Used for non-linear dimensionality reduction. The hidden layer encodes the input into a smaller dimension than the input dimension, and is then decoded into an output layer with the same dimension as the input. Because autoencoders can reduce the dimensionality of both linear and non-linear data, they are more flexible than PCA, which is a linear method.
- Used in recommendation engines. Deep autoencoders are used to understand user preferences and recommend movies, books or other items.
- Used for feature extraction: autoencoders try to minimize the reconstruction error, and in the process of reducing that error they learn some of the important features present in the input. Reconstructing the input from the encoded state in the hidden layer generates a new set of features, each a combination of the original features, so the encoding helps to identify the latent features present in the input data.
- Image recognition: stacked autoencoders are used for image recognition. Multiple encoders stacked together help to learn different features of an image.
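To illustrate the dimensionality-reduction and feature-extraction uses above: once an autoencoder is trained, only the encoder half is kept, and its hidden-layer activations serve as the compressed features. A minimal sketch (the random weights here are stand-ins for trained ones, and the dataset sizes are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# 100 customers, 20 products, binary bought / not-bought entries
X = rng.integers(0, 2, size=(100, 20)).astype(float)

n_hidden = 5                                  # compress 20 features down to 5
W = rng.normal(0, 0.1, (n_hidden, X.shape[1]))
b = np.zeros(n_hidden)

# Encoder output = lower-dimensional representation of each customer,
# usable as input features for clustering, recommendation, etc.
Z = sigmoid(X @ W.T + b)
print(X.shape, "->", Z.shape)   # (100, 20) -> (100, 5)
```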
Reference: Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Originally published on mc.ai on December 2, 2018.