Diving Into Deep Dream using Tensorflow | Towards AI
Whenever people hear about Deep Learning or Neural Networks, the first things that come to mind are Object Detection, Face Recognition, Natural Language Processing, and Speech Recognition.
But neural networks are also capable of generating images, and one of the state-of-the-art methods for doing so is called Deep Dream.
What is it?
Deep Dream is a computer vision program created by Google engineer Alexander Mordvintsev which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, creating a dream-like, hallucinogenic appearance in deliberately over-processed images.
Some images generated using Deep Dream
How does it work?
In simple terms, the input image is passed through many layers of a neural network, and the activations are computed layer by layer through a roughly three-tiered hierarchy: low, intermediate, and high-level layers. The lower levels respond to basic features such as edges, corners, and textures; maximizing those levels makes the picture end up looking more like a Van Gogh. The higher levels respond to more detailed, hierarchical input such as buildings and other elaborate objects; when the higher levels are maximized, the picture looks more like a jumbled Dalí.
Let us create our first simple Deep Dream.
In this tutorial, we’re going to use TensorFlow 2.0 and run it on Google Colab.
In the following five steps, we’re going to build our first Deep Dream model.
So let’s get started.
1. Importing all dependencies
Here we’re going to use an image of the Indian actress Deepika Padukone and then preprocess it.
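The gist for this step isn’t visible on Medium, so here is a rough sketch of what the preprocessing typically looks like, following the approach of TensorFlow’s DeepDream tutorial (the helper names `preprocess_image` and `deprocess_image` are my own, not from the original post):

```python
import numpy as np
import tensorflow as tf

def preprocess_image(img, max_dim=500):
    """Cast to float, resize so the longest side is max_dim,
    and scale pixels to [-1, 1], the range InceptionV3 expects."""
    img = tf.image.convert_image_dtype(img, tf.float32)  # uint8 -> [0, 1]
    shape = tf.cast(tf.shape(img)[:-1], tf.float32)
    scale = max_dim / tf.reduce_max(shape)
    new_shape = tf.cast(shape * scale, tf.int32)
    img = tf.image.resize(img, new_shape)
    return img * 2.0 - 1.0  # [0, 1] -> [-1, 1]

def deprocess_image(img):
    """Undo the [-1, 1] scaling and convert back to uint8 for display."""
    img = (img + 1.0) / 2.0
    return tf.cast(255 * tf.clip_by_value(img, 0.0, 1.0), tf.uint8)
```

You would load your chosen image (for example with `tf.keras.utils.get_file` or PIL) and pass the resulting array through `preprocess_image` before feeding it to the network.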
2. Prepare the feature extraction model
Download and prepare a pre-trained image classification model. You will use InceptionV3 which is similar to the model originally used in DeepDream.
The InceptionV3 architecture is quite large (for a graph of the model architecture see TensorFlow’s research repo). For DeepDream, the layers of interest are those where the convolutions are concatenated. There are 11 of these layers in InceptionV3, named ‘mixed0’ through ‘mixed10’. Using different layers will result in different dream-like images. Deeper layers respond to higher-level features (such as eyes and faces), while earlier layers respond to simpler features (such as edges, shapes, and textures). Feel free to experiment with the layers selected below, but keep in mind that deeper layers (those with a higher index) will take longer to compute gradients for, since the gradient calculation reaches deeper into the network.
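Since the embedded gist is cut off, this step can be sketched as follows (the choice of ‘mixed3’ and ‘mixed5’ is one common pick from the TensorFlow DeepDream tutorial; the original post may use different layers):

```python
import tensorflow as tf

# Load InceptionV3 pretrained on ImageNet, without the classification head.
base_model = tf.keras.applications.InceptionV3(include_top=False,
                                               weights='imagenet')

# Pick some of the 11 concatenation layers ('mixed0' ... 'mixed10') to maximize.
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]

# Build a model that maps an input image to the activations of the chosen layers.
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
```

Swapping in earlier layers (e.g. ‘mixed0’, ‘mixed1’) emphasizes textures and edges; later layers (e.g. ‘mixed8’ and up) emphasize object-like structures.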
3. Calculate loss
The loss is the sum of the activations in the chosen layers, normalized at each layer so that the contribution from larger layers does not outweigh that of smaller layers.
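As the gist isn’t fully shown, here is a minimal sketch of this loss (per-layer normalization done with a mean, as in the TensorFlow DeepDream tutorial; the function name is my own):

```python
import tensorflow as tf

def calc_loss(img, model):
    """Forward-pass the image and sum the mean activation of each chosen layer."""
    img_batch = tf.expand_dims(img, axis=0)  # the model expects a batch dimension
    layer_activations = model(img_batch)
    if len(model.outputs) == 1:
        layer_activations = [layer_activations]
    # Taking the mean within each layer keeps large layers from dominating.
    losses = [tf.reduce_mean(act) for act in layer_activations]
    return tf.reduce_sum(losses)
```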
4. Gradient ascent
Once you have calculated the loss for the chosen layers, all that is left is to calculate the gradients with respect to the image and add them to the original image.
Adding the gradients to the image enhances the patterns seen by the network. At each step, you will have created an image that increasingly excites the activations of certain layers in the network.
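A single step of this update can be sketched like so (the gradient normalization by its standard deviation follows the TensorFlow DeepDream tutorial; the function name and the inline loss are my own sketch, not the post’s exact code):

```python
import tensorflow as tf

def gradient_ascent_step(img, model, step_size=0.01):
    """One DeepDream step: move the image in the direction that increases the loss."""
    with tf.GradientTape() as tape:
        tape.watch(img)  # img is a plain tensor, so ask the tape to track it
        activations = model(tf.expand_dims(img, axis=0))
        if not isinstance(activations, (list, tuple)):
            activations = [activations]
        # Same loss as before: the sum of per-layer mean activations.
        loss = tf.reduce_sum([tf.reduce_mean(a) for a in activations])
    gradients = tape.gradient(loss, img)
    # Normalize so the chosen step size behaves consistently across images.
    gradients /= tf.math.reduce_std(gradients) + 1e-8
    img = img + gradients * step_size       # ascent: ADD the gradients
    img = tf.clip_by_value(img, -1.0, 1.0)  # stay in InceptionV3's input range
    return loss, img
```

Running this step in a loop for a few dozen iterations produces the first dream image.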
5. Taking it up an octave
Pretty good, but there are a few issues with this first attempt:
(a) The output is noisy (this could be addressed with a tf.image.total_variation loss).
(b) The image is low resolution.
(c) The patterns appear as if they’re all happening at the same granularity.
To overcome these issues, we can apply the previous gradient ascent approach, then increase the size of the image (each scale is referred to as an octave), and repeat this process for multiple octaves.
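The octave loop can be sketched as below. The function name, the `dream_step` callable (any single-step gradient ascent function taking an image and a step size), and the specific octave range and scale are my own illustrative choices, modeled on the TensorFlow DeepDream tutorial:

```python
import tensorflow as tf

def run_deep_dream_with_octaves(img, dream_step, steps_per_octave=50,
                                step_size=0.01, octaves=range(-2, 3),
                                octave_scale=1.3):
    """Run gradient ascent at several image scales ('octaves'), smallest first."""
    base_shape = tf.cast(tf.shape(img)[:-1], tf.float32)
    for n in octaves:
        # Resize the image for this octave: patterns found at small scales
        # get refined at larger ones, which reduces noise and the
        # same-granularity look of the single-scale result.
        new_shape = tf.cast(base_shape * (octave_scale ** n), tf.int32)
        img = tf.image.resize(img, new_shape)
        for _ in range(steps_per_octave):
            _, img = dream_step(img, step_size)
    # Return the dream at the original resolution.
    return tf.image.resize(img, tf.cast(base_shape, tf.int32))
```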
Hurray, we’ve just generated an image using Deep Dream.
In case you don’t want to code the Deep Dream algorithm manually but still want to create images with Deep Dream, here is the solution.
You can use DeepDreamGenerator.
Deep Dream Generator
Note:- In the above example, some lines of code are not showing because Medium displays only the first 11 lines of a GitHub gist. So I strongly suggest you download the Colab notebook (.ipynb file) from my GitHub repo.
Link to my Deep_Dream repo:-
Did you know that you can hit the clap button up to 50 times on Medium?
If you like this blog, show some love with some claps.