Deep Dream
For a more technical read on the implementation of Deep Dream, please see the GitHub page.
As part of a third mini project, I explored convnets through Deep Dream, which distorts a picture based on the patterns the network has learned. The convnet I used is pre-trained on ImageNet. Because Inception tends to produce beautiful deep dreams, I used an Inception model.
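Below is a minimal sketch of this setup, assuming TensorFlow 2.x with tf.keras. The chosen layer names and their weights are illustrative assumptions, not the exact values from my notebook; the idea is simply to load InceptionV3 with ImageNet weights and define a loss that rewards strong activations in a few intermediate layers.

```python
# Sketch only: load a pre-trained Inception model and define the Deep Dream loss.
import tensorflow as tf
from tensorflow import keras

# InceptionV3 pre-trained on ImageNet, without the classification head,
# since Deep Dream only needs the convolutional feature maps (no training).
base_model = keras.applications.InceptionV3(weights="imagenet", include_top=False)

# A few "mixed" layers whose activations we amplify; the weights control how
# strongly each layer's patterns show up in the dream (illustrative values).
layer_settings = {"mixed4": 1.0, "mixed5": 1.5, "mixed6": 2.0, "mixed7": 2.5}
outputs = [base_model.get_layer(name).output for name in layer_settings]
feature_extractor = keras.Model(inputs=base_model.input, outputs=outputs)

def compute_loss(image):
    """Loss = weighted sum of the mean squared activations of the chosen layers."""
    activations = feature_extractor(image)
    loss = tf.zeros(shape=())
    for weight, activation in zip(layer_settings.values(), activations):
        loss += weight * tf.reduce_mean(tf.square(activation))
    return loss
```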
The Deep Dream algorithm is illustrated by the figure below, which is from Chapter 8, Section 2 of Deep Learning with Python. The method is to process the image at a list of successive "scales" (octaves) and run gradient ascent to maximize the loss at each scale. With each successive scale, I upscale the image by 40%. To avoid losing image detail when upscaling from small to large, I also re-inject the lost detail, computed from the larger original image, after processing each scale.
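The sketch below shows one way to write that multi-scale loop, assuming the compute_loss() defined above and an input image already preprocessed into a 4-D float tensor. The step size, number of octaves, and iteration count are illustrative; the 1.4 octave scale corresponds to the 40% upscaling mentioned above.

```python
# Sketch only: gradient ascent over successive scales with detail re-injection.
@tf.function
def gradient_ascent_step(image, learning_rate):
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = compute_loss(image)
    grads = tape.gradient(loss, image)
    grads = tf.math.l2_normalize(grads)  # keep step sizes stable across scales
    return image + learning_rate * grads

def deep_dream(original_img, num_octaves=3, octave_scale=1.4,
               iterations=20, learning_rate=0.01):
    original_shape = original_img.shape[1:3]
    # Successive shapes, each 40% larger than the previous one (smallest first).
    shapes = [tuple(int(dim / (octave_scale ** i)) for dim in original_shape)
              for i in reversed(range(num_octaves))]

    shrunk_original = tf.image.resize(original_img, shapes[0])
    img = tf.image.resize(original_img, shapes[0])
    for shape in shapes:
        img = tf.image.resize(img, shape)
        for _ in range(iterations):
            img = gradient_ascent_step(img, learning_rate)
        # Re-inject the detail lost at the smaller scale: the difference between
        # the original resized directly and the original downscaled-then-upscaled.
        upscaled_shrunk = tf.image.resize(shrunk_original, shape)
        same_size_original = tf.image.resize(original_img, shape)
        img += same_size_original - upscaled_shrunk
        shrunk_original = tf.image.resize(original_img, shape)
    return tf.image.resize(img, original_shape)
```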
The image I used is a beach image displayed below:
The image after applying Deep Dream with the convnet looks like this:
As readers can see, the original beach image is covered with patterns after applying the Inception model. Since this project involves no training or prediction, all training-specific operations are excluded and the pre-trained ImageNet weights are used as-is. The resulting image also shows patterns at various scales; the smaller the scale, the fuzzier the corresponding detail.