The Dream of Neural Networks

Balabit Unsupervised · 5 min read · Sep 1, 2015

Our mind invents crazy stories while we sleep. Virtually anything can happen in dreams, from the usual and boring to the improbable and ridiculous. (Once in a dream I tore the fabric of reality and saw the program code controlling the world.) Sometimes we can figure out why a particular thing appeared in a dream, because it connects to something we remember seeing earlier. At other times, a part of the dream simply appears without any reason we can understand. Dreams rely on these associations, whether obvious or unthinkable.

This might be the reason behind the name Deep Dream, the topic of today’s post. Deep Dream, from Google, is a fascinating tool that turns images into weird, dream-like versions of themselves. The results can be hilarious or disturbing, psychedelic or bizarre; the number of eyes and dogs that appear is usually frightening. Plenty of collections can be found online, or one can simply search for #deepdream on Twitter.

Past and future meet in this photo of our beloved Budapest, recalling the history of its cafés and envisioning a green city powered by wind turbines

Let us briefly cover how Deep Dream works. It belongs to a rising field of machine learning called deep learning, which is gaining popularity because of its astonishing ability to learn, especially in image, video, audio and natural language processing. With this ability, machines can recognize objects and people in photos: upload any photo to Clarifai’s smart object recognizer, for example, and it will describe in a few words what the photo is about. The recent impressive demo of Hound’s speech recognition capabilities gives another feel for what deep learning can achieve.

At the core of Deep Dream there is a deep neural network (DNN), a machine learning tool inspired by the biological brain. A human brain consists of billions of neurons organized in a special structure, and they are responsible for our thinking and learning. The key to learning lies in the complex connections of that structure, which define how the activation of a neuron spreads to the neurons it is connected with.

In an artificial neural network, such as a DNN, the neurons are likewise organized into layers, and the layers are interconnected as we see in the picture above. The neurons in the input layer start processing the input, such as the image you feed into Deep Dream. Each of them either activates on the picture or not, and this information spreads to the next layer. Eventually, the neurons of the output layer can make a decision about the input; in an object recognition task, for example, whether the input image contains a car. The great advantage of DNNs over standard image classification methods is that a DNN can learn what to look for when deciding whether an image shows a car. (With a standard method, we would have to tell it explicitly: “Pay attention to black and round objects!”)
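To make the layer-by-layer picture concrete, here is a toy forward pass in Python with NumPy. Everything here is made up for illustration: the network size, the input features, and especially the weights, which in a real DNN are learned from millions of training images rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes a neuron's total input into an activation between 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

# A toy network: 4 input neurons, 5 hidden neurons, 1 output neuron.
# Real DNNs learn their weights from data; these are random placeholders.
W1 = rng.normal(size=(5, 4))   # connections: input layer -> hidden layer
W2 = rng.normal(size=(1, 5))   # connections: hidden layer -> output layer

def forward(image_features):
    hidden = sigmoid(W1 @ image_features)  # activations spread layer by layer
    return sigmoid(W2 @ hidden)            # the output neuron's activation

x = np.array([0.2, 0.8, 0.1, 0.5])         # pretend pixel features
score = forward(x)[0]
is_car = score > 0.5                       # the output layer's "decision"
```

The only thing each layer does is combine the previous layer’s activations through its connection weights; all the interesting behavior comes from what those weights encode after training.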

Deep Dream utilizes a deep neural network that has been trained beforehand on a large set of images. Having seen many pairs of images and their descriptions, it can now tell what appears in a picture. The engineers at Google, however, added a twist. After recognizing some concepts and objects in the image, Deep Dream also transforms the image to make it look more like the objects it identified. It then feeds the altered image back into the network as input, starting the recognition-transformation process again.

So, if something in a photo looked like a bird to Deep Dream, it changes small details of that thing to make it more bird-like. Deep Dream, of course, uses a loose definition of “looking alike.” Even if your nose is tiny and pretty, Deep Dream can make an unexpected association, just like your mind does in a dream, declaring your face bird-like and reshaping your nose to resemble a beak. When the process starts again, your similarity to a bird is higher, so you are more likely to be recognized as a bird and modified again. In a couple of iterations, you will probably be doomed to gradually turn into a bird!
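A minimal sketch of that feedback loop, assuming a made-up linear “bird detector” standing in for a real network layer (the actual Deep Dream code instead does gradient ascent on a Caffe network’s layer activations):

```python
import numpy as np

# Hypothetical "bird detector": the more the image vector aligns with
# this pattern, the higher the bird score. In Deep Dream, a trained
# DNN layer plays this role.
bird_pattern = np.array([0.6, -0.2, 0.9, 0.3])

def bird_score(image):
    return float(image @ bird_pattern)

image = np.array([0.1, 0.4, 0.2, 0.0])  # pretend image, faintly bird-like
scores = []
for _ in range(5):
    scores.append(bird_score(image))
    # The gradient of the score with respect to the image is exactly
    # bird_pattern, so a small step along it makes the image more
    # "bird-like" in the detector's eyes.
    image = image + 0.1 * bird_pattern
```

Each pass raises the score by the same small amount (0.1 times the squared norm of the pattern, here 0.13), so whatever the detector faintly saw in the original image gets amplified on every iteration.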

Now we can understand why the typical Deep Dream image contains eyes, dogs and weird animals. Remember that the DNN had to be trained on a large database of images? It turns out that the DNN most people use was trained on a specific data set, ImageNet, which contains tons of photos of animals, especially dogs. The DNN has therefore become very good at recognizing dogs, and it now sees dogs everywhere. If all you have is a hammer, everything looks like a nail.

You might want to produce Deep Dream images for yourself. Luckily, Google has open-sourced the code. Many sites offering DaaS (Dreaming as a Service) have appeared, where anyone can upload a photo and collect the result of the dreaming process, such as dreamscopeapp.com, deepdream.in, or deepdreamit.com.

If you happen to know a bit of Python, you can try running Deep Dream on your own computer with the IPython notebook released by Google. Running the code requires a few Python packages and the Caffe deep learning framework. If you do not want to bother setting up the environment yourself, you can use an already configured virtual environment shared by a member of the community.

If you play with the notebook yourself, you will have the opportunity to try other DNNs trained on other data sets; such models can be found, e.g., in the Model Zoo. This is particularly valuable if you are already sick of dogs. For example, the following image, like the one of Budapest above, was produced by a model trained on the MIT Places image set, which is designed for training models to recognize scenes and buildings.

Yes, we all do have some organs developed for sucking blood

We also had shocking revelations while browsing our photos manipulated by Deep Dream, such as the following one.

It is now clear to us that, unfortunately, our company has been corrupted by our enemies. The two androids from the future, standing on the left on their apparently robotic legs, have infiltrated the team, trying to sabotage our research in robot detection from system log data. They have since been disposed of, and the human/dog hybrid employees of Balabit can now happily continue their work. This small use case has made it clear that we can easily apply Deep Dream to anomaly detection!

Originally published at www.balabit.com on September 1, 2015 by Árpád Fülöp.
