#deepdream is visual autotune
Google wants to build an internal product that can classify an image you upload. They want to know whether that photo contains a new baby or a flat tire. That information is valuable to advertisers.
A side effect of their method is that you can run their classifier in reverse… kinda.
You can feed in random pixel data as the source image, but instead of asking the network what the image contains, you let it progressively refine the image to satisfy a constraint you impose.
The above image was what the network created when the constraint was ‘banana’. Very cool!
Surprising things happen sometimes, like in the above photo where they searched for dumbbells (a type of exercise weight) and got the arm that is usually attached to such equipment.
Fun right? Let’s crank it up to 11!
You don’t have to feed in random pixels that look like static. You also don’t have to impose a constraint on the refinement. You can let the algorithm decide what it thinks it sees.
As the process repeats, whatever features the network thinks it has found get exaggerated in each new generation of the image.
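The loop above can be sketched in a few lines. This is not Google's actual code; it's a toy illustration where the "learned feature" is a single random filter with an analytic gradient, standing in for backpropagation through a trained convnet. The names (`activation`, `gradient`, the step size) are all my own assumptions.

```python
import numpy as np

# Toy sketch of the DeepDream loop: repeatedly nudge the image so a
# chosen "feature activation" gets stronger. In the real system the
# gradient comes from backprop through a trained network; here the
# feature is a fixed random filter so the gradient is trivial.

rng = np.random.default_rng(0)
filt = rng.standard_normal((8, 8))        # stand-in for a learned feature

def activation(img):
    # how strongly does the image excite this feature?
    return float(np.sum(img * filt))

def gradient(img):
    # d(activation)/d(img) for our toy linear feature
    return filt

img = rng.standard_normal((8, 8)) * 0.01  # start from near-static noise
start_act = activation(img)

for step in range(100):                   # each pass exaggerates the feature
    img += 0.1 * gradient(img)            # gradient ascent on the activation

final_act = activation(img)               # far stronger than where we began
```

Run long enough, every generation of the image excites the feature more than the last, which is exactly why the dog faces keep getting doggier.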
Why am I complaining about this?
Because my Twitter feed is an endless stream of slugs with dog faces.
People are treating the software like an intelligent Photoshop filter, then posting the results to the internet because they look trippy. This has to stop.
It reminds me of a similar technique for producing trippy visuals.
Tweak some initial parameters, and go on a fractal adventure.
I’m annoyed that I have been driven to unfollow artists I like because they cannot stop tweeting images made with #deepdream.
What Google Research built is amazing. It gives us real insight into what it means for a computer to perceive something visually. Treat it like a Photoshop filter and you get a pile of annoying crap, and the entire point is missed.