The recent release of deepdream has caused great excitement among machine learning researchers and artists alike. Deepdream is an image-recognition neural network turned upside down: instead of labelling images, it hallucinates them. I decided to take the system for a test run and explore. The following video is generated entirely with deepdream.
Calista & The Crashroots: Deepdream
Keep in mind, this video was released just 10 days after the deepdream code was published online. It is an exploration of generative systems, not “Art” ;-)
Buy song here: https://calistacrashroots.bandcamp.com/releases
Credits
- Vocals: Calista Kazuko
- Lyrics: Jack Brown & Calista Kazuko
- Music: Samim Winiger & Miguel Toro
- Horns: Ben Abarbanel Wolff
- Video: Samim.io
Process
Working with generative systems is fascinating. Inputs, parameters, outputs and outcomes all have to be selected carefully. The artistic process is transformed into that of an information curator, maximising for serendipitous, emergent behaviour. Once the desired outputs are achieved, an endless number of new, similar images can be generated.
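The curation loop described above can be sketched as a small parameter sweep. Everything here is illustrative, not the actual pipeline: `dream()` is a stand-in for a real deepdream renderer, and the layer names are typical GoogLeNet layers, not necessarily the ones used for the video.

```python
from itertools import product

def dream(image, layer, octaves, iterations):
    # Placeholder for a real deepdream render pass; here it just
    # records which settings produced the candidate frame.
    return {"layer": layer, "octaves": octaves, "iterations": iterations}

# Curate: sweep a small grid of settings, inspect the outputs,
# and keep the configurations with interesting emergent behaviour.
layers = ["inception_4c/output", "inception_3b/output"]  # illustrative
candidates = [
    dream("input.jpg", layer, octaves, iterations)
    for layer, octaves, iterations in product(layers, [4, 6], [10, 20])
]

# Once a promising configuration is picked, it can be replayed on
# any number of new inputs to generate endless similar images.
chosen = candidates[0]
```

The point is that the creative decisions live in which grid points survive curation, not in any single rendered frame.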
Technology
- All content was rendered at 720p on two Amazon EC2 g2.8xlarge instances.
- The DeepDreamAnimator was used for animation & tuning.
- A selection of the parameters used is published on GitHub, which makes the video easily reproducible. A novel thing for music videos.
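The frame-to-frame feedback that animation tools like DeepDreamAnimator rely on can be sketched as follows: each dreamed frame is slightly zoomed and fed back as the next input, producing the continuous inward-drift look. This is a minimal numpy sketch under assumptions; `dream_step` stands in for the real network pass, and the nearest-neighbour zoom is a simplification.

```python
import numpy as np

def dream_step(frame):
    # Stand-in for a real deepdream gradient-ascent pass over the
    # network; here it just brightens the frame deterministically.
    return np.clip(frame + 0.01, 0.0, 1.0)

def zoom(frame, crop=2):
    # Crop a small border, then scale back to the original size with
    # nearest-neighbour indexing, giving the classic inward zoom.
    h, w, _ = frame.shape
    inner = frame[crop:h - crop, crop:w - crop]
    ys = np.arange(h) * inner.shape[0] // h
    xs = np.arange(w) * inner.shape[1] // w
    return inner[ys][:, xs]

# Feedback loop: each output becomes the next input.
frame = np.zeros((64, 64, 3), dtype=np.float32)
frames = []
for _ in range(5):
    frame = zoom(dream_step(frame))
    frames.append(frame)
```

In a real pipeline the dream pass dominates the cost, which is why rendering a full video called for multiple GPU instances.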
Final thoughts
We have barely scratched the surface of deepdream and similar technologies. Exciting new approaches like optical flow, guided dreaming and custom models are emerging quickly. The future of generative content creation is looking very interesting. When AI researchers and artists collaborate, magic happens. With that, I wish you sweet dreams.
Get in touch here: https://twitter.com/samim | http://samim.io