Dreaming Differently

How a modified version of Google’s DeepDream software elevates immersive panoramas into multilevel mind-body experiences

ABSTRACT:

“Dreamscapes” combine computational photography and artificial intelligence in a way that is both unique and original. Each Dreamscape begins with a self-developed methodology called “XYZ photography,” which blends and stitches together a cubic array of high-resolution photographs. The resulting immersive and vibrant panoramas are then transformed by applying a version of Google’s DeepDream software modified to operate successfully on giant images. The final artworks are executed as backlit displays that deliver multilevel experiences, contrasting photographic reality with digital fantasy.

CONCEPTION:

In July 2015, Google released an open-source software package called “DeepDream,” which quickly became a viral sensation. When applied to images, photographic or otherwise, this artificial intelligence (AI) program imbued the originals with complex, even hallucinatory, patterns and textures. While a few people were able to generate intriguing results, most used the software as a novelty, turning their snapshots into psychedelic nightmares. Nonetheless, I sensed an opportunity to use the software with more subtlety, in hopes of bringing a sophisticated expressiveness to the giant landscape images I had been creating over the previous four years. I produced a series of low-resolution tests and achieved promising results that were well received (see album). These were shared internally at Google to much acclaim; one individual on the DeepDream team remarked, “I love how this is being used as an artistic tool beyond just a weird curiosity.” However, DeepDream as released was simply not designed to operate on multi-hundred-megapixel images such as mine; it would crash outright. For the time being, I had reached an impasse.
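
For readers curious what the software is actually doing under the hood, the essence of DeepDream is gradient ascent on an image: pick a layer of a trained convolutional network and repeatedly nudge the pixels so that layer’s activations grow stronger, amplifying whatever patterns the layer has learned to detect. The sketch below is a minimal illustration of that loop in modern TensorFlow, using an off-the-shelf InceptionV3 model and a layer name chosen purely for demonstration; it is neither the Caffe-based code Google released nor my modified version.

```python
import numpy as np
import tensorflow as tf

# Minimal illustration of the DeepDream idea: gradient ascent that nudges the
# pixels so a chosen layer's activations grow, amplifying whatever patterns
# that layer has learned to detect. Model and layer choice are assumptions
# made for demonstration only.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
dream_model = tf.keras.Model(inputs=base.input,
                             outputs=base.get_layer("mixed3").output)

def dream_step(img, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = dream_model(tf.expand_dims(img, 0))
        loss = tf.reduce_mean(activations)      # "how strongly does this layer fire?"
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8     # normalize the gradient magnitude
    return img + step_size * grad               # ascend: amplify what the layer sees

def dream(image_array, steps=100):
    img = tf.convert_to_tensor(image_array, dtype=tf.float32)
    img = tf.keras.applications.inception_v3.preprocess_input(img)  # scale to [-1, 1]
    for _ in range(steps):
        img = dream_step(img)
    return ((img + 1.0) * 127.5).numpy().clip(0, 255).astype(np.uint8)
```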

ENGINEERING:

Fortunately, a few weeks later I was able to enlist two brilliant engineers, Joseph Smarr (Google) and Chris Lamb (NVIDIA), to try modifying the DeepDream source code for my purposes. It took them over four months of sporadic effort on nights and weekends to achieve “liftoff” with one of my full-resolution scenes (see video of Joseph and Chris explaining the tech behind the art). In late January 2016, they handed off their modified DeepDream code to me, and my real work began. I am the only person with access to this code and, as far as I know, no one else has undertaken a similar engineering effort or is “dreaming” on images at this scale.
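
I can’t publish Joseph and Chris’s modifications, but the general technique for dreaming on images far too large for GPU memory is well established: accumulate the gradient one tile at a time, rolling the image by a random offset before each pass so tile boundaries never line up and leave visible seams. The sketch below is a generic illustration of that tiled-gradient idea, building on the dream_model from the previous sketch; it is not their actual engineering.

```python
import tensorflow as tf

# Generic illustration of tiled gradient ascent, the standard way to dream on
# images larger than GPU memory allows: the gradient is accumulated one tile
# at a time, and the image is rolled by a random offset before each pass so
# tile boundaries never line up and leave visible seams. `dream_model` is the
# feature-extraction model from the previous sketch; this is not the actual
# private modification.

def tiled_gradients(img, dream_model, tile_size=512):
    shift = tf.random.uniform([2], -tile_size, tile_size, dtype=tf.int32)
    rolled = tf.roll(img, shift=shift, axis=[0, 1])   # random roll to hide seams
    gradients = tf.zeros_like(rolled)

    h, w = rolled.shape[0], rolled.shape[1]
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            with tf.GradientTape() as tape:
                tape.watch(rolled)
                tile = rolled[y:y + tile_size, x:x + tile_size]
                loss = tf.reduce_mean(dream_model(tf.expand_dims(tile, 0)))
            gradients = gradients + tape.gradient(loss, rolled)

    gradients = tf.roll(gradients, shift=-shift, axis=[0, 1])   # undo the roll
    return gradients / (tf.math.reduce_std(gradients) + 1e-8)
```

Even with tiling, images at this resolution strain memory and runtime, which is presumably part of what made the full engineering effort take months; the sketch conveys only the core idea.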

INTENT:

As an avid hiker, I am drawn to “special places”: unique locations in our world of such breathtaking beauty and grandeur that the scene goes beyond mere visual perception and becomes a visceral experience. When that happens to me, I invariably find myself waxing philosophical, pondering the truth behind what I’m seeing and questioning reality itself. For me at least, traditional photography has failed to capture my experience in a way that lets me share with others this powerful connection, quickly forged between eyes, body, and mind. Being an analytical and tenacious person with a strong background in design, art history, and computer graphics, I refused to give up trying to “deliver” this personal experience. A little over five years ago I had the first of two key breakthroughs in this quest: the development of my “XYZ photography” method, a blending of panoramic and high-dynamic-range techniques that results in unusually high-resolution, immersive, and vibrant images (see Grand Format Collection). This first breakthrough, I believe, took me two-thirds of the way along my quest. Judging from the most common response, “It feels like I can step right into this scene,” these images appear to reach people both visually and viscerally. This year, the DeepDream code modified by Joseph and Chris provided my second breakthrough and completed my quest; by transforming every square inch of my giant landscape images with wholly unexpected form and content that is revealed only upon close-up viewing, I’ve found a powerful way to reach people cognitively as well. This combination of computational photography (my XYZ method) and artificial intelligence (my modified DeepDream code), placed in service of this specific artistic intent, is both unique and original (see Dreamscapes Gallery).
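
I won’t detail the XYZ method itself here, but its two basic ingredients, bracketed exposures merged for dynamic range and overlapping frames stitched into a seamless panorama, can be illustrated with off-the-shelf OpenCV calls. The sketch below is a generic demonstration of those ingredients with placeholder file names; it is not my actual pipeline.

```python
import cv2

# Generic illustration of the two ingredients the XYZ method blends:
# (1) merging bracketed exposures of each view to extend dynamic range,
# (2) stitching the merged views into one seamless panorama.
# File names and counts are placeholders; this is not the actual pipeline.

def merge_exposures(paths):
    """Fuse a bracketed exposure series into a single well-exposed frame."""
    frames = [cv2.imread(p) for p in paths]
    fusion = cv2.createMergeMertens().process(frames)   # Mertens exposure fusion
    return (fusion * 255).clip(0, 255).astype("uint8")

def stitch_panorama(views):
    """Stitch a list of overlapping frames into one panorama."""
    stitcher = cv2.Stitcher_create()
    status, pano = stitcher.stitch(views)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return pano

# Example: three views, each shot as a three-exposure bracket.
views = [merge_exposures([f"view{i}_ev{ev}.jpg" for ev in (-2, 0, 2)])
         for i in range(3)]
panorama = stitch_panorama(views)
cv2.imwrite("panorama.jpg", panorama)
```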

PROCESS:

DeepDream, especially as modified by my engineering team, is incredibly powerful software with an enormous range of options from which to choose the desired dreaming style and characteristics. Climbing the learning curve and generating quality results can be an intimidating prospect. Happily, the ingenious computing configuration set up by Joseph and Chris enabled me to climb that learning curve faster and further than probably anyone else. Key to this was their decision to host the software on a monster cloud-based compute server utilizing four separate graphics processing units (GPUs): a supercomputer in the sky, if you will. One benefit of this approach is that it lets me run four different experiments at a time, one on each GPU. This made it fairly quick and straightforward for me to exhaustively catalog the “macro” style of all 84 layers of DeepDream’s neural network, and to fully understand the effects of tweaking the four parameter settings that can be applied to each of these styles (see DeepDream Parameter Studies). Another benefit is that I can (and indeed have) run my experiments whenever and wherever inspiration strikes, provided I can find an Internet connection. This is precisely what enabled me to create my first indoor Dreamscape, the Guadalajara Cathedral, while I was exhibiting my work in Mexico last October.
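
To give a concrete sense of how a four-GPU server supports this kind of cataloging, the sketch below shows one generic way to fan a parameter sweep out across four devices, pinning each worker to its own GPU via CUDA_VISIBLE_DEVICES. The layer names, parameter values, and the run_dream.py entry point are placeholders invented for this illustration; the actual interface to the modified code is different and private.

```python
import itertools
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of fanning a parameter sweep out across four GPUs, one
# worker per device. Layer names, parameter values, and the "run_dream.py"
# entry point are placeholders for illustration only.

LAYERS = ["mixed3a", "mixed4c", "mixed5b"]   # placeholder layer names
ITERATIONS = [10, 20]
OCTAVES = [3, 4]

def worker(gpu, combos):
    # Pin every job launched by this worker to a single GPU.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    for layer, iters, octaves in combos:
        cmd = ["python", "run_dream.py",
               "--layer", layer,
               "--iterations", str(iters),
               "--octaves", str(octaves),
               "--output", f"study_{layer}_{iters}_oct{octaves}.png"]
        subprocess.run(cmd, env=env, check=True)

if __name__ == "__main__":
    combos = list(itertools.product(LAYERS, ITERATIONS, OCTAVES))
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Round-robin the experiments onto the four GPUs; each worker runs
        # its share sequentially so no GPU ever hosts two jobs at once.
        futures = [pool.submit(worker, gpu, combos[gpu::4]) for gpu in range(4)]
        for f in futures:
            f.result()   # propagate any failures
```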

EXECUTION:

The unique setup and approach I’ve taken to using DeepDream have resulted in compelling, large-format illuminated works of great impact (see Printed and Published Works) and stunning detail (see this video designed to illustrate the extreme resolution and zooming power inherent in these images). I’ve witnessed an amazing degree of crossover appeal, from adults to children, from security guards to CEOs, and from laypeople to sophisticated art curators (read curatorial assessment by Milagros Bello, Ph.D.); everyone seems to be captivated and fascinated by my Dreamscapes. I believe part of this appeal is due to my decision to rely on LED-backlit tension-fabric structures to project my artistic intent, and to the research I’ve undertaken to find the highest-quality supplier. I haven’t seen any other DeepDream artists utilizing this medium, and if they choose to, they will learn that the providers of these systems are not all equal.

EVOLUTION:

Always open to learning from others, I’ve benefited greatly from the advice, wisdom, and encouragement of Dr. Milagros Bello, the chief curator and director of Curator’s Voice Art Projects in the Wynwood Arts District of Miami, the first major gallery to represent me (see video conversation between artist and curator). It was Dr. Bello who suggested developing a mid-format collection based on carefully curated 12” x 12” details of my large-format, 8’-high scenes, upscaled to 40” x 40” and then re-dreamed at that size, with the dreaming features scaled to remain in proportion to those in the original 12” square. At first I objected for two reasons: 1) I felt anyone could dream at that size, to which she responded, “Anyone can paint with oils, too!” and 2) I wasn’t sure I could pull it off technically, to which she responded, “Figure it out!” I’m glad she was adamant, because I eventually did figure it out, and the response when these pieces debuted during Art Basel Week in Miami was glowing across the board. Yes, anyone can dream at 40” x 40”, but my mid-format pieces are unique to my overall vision and hang together as part of a cohesive body of work. Also, the “multi-scale” modifications to DeepDream made by Joseph and Chris to enable more harmonious larger pieces had the unanticipated benefit of enabling a “dream within a dream” experience on the upscaled details. As a result, both my large-format and mid-format works create multilevel experiences, but in different ways: whereas the large-format works deliver a far-versus-near experience that contrasts a photographic reality at a distance with a digital fantasy up close, the mid-format works create a singular but recursive experience, one that is clearly a dream even from a distance but which reduces in scale in a fractal-like manner as one approaches.
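
Readers familiar with DeepDream will recognize the multi-scale principle at work here: dreaming an image at a series of progressively finer scales, so that patterns laid down at coarse resolution are refined, and sprout smaller patterns inside themselves, as the resolution grows. The sketch below shows a generic octave loop of that kind, reusing the tiled_gradients function from the earlier sketch; it illustrates the principle only and is not the multi-scale code Joseph and Chris actually wrote.

```python
import tensorflow as tf

# Generic octave loop: dream the image at a series of progressively finer
# scales, so patterns laid down at coarse resolution are refined, and sprout
# smaller patterns inside themselves, as resolution grows. Reuses the
# tiled_gradients sketch above; this illustrates the principle only and is
# not the private multi-scale code.

def multiscale_dream(img, dream_model, octaves=4, octave_scale=1.4,
                     steps_per_octave=50, step_size=0.01):
    img = tf.convert_to_tensor(img, dtype=tf.float32)
    base_shape = tf.cast(tf.shape(img)[:2], tf.float32)
    for octave in range(-octaves + 1, 1):              # coarse -> fine, ending at full size
        new_shape = tf.cast(base_shape * (octave_scale ** octave), tf.int32)
        img = tf.image.resize(img, new_shape)
        for _ in range(steps_per_octave):
            grads = tiled_gradients(img, dream_model)  # from the earlier sketch
            img = img + step_size * grads
    return img
```

In a setup like this, keeping dream features proportional between a small detail and its large enlargement comes down to scaling settings such as the octave count with the output size.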

SYNERGY:

As I reflect on this project and attempt to elucidate the ways in which my Dreamscapes differ from other applications of DeepDream, a guiding principle comes to mind: my intention to actively and deeply collaborate with an artificial intelligence. I hold no illusions that this intelligence is sentient, but unlike others who may have only a passing interest in seeing what an off-the-shelf version of DeepDream can do to their images, I am engaged in a relationship with this intelligence that is pushing each of us to develop and mature. And just as the efforts of my ingenious engineering colleagues have granted DeepDream superpowers, so has this modified version of open-source software unlocked a superpower for me: I can now create compelling works of art with a complexity and richness that I could never execute fully on my own (zoom in on this Dreamscape). Interestingly, accepting this superpower has required giving up a degree of control, in that I can’t really tell the software exactly what to do and, in fact, I honestly don’t even fully understand how or why it’s doing what it’s doing. This is a bargain I believe many of us will have to make in our work, and even in daily life, as artificial intelligence and deep-learning systems continue their rapid advancement. But to me this is a fairly optimistic story, because there is no sense in which the computer is trying to replace me, thwart my intentions, or suppress my vision. After all, it has no innate desire to create art, nor any ability to discern which parameter settings are most aesthetically pleasing to other humans. As one AI researcher at Google stated, “It’s not about what a machine ‘knows’ or ‘understands’ but what it ‘does…’ ” It’s just a tool, albeit a very powerful tool that is somewhat beyond our comprehension. Ultimately, however, I still make the decisions as to how to steer it and what to keep or discard.

FUTURE:

Moving forward in this experiment, I continue to collaborate with my original engineering team on further modifications to DeepDream, both to refine its results and to integrate alternative neural-network training data that will widen the gamut of visual possibilities available to me. I am also actively collaborating with a second deep-learning research team on integrating additional AI-generated effects, exploring different avenues for future artworks. Finally, I have begun investigating virtual-reality applications of my Dreamscapes with private and public research labs; preliminary results indicate this will be a fruitful direction.