The Coming Age of Creative AI: From Roboadvisors to Roboartists
By Alexey Sokolin
Something went very wrong with one of Google’s neural networks. It was designed for a simple task: identify dogs in photos. But a curious developer reversed the algorithm, and it began to hallucinate dogs where there were none. The psychedelic images resembled those of Salvador Dalí and echoed across the internet under the shorthand “Deep Dream”.
Within a few months of this discovery, an academic paper repeated the same magic feat for famous painters. Data scientists built a set of robo-artists out of digital neuron clusters called convolutional neural networks. They used machine learning and artificial intelligence to reverse-engineer visual art resembling Picasso’s dancing lines, Van Gogh’s hypnotic brushstrokes and Edvard Munch’s emotional impact. We have taught robots how to make art by teaching them what makes an artistic style. And so “Deep Style” was born.
Style transfer illustration from “A Neural Algorithm of Artistic Style” by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Source: http://arxiv.org/pdf/1508.06576v1.pdf
What should we make of our creative automatons? In human affairs, children of successful lawyers and accountants often have the freedom to become creators, liberated from monetary constraints and able to dance, paint and make music. This time, our software progeny are transcending their humble beginnings. They just might become humanity’s greatest artists, amplifying and robotizing creativity.
The computer revolution has catalyzed tremendous automation, first in physical labor in places like factories, and increasingly now in intellectual labor, from legal discovery to roboadvisors. As the Marc Andreessen saying goes, “Software is eating the world”. A recent McKinsey study projects that 45% of all office work will be automated in the near future. Software processes our paperwork, searches for results, takes payments, directs cars, and talks with other systems to create lattices of efficiency. But our programs to date have been deeply analytical, following prescribed top-down rules to implement productivity tasks.
That left-brained set of rigid algorithms is about to meet its right-brained counterpart. The key is that this new sort of software isn’t applying a fixed set of human-designed rules to distort an image. Rather, it uses sophisticated math to process visual information, extract distinctive patterns, and recursively learn what makes a particular artistic style unique. Then it can take off from there. Think of it as statistical intuition, not unlike our own instincts and gut impulses. Mobile apps like Dreamscope (free, amazing, on iOS/Android) let a user apply this machine-learned creativity to a photo on command. Dreamscope has indexed dozens of creative algorithms (a robot for each painter) and enables a user to “seed” their own machine artist. How long until every creative human endeavor has been patterned in this way?
Style transfer technology as deployed on mobile devices on Dreamscope. Original image source by Christopher Michel.
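Under the hood, “Deep Style” compares statistics of images rather than raw pixels: the Gatys paper measures style as the correlations between a network’s feature channels, known as a Gram matrix. Here is a toy, framework-free sketch of that idea; the “feature maps” below are hand-made stand-ins, not activations from a real network.

```python
def gram_matrix(features):
    """Correlations between feature channels: G[i][j] = <F_i, F_j>.
    `features` is a list of channels, each a flat list of activations."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices:
    small when the images share the same texture statistics."""
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    n = len(ga) * len(ga[0]) if ga else 1
    return sum((x - y) ** 2 for ra, rb in zip(ga, gb)
               for x, y in zip(ra, rb)) / n

# Two tiny "images", each described by 2 feature channels of 4 activations.
starry = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
photo  = [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]]
print(style_loss(starry, photo))   # prints 1.0: styles differ
print(style_loss(starry, starry))  # prints 0.0: identical style statistics
```

In the full technique, the photo’s pixels are iteratively adjusted to shrink this style loss (while a separate content loss keeps the scene recognizable), which is what produces the Van Gogh-ified photographs above.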
Already, we find machine learning applications in the visual arts, music and writing. The programs are young and often spit out creations that seem somehow wrong, though we cannot put a finger on why. These machine artbots hail from the wrong side of the Uncanny Valley, a category of things that attempt to mimic humanity but in their artifice create unease.
And yet, we have never been closer to a room of monkeys typing out the collected works of Shakespeare. Just ask a robot that has ingested all of Shakespeare’s works and is trained to generate soulful prose on command, ad infinitum. Or turn on machine-Bach, mathematically generating emotional sound vibrations that, some day, may be indistinguishable from the real thing.
Neural network generated text samples trained on Shakespeare. Source: Andrej Karpathy
Beware, artists. Automation will impact not only the analytical industries, but also those that require creativity, originality and intuition, domains once believed to be uniquely human. If you are an artist, musician, or writer, artificial intelligence is about to present challenges and opportunities that rival those photography posed to painters in the 1800s. What now seems like a crude, hollow reproduction of a mystical human endeavor could eventually be responsible for the bulk of all art, initiated by humans but outsourced to machines.
There are many objections to the idea that true art can even be made by software. Isn’t the human always the root of the process? Isn’t the artist’s impulse to create profoundly human? Isn’t the point of art to in some way symbolize and instantiate the unique point of view of the human artist in order to evoke a uniquely human response in the viewer or listener? Aren’t our cultural values — a result of the arbitrary and arduous evolution of a mammalian body — the only lens capable of authoring and appreciating art, as such? So what will be the message or set of values implicit in machine-generated art? These questions are fair, but in my opinion only partially relevant.
As the shift toward the machine continues, there will be less and less room for human execution of what qualified as creative endeavors in the past. Instead of composing music, we will create randomization algorithms that combine software-composers on the fly, reacting to our quantified moods and surroundings. Instead of learning to paint, aspiring artists will be better served learning to code programs that render creative outcomes in simulated virtual reality environments.
The raw materials for this revolution are in place. Wearable sensors will make it possible to create an essentially infinite data set of the images, sounds and text that humans exchange every day. Google Photos and other cognitive computing tools are processing millions of such inputs daily. Our culture can increasingly be mapped, studied and statistically modeled. Hard rules about aesthetics are not necessary when we can just point our learning machines to the recorded history of what humans believe is beautiful and meaningful. The Golden Ratio is timeless.
What will be the meaning of such “art”? Critics of the future will wrestle with such questions.
We can also simulate evolution and reward the most creative software with fitness and something resembling life itself. In 2013, engineers at the Cornell Creative Machines Lab used evolutionary programming to breed simulated soft robots, built from 3D cubes of virtual muscle, that learned how to walk: the randomized critters that ambled fastest were allowed to produce digital offspring, which moved faster with each generation.
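The loop behind that experiment — evaluate, select, mutate, repeat — can be sketched in a few lines. Everything below (the toy “walking speed” fitness and all parameter names) is my own stand-in, not the Cornell lab’s code.

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=100, seed=0):
    """Minimal evolutionary loop: the fittest genomes survive each
    generation and produce mutated offspring."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # fastest "walkers" first
        parents = pop[:pop_size // 2]           # selection
        children = [[g + rng.gauss(0, 0.1) for g in p]  # mutation
                    for p in parents]
        pop = parents + children                # elitism: parents survive
    return max(pop, key=fitness)

# Toy fitness: a "critter" walks farther the closer its genes are to 1.0.
def speed(genome):
    return -sum((g - 1.0) ** 2 for g in genome)

best = evolve(speed)
print(round(-speed(best), 3))  # near 0 means the genes converged toward 1.0
```

The Cornell work used the same skeleton, but each genome encoded a soft robot’s body and the fitness function was distance traveled in a physics simulator.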
Alexey (Lex) Sokolin (@lexsokolin) is an entrepreneur building the next generation of financial services technology at Vanare (@vanareplatform www.vanare.com). He previously founded roboadvisor NestEgg Wealth and holds a JD/MBA from Columbia University. Lex is also a digital media artist (http://urban-aesthete.tumblr.com/), and is fascinated by recurrent neural networks and creative AI.