Putting the Art in Artificial Intelligence, Part One: Generative Design
As a member of a small team that presents and represents digital artists, I’ve watched artificial intelligence (AI) and machine learning (ML) gain significant ground in the five years since we began our San Francisco-based agency. This is the first of two articles looking at these phenomena as they relate to the making of art.
The dream of complex and uniquely human acts being carried out by machines has a long lineage. Transformation of inanimate material into something sentient is an idea found in the myth of Pygmalion, best known from Ovid but attested in versions centuries before Christ. The earliest known telling of this story is roughly coincident with an invention of Ktesibios of Alexandria, a water clock with a regulator that maintained a constant flow rate. This invention, in the words of Stuart Russell and Peter Norvig, “changed the definition of what an artifact could do. Previously, only living things could modify their behavior in response to changes in the environment.” A mechanical calculator for astronomical computation, the Antikythera mechanism, appeared at roughly the same time, circa 200 B.C.
The Antikythera mechanism can be seen as the beginning of a few thousand years of inquiry. A remarkable body of thought and research, drawing a line through the lives of Aristotle, of da Vinci, of Leibniz, Babbage, and Turing, is now reaching some of its goals. Machine learning is being used effectively for the early detection of cancer. Market analysis and data mining, autonomous vehicles, speech recognition, and household robotics have all proven fertile ground for AI and machine learning. The capacities inherent in neural nets, in deep learning, in processing huge amounts of data and recognizing patterns in that data — all this is profound, thrilling, even frightening, as we ponder where research and new applications could land us as understanding deepens, as further advances are made.
And the arts?
In the mid-twentieth century, artists were keenly interested in the capacities of “thinking machines,” as well. The algorithmic artists of that era hoped machines could take the burden of decision-making off their shoulders. Their idea was that, once set in motion, programs designed to allow a certain randomness to intrude would spin off inspired variations of the artist’s original design. This would leave artists free of the seemingly endless business of coming up with and executing inspired ideas of their own.
Through a series of developments every bit as remarkable as other advances in the AI field, this has come to pass. Early efforts of algorithmic artists Georg Nees, Frieder Nake, Manfred Mohr, and Vera Molnár were forerunners of brilliant, generative imagery by contemporary artists Christian Loclair, Refik Anadol, Can Buyukberber, and by the Istanbul-based graphic arts collective Ouchhh. TouchDesigner, Blender, Maya, and vvvv are programs enabling iterative modeling that have revolutionized the creation of visual art and creative graphics.
When machine learning is used to enable generative design, there is mimicry of nature’s evolutionary approach. Designers and engineers input design parameters. Once this is done, the software explores solutions, generating hundreds or thousands of design options. Examples of this process in use — with fashion, with architecture, with sculpture, with animation, and in the aerospace industry — have grown and multiplied in a way analogous to iterations spun off by the software itself.
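The loop described above — designers fix the parameters, software proposes and refines candidates by an evolutionary process — can be sketched in a few lines of code. This is a toy illustration with invented numbers and a made-up scoring function (a crude beam-stiffness proxy), not the workings of any particular commercial tool:

```python
import random

def score(width, height):
    """Reward stiffness (proportional to width * height**3) per unit material."""
    area = width * height          # material used
    stiffness = width * height**3  # crude beam-stiffness proxy
    return stiffness / area        # stiffness achieved per unit material

def generate_designs(n, rng):
    """Propose n random candidate cross-sections within the allowed range."""
    return [(rng.uniform(1, 10), rng.uniform(1, 10)) for _ in range(n)]

def evolve(generations=50, population=200, seed=42):
    """Keep the best designs each generation; mutate them to make the next."""
    rng = random.Random(seed)
    designs = generate_designs(population, rng)
    for _ in range(generations):
        designs.sort(key=lambda d: score(*d), reverse=True)
        parents = designs[: population // 10]  # keep the best 10%
        designs = [
            (max(1, min(10, w + rng.gauss(0, 0.5))),   # mutate, clamp to range
             max(1, min(10, h + rng.gauss(0, 0.5))))
            for w, h in (rng.choice(parents) for _ in range(population))
        ]
    return max(designs, key=lambda d: score(*d))

best = evolve()
print(best)
```

Under this toy scoring rule the population drifts toward tall, thin cross-sections — the point being that the engineer states the goal and the constraints, while the software does the exploring.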
Musical artists now employ iterative and generative processes, as well. Rather than choosing every note in every passage, responsibility for generation of material, within set parameters, is handed off to software. There are commercial products that promise (and deliver) machine intelligence applied to music composition. David Cope at UC Santa Cruz has been working with machine learning via a program he calls Emmy since 1985. In a concert at the University of Oregon, audience members were asked to decide which of three pieces written in the style of J.S. Bach was by a computer. Most got it wrong. “Bach is absolutely one of my favorite composers,” Dr. Steve Larson said to the New York Times (Larson’s music was also on the program). “My admiration for his music is deep and cosmic. That people could be duped by a computer program was very disconcerting.”
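The division of labor described above — the composer fixes the scale, register, and contour rules, the program chooses the notes — can be shown in miniature. This sketch is purely illustrative (the scale, weights, and phrase length are assumptions, not any artist’s actual method):

```python
import random

# MIDI pitches of one octave of C major, starting at middle C (60).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def generate_phrase(length=16, seed=7):
    """Random walk over a scale: the composer's parameters, the machine's notes."""
    rng = random.Random(seed)
    idx = rng.randrange(len(C_MAJOR))
    phrase = []
    for _ in range(length):
        phrase.append(C_MAJOR[idx])
        # Favor stepwise motion (a common melodic constraint) over leaps.
        step = rng.choices([-2, -1, 1, 2], weights=[1, 4, 4, 1])[0]
        idx = max(0, min(len(C_MAJOR) - 1, idx + step))
    return phrase

print(generate_phrase())  # a 16-note melody confined to the C-major scale
```

Every run stays inside the constraints the composer set, yet no two seeds produce the same melody — a small-scale version of the handoff the paragraph describes.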
As the promise of this technology is being realized, human beings are finding increasingly creative ways to tap it. Nick Collins and Andrew R. Brown point out, in an article on this topic: “There is a continuum based on the fuzziness and definition of the rules, and where the interventions of the human authors exist in the process.” The same may be said of the visual realm, where we see intervention at every step along the way, once AI, ML, and iterative processes are involved: human beings step in to redirect a process, to pull out a particularly juicy result, or to start out again in a new direction, iterating from a chosen sample set.
How much autonomy can computers be given? How open-ended can the results be? A bit of a sensation was created in 2017 when paintings were shown at Art Basel, created with a system from the Art and Artificial Intelligence Laboratory at Rutgers. Deep neural networks were taught to generate work. These paintings did not fit known artistic styles (Pointillism, Fauvism, Abstract Expressionism, etc.). The system was trained using 81,449 paintings in the publicly available WikiArt data set, and then given the assignment of making new art, using the data set as a basis from which to generate original creations.
The human response was interesting. When respondents were asked to rate how intentional, visually structured, communicative, and inspiring the images were, they “rated the images generated by [the computer] higher than those created by real artists.” (The phrase “real artist” may one day be looked on with amusement, as machine learning in the arts becomes more common. Collectors have already bought paintings that were created using ML).
More interesting yet is a phenomenon that has been remarked on frequently in recent years by those in the field, having to do with process and result. The complexity of computational processes, and the sheer scale of the number crunching these systems perform, precludes knowing how machines get to the results they deliver. We know what is fed in. We know what comes out. But we know only in the most approximate way how results are achieved. In this very particular way, machine learning is like our own human intelligence: powerful, and enigmatic.
Self-evidently, a short article on this topic must be cursory. But it would be irresponsible to ignore the distance we still have to travel in understanding and emulating basic aspects of intelligent behavior. Futurists like to refer to the singularity, a moment when computational resources will reach a superhuman level of performance (Vinge, 1993; Kurzweil, 2005). Yet, even with a computer of unlimited capacity, we still wouldn’t know how to achieve the brain’s level of intelligence. We’re a long way from understanding cognitive processes and how they work. There is almost no theory extant on how an individual (human) memory is stored. Computers that can truly understand us and hold conversations with us, when they do arrive, will require a remarkable advance.
What is undeniable is that computers are facilitating human ingenuity through what may be fairly described as collaboration with a new kind of intelligence. As described in Russell and Norvig’s “Artificial Intelligence,” it may be more important “to study the underlying principles of intelligence than to duplicate an exemplar. The quest for artificial flight succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics.” In similar fashion, we are advancing our capacities by recognizing that what we have built up from wire and silicon is — like Pygmalion’s clay sculpture come to life — something new, extraordinary, and not entirely knowable.
March 25, 2019
Link to Part Two:
Putting the Art in Artificial Intelligence: Rooms That Know You’re There
Author Clark Suprynowicz is a composer, and the founder of Future Fires, a “curation agency for brilliant, innovative artists from around the world working at the frontiers of the code-based arts.” futurefires.com
The Future Fires installation Resonance is on exhibit now at the Tech Museum of Innovation in San Jose. Resonance is an interactive, digital evocation of water. It was created in partnership with Yves Peitzner Labs (Munich), with Kling, Klang, Klong (Berlin), and was commissioned with the generous support of the Knight Foundation.