Music, Art & Machine Intelligence 2016 Conference Proceedings

Kenric Allado-McDowell
Artists + Machine Intelligence
5 min read · Jun 28, 2016
Conference participants asking questions about the role of technology and machine intelligence in the practice of making art.

On June 1st, Google’s AMI and Magenta groups jointly hosted a conference on MI/ML and creative practice, called Music, Art & Machine Intelligence. The roughly 80 attendees and 29 presenters represented a broad range of perspectives on music, art, and machine intelligence, as well as neuroscience and philosophy. Each presentation lasted only ten minutes, but the variety of disciplines, the art and music demos, and regular breaks in the San Francisco sun and air kept brains and bodies stimulated.

While “islands” of research do exist, interdisciplinary efforts are bolstering an image of creative research and researched creativity. Generative art (whether genetic, rule-based, or hallucinated) fills the humid equatorial regions. The great northern tundra is home to ever more efficient and ingenious MI techniques, while nomadic neuroscientists and non-denominational wanderers traverse the plains between.

Computer scientists, artists, neuroscientists and psychologists shared their latest research into creativity and the brain.

Look in the knapsack of any member of these tribes and you’ll see a squirming plethora of generated entities. Google’s Rahul Sukthankar alone was found holding three varieties. He presented neural-net-generated fonts, mathematical proofs, and, less fantastically, image compression. Mario Klingemann, an independent artist currently in residence with Google’s Cultural Institute, showed surreal generated collages. His rule-based systems jabber endlessly, while he plucks the tastiest fruits from his strange garden. Jason Yosinski took an alternative approach to surrealism, showing us how computers see lions in fields of static where none exist, and generally prodding around in unseen regions of image recognition systems.
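Yosinski’s lions-in-static come from optimizing the image itself: start from noise and nudge pixels by gradient ascent until the classifier grows confident it sees the chosen class. Below is a minimal sketch of that general idea, assuming a stock pretrained torchvision classifier; the class index, step count, and learning rate are illustrative, not the settings from his work.

```python
import torch
from torchvision import models

# Any pretrained ImageNet classifier will do for the demonstration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

target_class = 291   # ImageNet's "lion" index (illustrative choice)
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # a field of static
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    # Climb the gradient of the lion score: the pixels stay noise-like,
    # but the network becomes certain a lion is present.
    (-logits[0, target_class]).backward()
    optimizer.step()

print("lion confidence:", torch.softmax(model(img), 1)[0, target_class].item())
```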

NYU’s Emily Denton presented adversarial techniques for generating, among other things, bedrooms. Other new techniques were showcased: Aaron Courville of the University of Montreal discussed Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), and Google’s Samy Bengio presented a new approach to training neural networks called Reinforced Maximum Likelihood.
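The “adversarial” in GAN names a two-player training loop: a generator maps random noise to samples while a discriminator learns to tell those samples from real data, and each network improves against the other. Here is a toy sketch of that loop, with tiny fully connected networks and 2-D ring data standing in for the convolutional bedroom models Denton described.

```python
import torch
import torch.nn as nn

# Placeholder networks: real image GANs use convolutional architectures.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in "real" data: points on a ring. The bedroom models learn photographs instead.
    angles = torch.rand(n, 1) * 6.2832
    return torch.cat([angles.cos(), angles.sin()], dim=1)

for step in range(2000):
    # Discriminator: push real samples toward 1, generated samples toward 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into labeling its samples as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The same adversarial objective scales up to bedrooms once both networks are convolutional and the data loader serves photographs.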

A cluster of talks on new generative tools took things to the next level of interactivity. Rebecca Fiebrink of Goldsmiths treated us to some Mego-worthy industrial noise, controlled by her highly playable Wekinator software, a combination of Leap Motion tracking and on-the-fly perceptron manipulation that enables access to the subtleties of embodied knowledge that acoustic musicians take for granted. Hannah Davis of NYU crossed the already-crossed streams with her TransProse project, which translates literature into music through emotion mapping. By reading the emotional temperature of a text with NLP, Davis created emotional timelines to which music could then be algorithmically composed. Her early experiments sounded very mid-20th-century and atonal, while later explorations became impressionistic.
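At its core, that pipeline is a mapping from emotion scores to musical parameters: score each chunk of a novel for emotions, then let those scores pick mode, register, and note density over time. The sketch below illustrates the shape of such a mapping with a toy word-counting scorer and hand-picked scales; none of it is the actual TransProse code, which does the text analysis with proper NLP tools.

```python
import random

# Toy emotion lexicon; the real system scores emotion densities with NLP.
LEXICON = {"dark": "sadness", "grief": "sadness", "bright": "joy",
           "laugh": "joy", "blood": "fear", "storm": "fear"}

def score_chunk(words):
    """Count emotion-word hits in one chunk of text (a stand-in for real scoring)."""
    counts = {"joy": 0, "sadness": 0, "fear": 0}
    for w in words:
        emotion = LEXICON.get(w.lower())
        if emotion:
            counts[emotion] += 1
    return counts

def chunk_to_notes(counts, scale_major=(60, 62, 64, 65, 67, 69, 71),
                   scale_minor=(60, 62, 63, 65, 67, 68, 70)):
    """Map an emotion profile to a short phrase: mode, density, and register."""
    minor = counts["sadness"] + counts["fear"] > counts["joy"]
    scale = scale_minor if minor else scale_major
    n_notes = 4 + 2 * counts["fear"]          # more fear -> busier phrase
    octave_shift = -12 if counts["sadness"] > counts["joy"] else 0
    return [random.choice(scale) + octave_shift for _ in range(n_notes)]

text = "The storm broke and dark grief filled the house until bright laughter returned"
words = text.split()
chunks = [words[i:i + 5] for i in range(0, len(words), 5)]
melody = [note for c in chunks for note in chunk_to_notes(score_chunk(c))]
print(melody)   # MIDI note numbers, one phrase per chunk of text
```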

Mike Tyka and Chris Olah were on hand to situate us art-historically and geometrically, respectively. Tyka observed the accelerated nature of kitsch and the absorption of novelty by art viewers. Olah demystified neural nets so we could understand them as simple high-dimensional manipulations of geometry and not slime-breathing multi-eyed dog-monsters.
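Olah’s point is easy to make concrete: a network layer is an affine map (rotate, stretch, shift the space) followed by a pointwise nonlinearity that bends it, and a deep net is a stack of such moves in very high dimensions. A few lines of NumPy, with arbitrary dimensions, show the whole mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer": rotate/stretch/shift the space (affine map), then bend it (nonlinearity).
W, b = rng.normal(size=(512, 784)), rng.normal(size=512)
layer = lambda x: np.maximum(0.0, W @ x + b)   # ReLU folds away each coordinate's negative half

x = rng.normal(size=784)   # a point in 784-dimensional space (e.g. a 28x28 image)
h = layer(x)               # the same point, viewed in a new, warped coordinate system

# A deep net is nothing more than a stack of these geometric moves.
print(x.shape, "->", h.shape)
```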

Artist Tivon Rice capped the session off with exquisite corpses of another nature, namely drone photogrammetry of buildings under construction in Seattle, a hotbed of urban change like many US cities right now. His project with AMI pairs those images with neural-storyteller text generated from them by a model trained on corpora of city-planning submissions and public responses. His show is currently on view at Threshold Gallery at Mithun Architecture in Seattle.

In the segment titled Creating with Machines, Gil Weinberg of Georgia Tech showed us his regimen for training robots to listen to and generate music. Musical augmentation is just around the corner: Weinberg is working on a drum-centric prosthetic arm that can follow along (and fill in) using MI. We saw examples ranging from a simple swing ride pattern to black-metal-ready 20 Hz snare blasts, bringing the restoration of human ability into superhuman territory.

Columbia’s Hod Lipson showed us physical works painted by a robotic brush. The artist in question reproduces existing works and generates new ones using MI. The (human) artist Ian Cheng brought a whole host of entities into the mix, some of whom were controlled by voices from the bicameral beyond. His video pieces are real-time simulations of small-scale societies or, in one case, a spongy animal-vegetable hybrid. These simulations produce unexpected and unpredictable behavior in beautifully emergent and entrancing ways.

Resident AMI artist Memo Akten presented MI-based lighting control that responds to the motions of dancers. He also showed a gestural interface for music, intended to provide the feeling of performing classical piano without stressful years of training at the hands of a brutal Russian master. Expect to see more from Memo as he completes his residency with the AMI team.

If you missed it at Moogfest, you could have caught it at MAMI: Magenta’s Adam Roberts showed off the group’s latest music sequence generation. Roberts played a musical phrase on an MI-enabled illustration of a Minimoog synth, and the system improvised on the theme. It may not have been ’Trane, but it was well-trained and reliably melodic.
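Under the hood this is sequence continuation: feed the primer phrase into a trained note-sequence model, then repeatedly sample the next note from its predicted distribution and feed the sample back in. The sketch below shows that sampling loop with a small, untrained GRU standing in for Magenta’s trained models; it is not the Magenta API, and with real weights the continuations become melodic rather than random.

```python
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitches as a toy event vocabulary

class MelodyRNN(nn.Module):
    """A small stand-in for a trained note-sequence model."""
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 64)
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, notes, state=None):
        h, state = self.rnn(self.embed(notes), state)
        return self.head(h), state

def continue_phrase(model, primer, n_new=16, temperature=1.0):
    """Feed in the primer, then sample one note at a time and feed it back."""
    notes = torch.tensor([primer])
    logits, state = model(notes)          # warm up the hidden state on the primer
    out = list(primer)
    note = notes[:, -1:]
    for _ in range(n_new):
        logits, state = model(note, state)
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        note = torch.multinomial(probs, 1)  # sample the next note
        out.append(int(note))
    return out

model = MelodyRNN()   # untrained here; the demo used trained weights
print(continue_phrase(model, primer=[60, 62, 64, 65, 67]))
```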

Then the aforementioned neuroscience nomads showed up to blow our minds. Google’s Blaise Aguera y Arcas brought it all back to the brain, with Ramón y Cajal’s early neural images in their eerily circuitous formations, Alan Turing’s MI prescience, and the interrelated nature of perception and creation. Valorie Salimpoor showed us how music gives us chills and how composers manipulate our cognitive reward system and our physiological response to recognizing patterns. Elizabeth Margulis showed us the musical and emotional power of repetition, with a hypnotic example. Musical tension and release really mess with dopamine. How long can you take the krautrock?

AMI artists Sheldon Brown and Ross Goodwin landed us squarely in the uncanny, with Brown’s Shepard-tone-enhanced, spiraling spatial installations wreaking havoc on our perceptual grounding, and Goodwin’s LSTM-generated poetry and interactive MI-writing systems destabilizing our linguistic grasp on reality.

The day ended on a collective philosophical note in a panel discussion with Sageev Oore (Saint Mary’s University, Nova Scotia), Timothy Morton (Rice University), and Google’s Caroline Pantofaru, Martin Wattenberg, Blaise Aguera y Arcas, and Douglas Eck.

The field of MI-enhanced creativity is wild, and in many ways, unexplored. It was clear at the MAMI conference that a multidisciplinary approach is not only fruitful and necessary but also entertaining and thought-provoking. Perception and creation are indeed two ends of one kaleidoscope, and the multi-sensory ways of knowing that art and music provide are essential in deepening our investigations of creativity, technology, and humanity.
