MAMI Lectures, Part 1

Kenric Allado-McDowell
Artists + Machine Intelligence
Jun 29, 2016


This is the first of several posts in which we’ll share lectures from the Music, Art and Machine Intelligence conference that took place in San Francisco, CA on June 1, 2016.

The discovery of the generative capability of neural nets, at least in the case of DeepDream, was largely driven by the desire to know just what's going on in there. Whether you start with tensors or with neurotransmitters, when you probe a deeply networked structure the boundary between exploration and creation blurs, and that blurring is itself generative. The first six lectures of the MAMI conference explore the creative side of investigation and the investigative side of creation.

Google’s Samy Bengio presents a new approach to training neural networks called Reinforced Maximum Likelihood.

Aaron Courville of the University of Montreal showcases two other new techniques: Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs).
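For readers new to the adversarial idea, here is a minimal sketch of a GAN's two-player training loop on a one-dimensional toy problem. Everything here is illustrative, not from the lecture: the "generator" is just a learnable offset applied to noise, and the "discriminator" is a 1-D logistic classifier.

```python
# Toy GAN in plain NumPy: real data ~ N(4, 0.5); the generator shifts unit
# Gaussian noise by a learnable offset g_mu; the discriminator is
# D(x) = sigmoid(w*x + b). The two are trained in alternation.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

g_mu = 0.0        # generator parameter: where it places its samples
w, b = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 0.5, size=64)
    fake = rng.normal(0.0, 1.0, size=64) + g_mu

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: move samples so the discriminator calls them real
    # (gradient of -log D(fake) with respect to g_mu).
    d_fake = sigmoid(w * fake + b)
    g_mu -= lr * np.mean((d_fake - 1) * w)

# After training, g_mu should sit near the real data's mean of 4.0.
```

The same push-and-pull, scaled up to deep networks over images, is what produces the generated bedrooms and collages discussed below.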

Google’s Rahul Sukthankar presents neural-net-generated fonts, mathematical proofs, and new image compression techniques.

NYU’s Emily Denton explains adversarial techniques for generating, among other things, bedrooms.

Mario Klingemann, an independent artist currently in residence with Google’s Cultural Institute, shows his surreal generated collages.

And Jason Yosinski takes an alternative approach to surrealism, showing us how computers see lions in fields of static where none exist, and generally prodding around in unseen regions of image recognition systems.
