MAMI Lectures, Part 1
This is the first of several posts in which we’ll share lectures from the Music, Art and Machine Intelligence conference that took place in San Francisco, CA on June 1, 2016.
The discovery of the generative capability of neural nets, at least in the case of DeepDream, was largely driven by the desire to know just what’s going on in there. Whether you start with tensors or with transmitters, probing a deeply networked structure blurs the boundary between exploration and creation, and that blurriness is itself generative. The first six lectures of the MAMI conference explore the creative side of investigation and the investigative side of creation.
Google’s Samy Bengio presents a new approach to training neural networks called Reward Augmented Maximum Likelihood.
Google’s Rahul Sukthankar presents neural-net-generated fonts, mathematical proofs, and new image compression techniques.
And Jason Yosinski takes an alternative approach to surrealism, showing us how computers see lions in fields of static where none exist, and generally prodding around in the unseen regions of image recognition systems.