MAMI Lectures, Part 4

Previously on MAMI Lectures: Part 1. Part 2. Part 3.

This is it. The last post in this series. Stay tuned for… next year?

Blaise Aguera y Arcas gave us a whirlwind tour of the history of brain mapping (including beautiful new visualizations of neurons) and the magic formula behind neural-net-based image generation.

Fernanda Viegas showed us how visualizations of complex systems can help us understand AI and weather, and otherwise work with non-human entities.

Valorie Salimpoor presented her research on why music gives us chills, including neural images of anticipation and reward, and specific musical techniques (composers: take note!).

More music! Elisabeth Margulis discussed the power of repetition in music, and the heightened emotional response that comes with familiarity.

We finished off the day with demos of machine-generated art.

Sheldon Brown took us deep into the uncanny valley of symbiotic human-machine imagination, where familiar landscapes behave in strange ways.

Ross Goodwin showed us how he co-writes with machines, generating (and reading) an only-recently-possible poetry.

And Ian Cheng walked us through simulated worlds he creates with simple rules, inspired by, among other things, the Jaynesian theory of mind.

We capped the day with a stimulating panel discussion and insights from Sageev Oore, Caroline Pantofaru, Timothy Morton, and Martin Wattenberg, hosted by Greg Corrado.

Thanks for staying with us. This is just the beginning. For more, follow us on Medium and Twitter.