I chose to postpone the AMI update this month so that I could share important news from Google I/O, our annual developer conference. This year’s conference hosted the official launch of Artists + Machine Intelligence Grants, a collaboration between Google Arts & Culture and Google AI.
As the field of art with machine intelligence has matured, we’ve done our best to explore and co-create with the emerging community. As this community grows, we feel it necessary to make our support accessible to practitioners all over the world, through an open call for proposals and an annual cycle of grant-giving…
Have you seen the flowers? Spring is here to renew Northern Hemisphereans and relieve us all of sleepy retrogrades. It’s time for action! Exhibition! Publication! Conversation!
All these nouns are converging for Artists + Machine Intelligence at Printed Matter’s 2019 LA Art Book Fair (April 12–14 at MOCA in Los Angeles, CA). We’re working with Anteism Books to host a group exhibition of AMI-supported works and to publish a brand new monograph from Casey Reas.
The exhibition is called New Technologies, New Visions. It explores new forms of visualization made possible by ML, from portraits of imaginary subjects…
A monthly newsletter about Artists + Machine Intelligence
As I try to keep pace with the shifting landscape of AI creativity and art-and-technology discourse, I find my attention is rewarded at whatever scale I choose to observe. From GitHub repos to auction houses, change, advancement, and complexification are everywhere.
Here are a few things that caught my brain as it melted in a sea of ML and media this month:
Last month, we witnessed the non-release of OpenAI’s GPT-2 language-generation model, a “deepfakes for text” that was deemed too dangerous to publish by its…
When we launched Artists + Machine Intelligence in February 2016 with an inaugural exhibition and benefit auction at San Francisco’s Gray Area, we imagined that the nascent field of machine-learning-enabled creation would eventually turn up in prominent places. But it was impossible to envision precisely the breadth and depth of influence that artists’ AI projects would soon come to have. What was once a niche field of computer science is now a flexible tool for artists and a subject of intense political, philosophical, and creative consideration.
In 2018, alongside…
This summer was alive with creative AI gatherings in the US and EU. I had a chance to sit down for a few conversations which you can listen to, watch, and read below.
I spoke with artist Ian Cheng and curator Troy Conrad Therrien at the Guggenheim Museum in NYC about the intersection of AI, art, and esoteric cultural movements.
At the Serpentine Miracle Marathon I had a conversation with Jason Louv about designing AI-human relationships inspired by wisdom traditions. Below is a recap of the Marathon by the inimitable Victoria Sin.
At Google Design’s SPAN conference I…
The American road novel meets machine learning
I’ve just returned from Ross Goodwin’s AI-assisted stab at the American literary road trip, a project called Wordcar that put AI on the highway to generate 200,000 words of machine poetry. It’s a classic trope with a 21st-century twist. But in our moment of tender and anxious global ecological crisis, the free-wheeling ride into the unknown mythologized by Jack Kerouac, Ken Kesey, and Hunter S. Thompson takes on a sinister shade. Those authors set out in search of freedom, masculinity, enlightenment, hedonism: 20th-century values currently under renovation. …
TensorFlow is a great way for students to get hands-on with machine learning, but provisioning, managing, and tracking instances can be cumbersome, and classrooms sharing a single instance quickly run up against CPU and memory constraints.
Our Google Cloud Platform friends recently launched a tool that should prove useful to educators teaching machine learning for creative applications: TensorFlow instance management with JupyterHub.
Google Container Engine allows admins to quickly provision Docker containers running unique TensorFlow environments for each student. With the included example, students can generate art with the DeepDream algorithm.
If you’re interested in teaching creative ML, head over to the solution guide for step-by-step instructions. Faculty and students can apply for a Google Cloud Platform education grant, so don’t miss the opportunity for free computation.
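To give a flavor of the per-student setup described above, here is a minimal JupyterHub configuration sketch. It assumes the Kubernetes-backed KubeSpawner; the image tag and resource limits are illustrative placeholders, not values from the solution guide.

```python
# jupyterhub_config.py -- a minimal, hypothetical sketch.
c = get_config()  # noqa: F821 (injected by JupyterHub when it loads this file)

# Spawn each student's notebook server as its own container,
# so every student gets an isolated TensorFlow environment.
c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"

# A stock TensorFlow image with Jupyter preinstalled (placeholder tag).
c.KubeSpawner.image = "tensorflow/tensorflow:latest-jupyter"

# Per-student resource caps, so one heavy DeepDream run
# can't starve the rest of the classroom.
c.KubeSpawner.cpu_limit = 2
c.KubeSpawner.mem_limit = "4G"
```

With a config like this, the hub hands each student a browser-based notebook backed by their own container, and the admin scales the cluster rather than babysitting individual instances.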
This is it. The last post in this series. Stay tuned for… next year?
Blaise Agüera y Arcas gave us a whirlwind tour of the history of brain mapping (including beautiful new visualizations of neurons) and the magic formula behind neural-net-based image generation.
Fernanda Viégas showed us how visualizations of complex systems can help us understand AI and weather, and otherwise work with non-human entities.
Valorie Salimpoor presented her research on why music gives us chills, including neural images of anticipation and reward, and specific musical techniques (composers…
This is the third of several posts in which we’ll share lectures from the Music, Art and Machine Intelligence conference that took place in San Francisco, CA on June 1, 2016. Previously: Part 1. Part 2.
Did you think that was it? That we couldn’t get more creative with MI?
Below, you’ll see a MI-augmented drumstick prosthetic by Gil Weinberg of Georgia Tech, robotic brushes by Columbia’s Hod Lipson, Memo Akten’s MI-based lighting control and gestural interface for music, and Magenta’s MI-generated compositions on a simulated Moog synth (presented at MAMI by Adam Roberts, but depicted here in its Moogfest incarnation).
In our last installment, we explored two conjoined activities: generating with neural nets and investigating their inner states. The lectures featured below introduce further modes of understanding machine learning.
Hannah Davis walked us through her TransProse project, which translates literature into music through MI-based emotion mapping.
Michael Tyka presented an outline of art history, observing the accelerated nature of kitsch and the absorption of novelty by art viewers.
Artist Tivon Rice capped the session off with drone photogrammetry of urban architecture, coupled with neural-storyteller text generated from these images and trained on corpora of city planning submissions and public responses.
Artists + Machine Intelligence program lead, Google Research