AI + Art workshop at Berkman Klein Center for Internet & Society

If you went to Harvard’s Berkman Klein Center for Internet & Society on Saturday afternoon, July 21, you would have been welcomed by Sarah Newman and her assistant Nikhil Dharmaraj. Newman was holding a workshop on AI & Art, her focus of research at metaLAB at Harvard. She called the workshop “Shiny Objects,” suggesting that some of the hoopla about Artificial Intelligence could be just buzz, and also that the discourse around the field could be distracting us from its deeper issues, and from its potential threats.

At 12:30, sixteen participants gathered around a seminar table on the second floor. A motley mix of the nerdy and hip, some were Berkman Klein fellows and summer interns, some were doctoral candidates, others were visiting scholars from abroad, and a few were undergraduates. Their areas of interest were mostly in the digital humanities and computer science, but not everyone seemed to be very familiar with AI — or art. Yet they approached the workshop with an open, curious, and excited energy.

Tall, thirty-six, and with a style more California than Cambridge, Newman began by talking about art with a capital A. That is, Fine Art, which is not what we’d be doing. Instead, she explained, we’d be undertaking art with a lowercase “a”: art as a medium of research. “Art,” she said, “is an interesting way to explore a question space,” meaning a way to ask questions that don’t have immediate answers. With degrees in philosophy and imaging arts, Newman is comfortable in such question spaces. But before she guided us into the ambiguous relationship of AI and art, she asked, “What do we even mean by AI?”

The definition of AI is evolving along with the technology. In the broadest sense, AI is a system that replicates human intelligence. But we don’t really consider a calculator to be AI anymore, so a still broad but narrower definition is a system with autonomous learning and self-improving capabilities. Current examples of AI in the world include self-driving cars, self-regulating home systems like Nest, Google Duplex, Apple’s Siri, and Amazon’s Alexa. Newman asked us what AI might look like in the near future, and participants brought up scenarios recognizable from Black Mirror, Westworld, and Her. Art, it seems, is already creating the future of AI by manifesting cultural imagination.

There is a more direct relationship between AI and visual art, however, with artists such as Hannah Davis and Alex Reben using AI to actually create art. (Davis was featured in a past metaLAB exhibition, and Reben and Newman were featured on an AI Creativity Panel at SXSW, with colleagues from MIT and BKC’s Cyberlaw Clinic.) But using AI to create art wasn’t what we were doing at the workshop. Instead, Newman wanted us to use the artistic creation process to explore the social and cultural impact AI might have on the human experience. What does it mean to be human if most of our interactions are with non-human intelligences? What does it mean to be able to interact with someone after they’ve died, through their digital personality?

Newman’s approach is to use art to make ideas about AI accessible to the broad public. She declared that she’s “against jargon” and that she wants to broaden work on AI beyond computer science; her goal is to transform people from passive recipients of information into participants empowered to have an opinion about what happens to our culture, even if we don’t know code or run a tech company. To facilitate this, she asked us to write two questions about what we find either hopeful or scary about AI, using the forms “How might we …?” and “What would it look like if …?”

We wrote our questions during a vegetarian Indian lunch, and then shared them with the group. Newman then sorted our questions into themes. AI rights/liberties and human ethics was the most popular theme, followed by issues of human agency, privacy, social inequality, power/security, and trust. Struggling to find something hopeful in the situation, I wondered how AI could improve global health, and so once we broke into groups, I partnered with Ashveena Gajeelee, who asked the same question.

After working in several government ministries in her home country of Mauritius, Gajeelee studied public policy at the Harvard Kennedy School of Government and is now a GAiA (Global Access in Action) fellow at the Berkman Klein Center. We initially discussed how AI robot doctors might be sent into quarantined areas during disease outbreaks, sparing human health workers from exposure. But Gajeelee was most interested in the possibility of using AI to prevent hereditary diseases through augmentative preventative scans that monitor how a person’s lifestyle habits affect their genetic dispositions.

Ashveena Gajeelee and “The Code of Life”

Next came studio time. Newman asked participants to select from art supplies she had spread on the table and create a work of art that expressed our question. “Keep it simple,” Newman advised, “keep it small.” Gajeelee and I went to the community room of the Berkman Klein Center, where we used pipe cleaners to make double helices. We wrapped them around a test tube, which we filled with little blue lights, and tied the helices into the shape of a tree. Gajeelee glued the tree to a wooden base and added stickers for the letters A, T, C, and G. We called the piece “The Code of Life.” While working on it, Gajeelee said, “This is so much fun. We should do this every week.”

Finally, the participants came back to the conference room and shared the art pieces they made, each giving a one-minute presentation. Titles included “The Color of Code,” which aimed for transparency in how algorithms recommend songs to you, visualizing data by color-coding music; “Palindrome,” where humans program AI systems but AI systems also program humans; as well as “Plato’s Cyber Cave,” “Should I Trust My AI?”, and “Try Again,” which addressed the power dynamic between humans and the AI we create.

By 3:30, everyone seemed just as excited as when the workshop began, and also pleased with the art they and their fellow participants created. Each group allowed their art pieces to be displayed in the BKC “mini-gallery” over the following week, which Newman half-seriously suggested was a good way of becoming an exhibiting artist.

Seriously though, Newman’s workshop was a genuinely novel way to engage with the AI conversation. It showed how metaLAB is shaping the cultural discourse by building bridges between previously unconnected fields, illuminating new question spaces and surfacing emerging problems that require creative, interdisciplinary solutions.

—Randy Rosenthal


Originally published at medium.com on July 25, 2018.

Berkman Klein Center Collection

Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)

Written by metaLAB (at) Harvard

Experimental research and knowledge design lab exploring intersections between technology and the arts, humanities, society, and the natural world.

