The art side of AI at transmediale + CTM

Lexachast by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel

Last week, Ars Electronica announced its 2017 theme as Artificial Intelligence — The Other I. The festival follows other established art institutions that have grappled with the topic over the past year, including the Tate, BIAN, transmediale and Science Gallery Dublin. While we muse over what Ars Electronica will show in September with its focus on the “cultural, psychological, philosophical and spiritual aspects” of artificial intelligence (AI), I wanted to look back at some recent exhibitions on the topic. This post will cover the AI-related projects at transmediale + CTM. You can also read my accounts of AI art in 2016 at Retune and NIPS.

The 30th edition of transmediale took place between 2nd February and 5th March 2017 at Haus der Kulturen der Welt in Berlin. Titled ever elusive, the digital art and culture festival explored the increasingly blurred boundaries between nature, humans and technology. CTM, the sister music festival starting a week earlier, probed the strategies used to unleash and harness emotion through music. Alongside the two festivals, a plethora of independently organised events, exhibitions and performances was held across the city as part of the fringe vorspiel programme. Below are the projects I found most memorable involving AI on a conceptual or technical level.

Harm van den Dorpel: Lexachast + Death Imitates Language

Lexachast by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel

transmediale opened with the audio-visual performance Lexachast, a collaboration in every sense of the word — between the two festivals transmediale and CTM, between the artists Amnesia Scanner, Bill Kouligas and Harm van den Dorpel, between human and machine, visuals and sound. Harm van den Dorpel’s algorithms filtered random NSFW imagery from across the internet, generating graphic live-streamed visuals that faded into each other. The filtering was done with the open_NSFW classification model and word2vec tag-cloud analysis, with the artist live-curating the output during the performance. The visuals were accompanied by a distorted soundtrack that draws on our strained relationship with the internet’s onslaught of unthinkable material. You can view the online version of Lexachast here.
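To give a rough sense of how such a pipeline might fit together, here is a hypothetical Python sketch — not the artists’ actual code. The functions `nsfw_score` and `tag_similarity` are stand-ins for the real open_NSFW model (a Caffe ConvNet) and the word2vec tag-cloud analysis; only the filtering logic itself is illustrated.

```python
import random

def nsfw_score(image_url):
    """Stand-in for the open_NSFW classifier (returns a 0..1 score).
    Hypothetical: the real model is a pretrained Caffe ConvNet."""
    return random.random()

def tag_similarity(tags, theme_words):
    """Stand-in for the word2vec tag-cloud analysis: here, simply the
    fraction of an image's tags overlapping a curated vocabulary."""
    if not tags:
        return 0.0
    return len(set(tags) & set(theme_words)) / len(tags)

def filter_stream(stream, theme_words, nsfw_min=0.8, sim_min=0.3):
    """Keep only images that score high enough on both criteria."""
    for image_url, tags in stream:
        if (nsfw_score(image_url) >= nsfw_min
                and tag_similarity(tags, theme_words) >= sim_min):
            yield image_url
```

The thresholds and the similarity measure are invented for illustration; in the performance itself the final say lay with the artist, who curated the filtered stream live.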

Harm van den Dorpel: Death Imitates Language show at Neumeister Bar-am

During transmediale, the Neumeister Bar-am gallery presented a solo exhibition of Harm van den Dorpel’s Death Imitates Language. This series of works investigates how meaning develops in generative aesthetics using micro feedback and a genetic algorithm. A public website hosts speculative works generated from sequences of information inherited from parent artworks. These ‘genetic’ codes determine the elements present in a work and their constellation. Micro feedback given by the artist acts as a subjective (‘natural’) selection pressure that changes the population of works as time goes by. Alongside visitor statistics and simulated aging, this feedback leads to the genetic program mutating and (arguably) improving over time. Once the works reach their optimum state, they are turned into physical objects. Five works were exhibited at the gallery alongside an overview of the breeding process on a monitor.
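The breeding loop described above follows the shape of a textbook genetic algorithm. The sketch below is a hypothetical illustration of that shape, not van den Dorpel’s implementation: the gene encoding, the mutation rate and the `artist_rating` callback (standing in for the artist’s micro feedback) are all invented for the example.

```python
import random

GENE_POOL = "ABCDEFGH"  # hypothetical codes for visual elements

def spawn(length=8):
    """Create a random genome for a new speculative work."""
    return "".join(random.choice(GENE_POOL) for _ in range(length))

def crossover(a, b):
    """A child inherits sequences of information from two parents."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    """Randomly perturb some genes."""
    return "".join(random.choice(GENE_POOL) if random.random() < rate else g
                   for g in genome)

def evolve(population, artist_rating, survivors=4):
    """One generation: the artist's feedback acts as the fitness
    function; the best-rated works survive and breed the rest."""
    ranked = sorted(population, key=artist_rating, reverse=True)
    parents = ranked[:survivors]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(len(population) - survivors)]
    return parents + children
```

In the actual series the “fitness” signal also folds in visitor statistics and simulated aging, and the selection runs over months rather than loop iterations.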

Constant Dullaart: DullDream

Constant Dullaart: DullDream

Perhaps one of the most technically unusual applications of neural networks was Constant Dullaart’s DullDream (2017). Like DeepDream, it uses a convolutional neural network (ConvNet), but instead of intensifying patterns, DullDream does the opposite: it reduces the specific characteristics of the formal shapes found in the image. Users can upload their photographs to the DullDream website and have them returned with their features “dulled”, i.e. with their identifying individual characteristics removed. In this way, ConvNets are employed beyond their usual purpose of recognising faces and speech; the dulled images instead become harder to classify for anyone relying on pattern recognition.
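One way to picture the “dulling” idea — purely my own sketch, not Dullaart’s method — is in a ConvNet’s feature space: where DeepDream amplifies whatever activations a network finds, a dulling step would instead pull an image’s feature vector toward the average of many images, suppressing whatever makes it distinctive.

```python
import numpy as np

def dull(features, mean_features, strength=0.7):
    """Shrink a feature vector toward the dataset mean, reducing its
    distinguishing characteristics. At strength=1.0 every image
    collapses onto the same average representation."""
    return features + strength * (mean_features - features)

# toy demo with random stand-in embeddings (5 images, 128-D each)
feats = np.random.randn(5, 128)
mean = feats.mean(axis=0)
dulled = dull(feats, mean)
```

The real DullDream operates on the image itself rather than on an abstract embedding, but the effect described — removing identifying characteristics so pattern recognisers struggle — corresponds to this kind of regression toward the mean.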

Roman Lipski: Landschaften aus dem Netz (Landscapes from the Net)

Roman Lipski, Unfinished 1, 2016 — © Roman Lipski, Foto: Hans-Georg Gaul

While many of the artists mentioned here come from a new media or digital art background, Roman Lipski is an exception. The Berlin-based artist specialises in painting landscapes, his choice of strong colours and vivid contrasts emblematic of human interference with the natural world. Recently, Lipski decided to develop his practice further by collaborating with an artificial muse, a neural network developed by the YQP collective. Trained on a dataset of Lipski’s paintings, it generates digital pictures in a style similar to the artist’s, yet entirely new works in themselves. “Landscapes From The Net” presented the evolving results of Roman Lipski’s collaboration with AI.

Alan Warburton: Primitives

Alan Warburton: Primitives

Meanwhile, Alan Warburton’s Primitives, a three-channel video installation, plays at the intersection of entertainment, psychology and science using CGI “crowd simulation” software, a basic form of artificial intelligence and motion-capture data. Traditionally, crowd simulation has been used in Hollywood blockbusters to fill out background humans in cities, battlefields and stadiums, rendering crowds of human extras unnecessary. At the click of a button, you can access a sea of digital bodies to be tweaked according to your desired parameters. For Primitives, Warburton captured the movements of a single dancer, Anya Kravchenko, and transformed these into the motions of a crowd. Set to a choral soundtrack, the 10-minute film tries to bring out the human and individualistic elements of the crowd as it pushes the software to its limits. It seeks to understand what happens once you “liberate the digital crowd” and give it the freedom to play against experimental parameters.

Ben Bogart: Watching (Blade Runner)

Ben Bogart: Watching (Blade Runner)

Watching (Blade Runner) (2016) is the latest work from the Vancouver-based artist Ben Bogart, part of a series called Watching and Dreaming. Started in 2014, this series is an inquiry into how statistically oriented machine learning and computer vision algorithms make sense of the depictions of AI in popular cinema. The machines are able to recognize and eventually predict the structure of the films they watch, producing images that are equally based on the algorithms’ projection of an imaginary structure and the reality of the structure of the films themselves. Bogart seeks to question the nature of watching and the mechanisms that allow us to find patterns in the complex reality we observe.

Sascha Pohflepp: Recursion

Sascha Pohflepp: Recursion (2016)

Drawing on the recent trend of generating text with neural networks, Sascha Pohflepp’s Recursion (2016) engages the performance artist Erika Ostrander, who reads out a machine-generated text about humankind. The neural net was trained on texts that draw on everything from human biology and psychology to philosophy and pop culture, including works by Sigmund Freud, The Beatles and Brian Eno. It was then asked to generate a text starting with the word “human”, creating a feedback loop between us and the machine. Here, Pohflepp wonders whether — in the words of theorist Benjamin H. Bratton — “the real uncanny valley” could be one “in which we see ourselves through the eyes of an [artificial] other?”
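The seed-and-continue mechanics at work here can be shown with a much simpler model than Pohflepp’s neural net: a first-order Markov chain over words. This is a deliberate simplification for illustration — the corpus, the seed word and the model itself are stand-ins — but the principle of prompting a trained language model with “human” and letting it continue is the same.

```python
import random
from collections import defaultdict

def train(corpus_words):
    """Learn which words follow which in the training corpus."""
    follows = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, seed="human", length=12):
    """Continue from the seed word, as Recursion's network was asked
    to continue from 'human'."""
    words = [seed]
    while len(words) < length and follows[words[-1]]:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)
```

A character- or word-level recurrent network replaces the lookup table with learned probabilities, which is what lets it absorb Freud, The Beatles and Brian Eno into one continuous voice.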

Martin Backes: What do machines sing of?

Martin Backes: What do machines sing of? (Photo by Martin Backes)

The light-hearted installation What do machines sing of? by Martin Backes sees a fully automated machine sing the voiceless tunes of number-one ballads from the 90s, attempting to imitate the human sentiments appropriate to these emotionally loaded songs. Programmed in SuperCollider using machine-listening techniques, the machine sings five karaoke classics including Whitney Houston’s “I Will Always Love You”, Toni Braxton’s “Un-Break My Heart” and Celine Dion’s “My Heart Will Go On.” The endless stream of music, with each song performance rendered anew by the algorithm yet with minimal variation, questions how machines express emotion and whether the act of replicating words and sound replicates the emotion inherent in the original songs.

Nicolas Maigret, Maria Roszkowska: Predictive Art Bot

Nicolas Maigret, Maria Roszkowska: Predictive Art Bot

The Predictive Art Bot, the creation of Nicolas Maigret and Maria Roszkowska, is an algorithm that generates artwork concepts and prophesies about future art developments based on current discourse. The bot makes daily predictions on Twitter, expanding the limits of human imagination with new, non-human possibilities. For the transmediale exhibition, the concept that resonated most on Twitter was realised by the artists Jonathan Beilin & Magnus Pind Bjerre. Here, the traditional relationship between human and machine is subverted: rather than dictating what end product the machine should construct, the human is the one who executes the artwork concept generated by the bot. The new dystopian future of art? Who knows.
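A toy version of such a concept generator can be built by splicing discourse terms into a template. Everything below is hypothetical — the real bot derives its vocabulary from current discourse feeds rather than from hand-picked lists — but it shows how mechanical recombination can produce concepts no human would have proposed.

```python
import random

# hypothetical vocabulary; the actual bot mines its terms from
# ongoing (art-)discourse rather than fixed lists like these
MEDIA = ["a neural network", "a drone swarm", "a blockchain", "a chatbot"]
ACTIONS = ["reenacts", "sonifies", "curates", "erases"]
SUBJECTS = ["the refugee crisis", "planetary computation",
            "post-internet intimacy", "surveillance capitalism"]

def predict_artwork(rng=random):
    """Compose one daily 'prediction' by recombining discourse terms."""
    return f"{rng.choice(MEDIA)} {rng.choice(ACTIONS)} {rng.choice(SUBJECTS)}"
```

Posting one such string a day is trivial; the interesting move in the transmediale exhibition was handing the output back to humans to execute.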

Pinar Yoldas: Artificial Intelligence for Governance, the Kitty AI

Pinar Yoldas: Artificial Intelligence for Governance, the Kitty AI

Pinar Yoldas’ Artificial Intelligence for Governance, the Kitty AI (2016) may well be the dream of half the internet — a 12-minute video of a future ruled by a cat. The 3D-animated kitten represents a form of artificial intelligence that has taken over the world, ruling the megalopolis in the year 2039. Speaking from its future perspective to our present state, the cat details the impossibility of solving our current issues, including the refugee crisis and climate change. Pinar Yoldas adds a dash of entertainment to appease the cat fans: to listen to the talking kitten, you have to become one yourself by putting on headphones with glowing blue cat ears.

With this, my overview of the transmediale + CTM artists incorporating AI at a technical or conceptual level draws to a close. The recent surge of interest in the topic among both artists and institutions suggests that we will see more artistic experimentation probing all angles of AI, from the biases in training sets and the ethics of parameter optimization to the role of humans in the new machine realities. While some may consider this unnecessary hype around the technology, in my view it can only be a positive: it is crucial to add more diverse voices to the current discussions around the development and social impact of AI. This is exactly what art can help with.

If you’re working on an art project that explores AI in a technical or conceptual sense, please ping me on Twitter or comment below. I would love to learn more about it.