In Focus: Mario Klingemann

DANAE · Published in DANAE.IO
5 min read · Oct 5, 2018
Mario Klingemann, The Butcher’s Son, Artificial Intelligence, 2018, © Lumen Prize.

By Marie Chatel

“Francis Bacon as reimagined by AI,” notes Danielle Siembieda, deputy director of the digital art journal Leonardo, after the coder and artist Mario Klingemann won the Gold Award at the acclaimed annual Lumen Prize competition last week. The event revives the debate over the recognition of AI art. Earlier this year Christie’s announced the sale of its first artwork credited to machine learning, which, according to the auction house’s global head of prints and multiples, Richard Lloyd, was selected for its limited human intervention because it showed “the ‘purist’ form of creativity expressed by the machine.” But academics and visual artists push back against this reductive approach to the medium, claiming AI has more to offer. Mario Klingemann is a case in point: he has impressed the new media community with an exemplary creative process built around AI.

Mario Klingemann, © Albert Barqué-Duran.

A self-taught computer aficionado, Mario Klingemann always wanted to work with visuals and technology, despite an unusual background far from fine-art and computer-science schools. Perhaps that is how the artist progressively built his niche: through a curiosity for open-source code, alterations and modifications, and a learning-by-doing, experimental approach to the visual arts.

Klingemann first experimented with generative art, writing code that produces images, in the mid-1990s, when the term was barely in use. His work with AI started in 2015, when Google introduced DeepDream. The application, which could cover a picture such as Van Gogh’s Starry Night (1889) with puppy eyes of different sizes, played a significant role in popularising the technology, but it also grounded its misunderstanding as an art form, because it allowed basic use and easy renders through presets and style transfers. Conspicuously, though, it opened a new horizon to artists working with code, as it was among the first open-source releases for producing visuals with deep neural networks.

Mario Klingemann’s first experiment running the DeepDream algorithm with a CNN trained on classifying record covers, 2015, © Mario Klingemann.
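
Generative art in the sense defined above, code that produces images, can be as simple as the toy sketch below. It uses the Pillow imaging library purely for illustration; the article names no tools from that period, so nothing here should be read as Klingemann’s own code.

```python
# A toy illustration of generative art: a short program whose simple rules
# produce an image, with no AI involved. The Pillow library and the drawing
# rules are illustrative assumptions.
import random
from PIL import Image, ImageDraw

random.seed(7)
img = Image.new("RGB", (400, 400), "white")
draw = ImageDraw.Draw(img)

# Let the program, not the hand, decide positions, sizes, and colours.
for _ in range(60):
    x, y = random.randint(0, 400), random.randint(0, 400)
    r = random.randint(10, 80)
    colour = tuple(random.randint(0, 255) for _ in range(3))
    draw.ellipse([x - r, y - r, x + r, y + r], outline=colour, width=3)

img.save("generative_sketch.png")
```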

To understand Generative Adversarial Networks (GANs), or machine learning more broadly, it is essential to acknowledge that scientists cannot yet produce technologies with an autonomous form of intelligence. Machines work statistically, and we educate them to recognize certain patterns, shapes, and objects, a process called “deep learning.” For instance, a machine can be trained to recognize cats if someone feeds the computer data on cats (hundreds of thousands of images), together with inputs such as “yes, this is a cat” or “no, this isn’t.” As such, deep learning works as a classifier, associating an image with a label. To then reproduce that reality, coders use GANs, which pair two models: a generator that produces candidate images, and a discriminator that judges whether each image looks real or generated. The two are trained against each other, the generator gradually correcting its “mistakes” until its outputs fool the discriminator. Yes, a GAN could help you replicate a masterpiece, and it could also build a somewhat realistic portrait of your neighbor’s aunt.
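
A minimal sketch of that generator-versus-discriminator loop is shown below, written in PyTorch. The article does not say which framework or data Klingemann uses, so the library, the network sizes, and the random tensors standing in for real training images are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28

# Generator: turns a random latent vector into a fake "image".
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())

# Discriminator: scores how "real" an image looks (1 = real, 0 = generated).
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, IMG_DIM) * 2 - 1      # placeholder for curated training images
    fake = G(torch.randn(32, LATENT_DIM))

    # Discriminator learns to tell real images from generated ones.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```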

Left: Mario Klingemann, Self-Portrait, feedback loop of generative adversarial neural networks (GANs), 2018. © Mario Klingemann. Right: Mario Klingemann, ChainGAN portrait series, multi-chain GANs, 2017, © Mario Klingemann.

While you can train a machine to stick to reality, you can also manipulate what it learns so that it produces visuals following other criteria, in Klingemann’s case visuals reflecting his artistic sensibility. As part of his working method, Klingemann first curates a selection of photographs to feed his models, then trains those models, both generators and discriminators, to make mistakes or to stick to a fake visual reality that corresponds to his aesthetic judgment. The computer then generates numerous images within a latent space (a multi-dimensional, virtual environment) that is constructed over time while running through multiple GANs, each one altering different features such as noisiness, entropy, redundancy, texture, and structure to refine the image. When the general impression within the latent space satisfies Klingemann, the artist starts selecting, framing, and capturing what he finds most interesting in the computer’s visual constructions, a final step he calls “neurography”: photographing what he sees within the space created by neural networks.
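
The sketch below gives a rough idea of that latent-space stage: sample two latent vectors and render the images along the path between them. The stand-in generator, the dimensions, and the simple linear interpolation are illustrative assumptions, not a reconstruction of Klingemann’s pipeline.

```python
import torch
import torch.nn as nn

LATENT_DIM, SIDE = 64, 28
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, SIDE * SIDE), nn.Tanh())  # stands in for a trained generator

# Pick two points in the latent space.
z_a, z_b = torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM)

frames = []
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps=16):
        z = (1 - t) * z_a + t * z_b                # move through the latent space
        frames.append(G(z).reshape(SIDE, SIDE))    # "photograph" the view at this point
```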

Left: Mario Klingemann, Freeda Beast — Bringing Things to an End, multi-chain GANs, 2017, © Mario Klingemann. Right: Mario Klingemann, Neural Decay, multi-chain GANs, 2017, © Mario Klingemann.

Klingemann shows strong affinities with the Surrealists, and not only in the way they invented new techniques and continuously reassessed technologies. Max Ernst inspired him with his ability to build new visual realms and to explore forms of decay and materiality. Klingemann’s interest in portraiture, depictions of the body, and the uncanny is also reminiscent of the modern movement. Works with flesh, such as the Chicken or Beef? series, or with dolls and odd eyeballs, such as the Neural Decay series, notably recall Hans Bellmer and what art historian Hal Foster calls “convulsive beauty.”

Mario Klingemann, Chicken or Beef?, multi-chain GANs, 2017, © Mario Klingemann.

Working with body shapes and nudes, Klingemann reinterprets the notion of collage: the machine merges elements to create a computer-like aesthetic with a painterly yet blurry feel and a pixelated yet high-resolution quality. The Butcher’s Son (2018), which earned him this year’s Lumen Prize, captures this effect most beautifully. Another good example is My Artificial Muse (2017), an installation he created with the artist Albert Barqué-Duran consisting of a reclining nude produced through AI. While Klingemann’s output was projected on a screen, exposing its digital nature, Barqué-Duran reproduced it in oil paint, trying to mimic the texture and visual characteristics of the artificial creation.

Mario Klingemann and Albert Barqué-Duran, My Artificial Muse, 2017, © Mario Klingemann.

Overall, Klingemann compares AI to any other artistic tool, saying: “I have lots of models now for different purposes and I can mix them or join them in a row. So it’s really like artists who work with brushes or real material because they also see like ‘this brush gives me that sort of stroke, behaves with this type of ink that way.’” Rather than editing images in a post-production process, the artist expresses his creativity by modifying the hyper-parameters within his models. This leads to the artist’s most pressing question: do the results constitute the art, or the models themselves?
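
To make “joining models in a row” concrete, the toy sketch below chains two untrained placeholder networks so that one model’s output feeds the next, loosely echoing the multi-chain GAN works captioned above; every name and size in it is an illustrative assumption.

```python
import torch
import torch.nn as nn

stage_one = nn.Sequential(nn.Linear(64, 784), nn.Tanh())    # latent vector -> image
stage_two = nn.Sequential(nn.Linear(784, 784), nn.Tanh())   # image -> transformed image

z = torch.randn(1, 64)
with torch.no_grad():
    result = stage_two(stage_one(z))   # chain the models end to end
```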

2018 interview with Mario Klingemann, © Moving Ideas — Filme für Forschung und Kultur.

