Episode 6 | With artificial intelligence, artists no longer create pieces, they create creation*
For forty years, Wolfgang Beltracchi painted counterfeits of works by Raoul Dufy, Georges Braque, Max Ernst… Fake paintings that the worldly bohemian con artist, with the help of his wife Helene, sold for tens of millions, fooling even the greatest experts. The Beltracchis were finally arrested in 2010 by the German police and sentenced by the Cologne courts the following year (today they live free in Switzerland).
But what is probably the biggest scam in the history of art does not stop there.
Venice, September 24th, 2019, ArtTech Forum
For Carina Popovici and Christiane Hoppe-Oehl, this is where it begins. “Because we realized that there is no objective method to identify fake works of art,” explained Popovici during her pitch at the start-up competition organized in Venice by the Swiss foundation ArtTech.
Carina Popovici, a doctor of theoretical physics, met mathematician Christiane Hoppe-Oehl while they were both working for UBS in Zurich. Passionate about art, they had the idea of applying the methods of artificial intelligence to the detection of fake paintings. This is what led them to found their start-up, Art-Recognition, in January 2019.
“We use convolutional neural networks,” explains Popovici. These deep learning algorithms recognize images by breaking them into sub-parts called tiles. By training the networks on high-definition photos of at least a hundred already-authenticated works, their AI learns the characteristics of a painter’s style, down to the brushstroke. From there, the algorithm can analyze photos of paintings whose authenticity is in doubt. But does it work?
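Art-Recognition has not published its code, but the tiling step Popovici describes can be sketched in a few lines. The sketch below is a deliberately naive illustration, not the start-up's method: the function names and the hand-picked edge kernel are assumptions, and a real convolutional network would learn its kernels from the hundred authenticated works rather than use a fixed one.

```python
import numpy as np

def split_into_tiles(image, tile_size):
    """Cut a (H, W) grayscale image into non-overlapping square tiles."""
    h, w = image.shape
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[y:y + tile_size, x:x + tile_size])
    return tiles

def convolve2d(tile, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the tile."""
    kh, kw = kernel.shape
    th, tw = tile.shape
    out = np.zeros((th - kh + 1, tw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(tile[y:y + kh, x:x + kw] * kernel)
    return out

# A hand-chosen edge-detecting kernel; a trained CNN would learn
# many such filters from photos of authenticated paintings.
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

painting = np.random.default_rng(0).random((256, 256))  # stand-in photo
tiles = split_into_tiles(painting, 64)                  # 16 tiles of 64x64
features = [np.abs(convolve2d(t, edge_kernel)).mean() for t in tiles]
# `features` is a crude per-tile "brushstroke energy" signature; a real
# system would compare such signatures against the artist's learned style.
```

The point of tiling is that a forger may match a composition globally while betraying themselves locally, in the texture of individual strokes.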
“We started by testing our software on about ten Wolfgang Beltracchi forgeries,” says Carina Popovici. “It worked every time.” Better than traditional methods? “Those methods are based on stylistic analyses by art experts, provenance research and chemical analyses,” says Popovici. “And counterfeiters find ways to get around them.” The Beltracchis had, for example, developed a near-infallible technique combining falsified labels referencing famous collections and exhibitions, reused antique canvases, and period pigments.
In a market where the number of experts is shrinking, partly because experts who issue negative opinions are increasingly being sued, the Art-Recognition algorithm has the advantages of speed and impartiality. Nor do the paintings need to be shipped, since the AI works from photos. In the end, it is much cheaper, and the demand for such a service seems enormous.
There are no statistics that accurately record the number of forgeries in circulation today. But the explosion in art prices and the negligible sanctions, due to the omerta that reigns in this field (estates, galleries, museums, auction houses, etc., do not necessarily want to reveal that they have been fooled, or to see the value of their paintings collapse), have produced a gigantic phenomenon. The figures circulating estimate the share of counterfeits and false attributions on the art market at between 30% and 50%. For Cocteau alone, Annie Guédras, the painter’s renowned expert, has identified more than 1,700 forgeries.
Art-Recognition’s AI looks like the solution. Except that it is up against the conservatism (and some interests) of the art market. As a result, it is the experts who call on the services of the Swiss start-up. “We receive many requests for the authentication of impressionist paintings,” says Carina Popovici. She hopes that in the long run the technology will become a label of authenticity.
If artificial intelligence can be used in the service of art to authenticate paintings (or to predict the future value of a piece, as with the start-up Artrendex), it should not be forgotten that the same technologies also produce the infamous digital “deepfakes”. A growing number of artists are now diverting these generative abilities to build creative machines. In this field, one can even speak of a gold rush of AI art.
Paris, October 2nd, 2019, La Défense Esplanade
With a rapper’s cap bearing a stylized Vitruvian Man screwed onto his head, Gauthier Vernier tells the story of how he found himself, at 25, at the origin of this gold rush, which started with the sale of a painting made by an AI for 435,000 dollars at Christie’s at the end of 2018.
“After leaving business school, my friends Pierre Fautrel and Hugo Caselles-Dupré and I found ourselves a little idle while looking for our first jobs,” he says. “During his time as a machine learning researcher at Softbank, Hugo had come across the work of the computer scientist Ian Goodfellow, who in 2014 described a new machine learning architecture: the Generative Adversarial Network (GAN). Because it had not yet been widely applied, we sought to do so in the field of images. And because machine learning demands a lot of data, we started with art images, which are widely accessible online.”
The logic of GANs is to set two algorithms against each other. The three friends decided that one program would generate fakes from images of real paintings, trying to deceive the other program until it could no longer tell the difference. The programs were trained for 24 hours on images of 18th- and 19th-century portraits, and three rounds of confrontation later, a first series of 11 pictures that the discriminating algorithm could no longer distinguish from real ones was produced. In homage to Ian Goodfellow, the three friends decided to give the paintings his name in French translation (Goodfellow, or “bel ami”): de Bellamy, with a different first name for each painting, as if it were a family.
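Obvious has not released its training code, but the adversarial loop itself fits in a toy example. The sketch below is a deliberate simplification, not their system: "paintings" are reduced to single numbers drawn from a normal distribution, and both networks are shrunk to one-weight linear and logistic models. The structure, however, is the genuine GAN recipe: alternate between a discriminator step and a generator step.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Real "paintings" reduced to a single number: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, starting far from the data
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)

lr = 0.05
for step in range(2000):
    z = rng.standard_normal(32)
    real, fake = real_batch(32), a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

fake_mean = np.mean(a * rng.standard_normal(1000) + b)
# After training, the generated samples should drift toward the real
# mean of 4 -- the generator has learned to imitate the data.
```

Obvious did the same thing with portrait images in place of numbers and deep networks in place of these two-parameter models; training ends when the discriminator can no longer separate fake from real.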
Put up for sale at 10,000 euros in a gallery, one of these paintings was acquired by the collector Nicolas Laugero-Lasserre, who exhibited it at 42, the coding school created by Xavier Niel, founder of the telecom operator Free. “This brought media exposure,” continues Gauthier Vernier. “A few weeks later, we received a phone call from Christie’s. They were offering to put one of the paintings up for sale in New York.” Estimated at between 7,000 and 10,000 dollars, the portrait of Edmond de Bellamy, signed with the GAN’s mathematical formula, was fiercely fought over by three collectors, who drove the bidding above 400,000 dollars.
The art world was astonished. Many looked down on or criticized the “trick” of the three amateurs, but the friends saw only validation in the vitriol. They created Obvious, their start-up and artists’ collective, which tests other AI technologies.
Obvious has since embarked on new AI-generated art projects, starting with a collection of 11 characters and 11 landscapes generated from photos of 13,000 Japanese prints. The works are physically produced by a specialized workshop (Uki-Ga) using the traditional Japanese Moku-Hanga woodblock printing technique. “Half of them have been sold, and other pieces are about to be exhibited at the Hermitage Museum in Saint Petersburg and then in Saudi Arabia,” says Gauthier Vernier. From the smaller basis of three of Leonardo da Vinci’s notebooks, Obvious has also produced images of recent technologies interpreted in da Vinci’s manner by an AI.
When asked to what extent Obvious considers their artificial intelligence to be creative, Gauthier Vernier prefers to speak of inventiveness. “We choose the genre, the training images, the support or the frame of the painting. It is a very powerful tool but it remains a tool,” he concludes.
The list of artificial intelligence tools used today for artistic creation keeps growing. Among the most innovative: the Lausanne start-up Largo, whose AI analyzes film scripts and makes suggestions to increase their chances of success; the gaming company Hello Games, whose No Man’s Sky uses procedural generation to create an unlimited digital universe with new planets appearing all the time; and, in London, the rising architectural star Arthur Mamou-Mani, who uses AI to create buildings, such as the 2018 Burning Man Festival Temple, that “are optimized for the site, its environment, and all the parametric constraints that influence a shape.”
But it is in Los Angeles that we find what most closely resembles the future of the convergence of art with artificial intelligence. Because for the Turkish-born digital artist Refik Anadol, artificial intelligence is not just a tool. It is both the subject and the material of his artistic approach. With results that are as avant-garde as they are fascinating.
Los Angeles, August 27th, 2019, Refik Anadol’s Workshop
Located in a former industrial hangar in the gentrifying Elysian Valley district, Refik Anadol’s immaculate workshop speaks of a rich digital future. Behind their screens, his 12 collaborators — speaking 12 different languages and averaging 25 years of age — are putting the finishing touches on the exhibition they will open a few days later in New York: Machine Hallucination.
On the walls, prints extracted from more than 300 million images of New York City hint at the larger work at hand. The source images were collected by a dedicated program from social media, search engines and library sites. But the project was careful to collect only public data, and all people and faces have been erased in order to strictly respect data privacy.
Refik Anadol’s kinetic artwork is built from these data. A first algorithm establishes the context of the images, while another, a recurrent neural network, absorbs sounds recorded in or associated with the city (traffic noise, police sirens, etc.) to create a soundtrack. Finally, an artificial intelligence called StyleGAN, developed by the graphics-chip leader Nvidia to generate imaginary faces from real photos (and built on Google’s TensorFlow machine learning framework), was adapted in order to “dream”. The end product is an AI that constantly generates new images from the visual associations produced by the work of the different algorithms.
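At its core, the “dreaming” described here is a continuous walk through the latent space of a trained generator: nearby latent points yield nearby images, so a smooth path yields smooth motion. A minimal sketch, with a stand-in linear “generator” in place of StyleGAN (which is a deep network trained on millions of images; all names and dimensions below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, H, W = 64, 8, 8

# Stand-in for a trained generator: a fixed random linear map from
# latent space to pixel space. StyleGAN plays this role in the artwork.
G = rng.standard_normal((H * W, LATENT_DIM))

def generate(z):
    """Map a latent vector to an image-like (H, W) array."""
    return (G @ z).reshape(H, W)

def dream(n_frames=30):
    """Walk smoothly between two random latent points, yielding frames."""
    z_from = rng.standard_normal(LATENT_DIM)
    z_to = rng.standard_normal(LATENT_DIM)
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        z = (1 - t) * z_from + t * z_to   # linear interpolation
        frames.append(generate(z))
    return frames

frames = dream()
# Consecutive frames differ only slightly: chaining many such walks,
# endpoint to endpoint, produces an endless stream of morphing images.
```

That small step between consecutive frames is what gives the installation its continuous, organic motion rather than a slideshow of discrete pictures.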
The result is striking. In perpetual motion, these representations, connecting an infinite number of architectural styles, have something organic about them, like thoughts in motion. That is Refik Anadol’s goal. “I think that making visible the way AI thinks is a priority today,” he says.
Sitting at the end of a large meeting table, the gentle, mischievous artist begins by explaining the journey that brought him to the top of digital art at the age of 34. “From the age of 8, my passion for my first computer and the vision of Blade Runner made my mother think I was going crazy, until I met digital art pioneer Peter Weibel and learned to use my materials: light, public buildings used as living canvases, data, and finally AI.”
For his thesis at Bilgi University in Istanbul, he transformed a wing of the Santral contemporary cultural complex into what he calls a “living sculpture”. Using Perlin noise, a technique developed for special effects, he projected images onto the building, modulated by the flow of sounds from the neighborhood. The video of this work, called Quadrature, went viral on social networks, and the work eventually led to further studies at UCLA, where he teaches today.
In 2016, Refik Anadol was selected for a residency in Google’s Artists and Machine Intelligence program. There he learned to use artificial intelligence programs, in particular DeepDream, the AI created by Alexander Mordvintsev, Christopher Olah and Mike Tyka to find and reinforce patterns in images.
This became the basis of his first art installation built on artificial intelligence, Archive Dreaming, conceived as an immersion into the archives of the SALT Galata museum in Istanbul. Next, with Melting Memories, he reconstructed the way memory is formed, by brain and by computer. In 2018, he used the Los Angeles Philharmonic’s 54 terabytes of musical archives to let the Walt Disney Concert Hall, built by Frank Gehry, express itself and its legacy through images projected on its facades. At Charlotte Douglas Airport in North Carolina, he created a 110-square-meter digital sculpture fed in real time by all the hub’s data (passengers, planes, luggage, etc.).
If about ten passengers have missed their planes while contemplating this digital kinetic sculpture, it is because it calls out to them. To what end? Refik Anadol does not think artists have to provide solutions. “That’s the work of engineers and designers. Artists are there to ask questions.” Which ones? “How the machine controls us, because that is what we are really talking about today. What does it mean to be human when a monkey can learn to use Instagram in two minutes? What does it mean to be more connected by technology and at the same time more disconnected from the world?”
By making artificial intelligence not only a tool but also a material and a subject, Refik Anadol has opened a new form of narrative. More and more museums and exhibition spaces are following in his footsteps to reinvent themselves, as we will see in the next episode of our exploration of the miracles of art and technology.
Editing: Katherine Lingenfelter
Written by Fabrice Delaye
*Our title is derived from a quote of robotic creation pioneer Nicolas Schöffer