Explorations of AI Art — Episode 26
[This interview has been previously published on Cueva Gallery’s blog on August 24, 2020]
“Deconstructing my own artistic process and teaching it to painting robots is my attempt at a better understanding of myself.” — Pindar Van Arman.
Margaret Boden, Research Professor of Cognitive Science at the University of Sussex and advisor to the Leverhulme Centre for the Future of Intelligence, believes that the creative process remains only partly understood, and that computational technology can help us learn more about the different types of human creativity. 
Boden highlights how important it is for humans and machines to work side by side in order to investigate both human and machine creativity, “because the brain itself is a bundle of interdependent elements which support thinking and behavior that’s describable on many different levels.” Creativity and aesthetics, thinking and behaving, “they all in the end boil down to questions about information processing, and that’s why they are all so closely linked.” 
Is it only humans who are creative, or can machines be too? This is the fundamental question that AI artist and roboticist Pindar Van Arman addresses in his work, and one for which he has an answer. According to the artist, his robots are on the verge of creative autonomy.
As he explains, his most recent robots use “deep learning, neural networks, artificial intelligence, feedback loops and computational creativity to make a surprising amount of independent aesthetic decisions.” Over time, the robots have become highly sophisticated, moving beyond the role of mere assistants to exhibit a creativity of their own, one that can also amplify the creativity of the human artist. This has led the artist “to consider the possibility that all art is generative.”
The uncanny valley in Van Arman’s work lies in the attempt to replicate human creativity, which is precisely what makes people uneasy. His robots are not the artist, but they are creative and can make decisions on his behalf. Like a human artist, the robots remember their past work and try to improve upon it, allowing their style to evolve over time, watching what they are doing and making adjustments as they go.
In 2018, in his talk on Artificial Creativity in Seoul, Korea, Van Arman compared the work of Generative Adversarial Networks (GANs) to the process of refining an artistic idea. Creativity and the act of creation are seen through the eyes of the machine, which generates something out of noise, much as a human artist brings order out of chaos. When asked about the concepts of beauty and aesthetics, Van Arman explains that the focus is on serendipity, and on what could be considered interesting as it evolves over time.
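The noise-to-image idea behind a GAN generator can be illustrated with a minimal sketch. This is a hypothetical toy, not Van Arman’s actual code: a generator network maps a random “chaos” vector through dense layers into a small grayscale canvas. In a real GAN, the generator’s weights are trained until a second network, the discriminator, can no longer tell its output from real images.

```python
import numpy as np

def generator(noise, weights):
    """Toy stand-in for a GAN generator: map a latent noise vector
    to a flat 'image' via two dense layers with tanh nonlinearities."""
    hidden = np.tanh(noise @ weights["w1"])   # structure emerges from noise
    image = np.tanh(hidden @ weights["w2"])   # pixel values land in [-1, 1]
    return image.reshape(8, 8)                # an 8x8 grayscale "canvas"

rng = np.random.default_rng(0)
weights = {
    "w1": rng.normal(size=(16, 32)) * 0.5,    # latent dim 16 -> hidden 32
    "w2": rng.normal(size=(32, 64)) * 0.5,    # hidden 32 -> 64 pixels
}
noise = rng.normal(size=16)                   # the "chaos" the process starts from
canvas = generator(noise, weights)
print(canvas.shape)  # (8, 8)
```

With random weights the canvas is meaningless texture; training is what turns the mapping from noise into something resembling an artwork.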
Although Van Arman’s work has already received international attention, his current interest is capturing the spark that elevates a work from art to Fine Art. To that end, he has teamed up with photographer Kitty Simpson to teach a robot how to paint with expression, bringing to life the art project artonomous.
artonomous uses “artificial intelligence, feedback loops, and deep learning to paint on its own. It is a collection of more than two dozen Artificial Intelligence (A.I.) algorithms, all fighting for control of a paint brush. Some of these algorithms are procedural, others are feedback loops, and many are neural networks attempting to imitate how the human brain works. While each algorithm has an important role in completing the artwork, the focus of this project is to improve the neural networks responsible for creativity and imagination.”
About the Process
The training set is composed of curated photographs taken or directed by Kitty Simpson, in contrast to the common use of large datasets. To begin, the robot studies the photographs and paints a representational portrait using its AI library.
After every eight portrait studies, artonomous analyses its work, taking into consideration both brushstrokes and final output, in an attempt to improve its neural networks. Once the neural networks have improved, the robot paints an original abstract portrait, which is judged by Van Arman and Simpson.
Finally, the creative duo decides how to proceed in order to improve the emotion and expressiveness of artonomous. Van Arman then tweaks the algorithms and modifies the hardware and the code, trying to achieve the desired result. Each cycle of study, which will yield 256 unique paintings, brings the robot closer to complete creative autonomy and to Fine Art.
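The study cycle described above can be sketched as a simple loop. This is a hypothetical outline of the workflow as the article describes it, not the project’s actual code; the three stub functions stand in for the robot’s real painting and retraining steps.

```python
TOTAL_PAINTINGS = 256       # one full cycle of study, per the article
STUDIES_PER_REVIEW = 8      # artonomous reviews its work every eight studies

def paint_representational_portrait(n):
    """Stub: the robot paints study n from a curated photograph."""

def analyse_brushstrokes_and_update_networks():
    """Stub: the robot compares brushstrokes to output and refines its networks."""

def paint_abstract_portrait_for_judging():
    """Stub: an original abstract portrait for Van Arman and Simpson to judge."""

def run_study_cycle():
    reviews = 0
    for study in range(1, TOTAL_PAINTINGS + 1):
        paint_representational_portrait(study)
        if study % STUDIES_PER_REVIEW == 0:      # every eighth study...
            analyse_brushstrokes_and_update_networks()
            paint_abstract_portrait_for_judging()
            reviews += 1
    return reviews

print(run_study_cycle())  # 32 review-and-judge rounds across 256 paintings
```

In this reading, the 256-painting cycle contains 32 self-review rounds, each ending with an abstract portrait for the human artists to judge.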
When asked if humans can appreciate art made by robots, Van Arman answered that while a robot won’t be able to make emotional art until it becomes emotional itself, we can still feel emotions when looking at an artwork made by a robot.
And it is in this space that Simpson and Van Arman are directing their effort: to make a robot an artist capable of conveying a thought or feeling.∎
Cueva Gallery is currently featuring six pieces by artonomous until November. The collection will be composed of unique artworks and a limited edition.
To follow the art project, please visit:
About the author: Beth Jochim is the Creative AI Lead at Libre AI, and Director and Co-Founder at Cueva Gallery. She works at the intersection of technology and the arts, and is actively involved in different activities that aim to democratize the field of Artificial Intelligence and Machine Learning, bringing the benefits of AI/ML to a larger audience. Connect with Beth on LinkedIn or Twitter.