Can artificial intelligence be used to help artists improve?
Last August, researchers published an algorithm (Gatys et al.) that enables the style of one image to be recomposed with the content of another. Many in the computational arts community were immediately excited and jumped in to try it. Notably, artists Gene Kogan and Kyle McDonald shared extensive restyling experiments using recognizable styles by established artists. The results were quite impressive.
The remarkable ability to perform a creative task that takes humans a lifetime to master has already spawned popular takes announcing the end of human art and warning that artificial intelligence is going to put artists out of business.
It's not quite so simple. There is a long history of procedural, generative, and indeterminate artworks, in which the artist 'outsources' some of the creation to a system, another person, or even to chance. I would argue that many formalist experiments investigate the same question: what does it mean to move some of the agency away from the hand of the artist?
A lot of research focuses on the promise of collaboration between human and machine. Tandem, a piece by Harshit Agrawal and Arnav Kapur of the MIT Media Lab, appeared in last weekend’s Alt-AI exhibition. Not only does the user draw in collaboration, they also get to choose the ‘character’ of their digital partner.
Flow Machines, a project led by François Pachet in Paris, is a workflow for manipulating computationally modeled style, applied specifically to music and text. The idea goes further, proposing that easy meta-manipulation of vast amounts of stylistic material from the canon will allow creators to stay in a flow state and iterate quickly to expand their own styles.
This sounds incredibly fun and exciting to try, but there are some ambiguities as to how this will help artists better their work. It's not readily clear how to address questions such as the value of physically embodying the motions needed to generate art or music, or how to amplify the strength of the inner vision needed to conceive of a work in the first place.
I decided to try using the style transfer algorithm to help me develop my own style. Rather than remixing the work of others, I was interested to see what happened when I remixed myself. Can I use style transfer technology not to recreate preexisting styles but to develop a new one?
My background is in media art, but I also paint sometimes. I should add that I was born in Hawaii and am the daughter of a florist, which explains my tendency to paint oversize tropical flowers every time I sit down. Figurative art interests me because it's often disregarded as uninteresting and naive in contemporary art circles. However, it can be personally revealing in ways that abstract or conceptual pieces can't.
Like others who draw and paint, I'm antsy to get better at it. I'd especially like to embark on more complex and richer textures. In a way that feels sacrilegious to the craft and physical ritual of painting, I thought: what if I could just take a shortcut to developing my own style?
I decided to draw the same portrait of a woman surrounded by flowers, in an arrangement that is similar to earlier paintings of mine. This time I tried to sketch the same portrait five times, but with five different qualities. To keep an admittedly subjective creative experiment on the methodical side, I wanted the five versions to exhibit a scale of expressive qualities, from timid and naive, to more gratuitous and haphazard.
I then drew five textures, trying to capture a range of possibilities in the color, density, and expressive feel of each one. In this case, it seemed a little forced to come up with a system for grading the range of textural expression the way I had with the portraits. More importantly, the textures needed to work effectively when processed with the content. It took a bit of trial and error to gauge what constituted not too much but not too little textural material.
I then applied a style transfer on every combination of content and texture. The result is a matrix of twenty-five versions of the same portrait, combined out of elements that I had drawn, but recombined with many details I wouldn’t have thought of myself.
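The combinatorial step above can be sketched as a simple nested loop. This is only an illustration: the filenames and the `stylize` helper are hypothetical placeholders (my actual workflow ran through an app, not code), standing in for whatever style-transfer backend one might use.

```python
from itertools import product

# Hypothetical filenames for the five content sketches and five textures.
contents = [f"portrait_{i}.png" for i in range(1, 6)]
textures = [f"texture_{j}.png" for j in range(1, 6)]

def stylize(content_path, style_path):
    """Placeholder for a style-transfer call (e.g. a Gatys-style
    implementation); here it just names the output file."""
    return f"styled_{content_path[:-4]}_x_{style_path[:-4]}.png"

# Every combination of content and texture: a 5x5 matrix of results,
# twenty-five versions of the same portrait.
grid = [[stylize(c, t) for t in textures] for c in contents]
```

The point of the loop is simply that the artist draws ten images but receives twenty-five, because every content sketch is crossed with every texture.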
Sometimes areas that were a little unresolved in my sketch looked fine after the style transfer, and sometimes areas I had hardly thought needed editing suddenly looked strange. Beyond this self-editing assistance, the other value was removing the risk of spending weeks and materials on something I might not like. That motivates me to try things I otherwise wouldn't.
Though style is generally an elusive thing to define, I've noticed that some stylistic approaches work more effectively with this technology than others. In examples using existing work, Pointillism, Fauvism, Cubism, and late Van Gogh tend to produce striking results. What they seem to have in common is a surface texture that is palpable in its own right rather than in service of the content of the image.
As a next step, I’m interested in re-embodying the images back into the hand of the artist, or extending the participation of the artist. I would be interested to run a neural net algorithm layer by layer, intervening with a human modification on each one. Or perhaps I could paint directly over the machine-generated results, exerting my hand when I “disagree” with my helper.
I used the Dreamscope app for this iteration of the project. It made it easy and convenient to process images as I was painting. Check out the images generated by other users; they are quite interesting!