The Future of Art with Machine Learning

Image generation with VQGAN + CLIP Colab by Katherine Crowson & Tutorial by Sam King

Hannah Johnston
7 min read · Jul 22, 2021


Caveats: I lack the art history, machine learning, philosophy, and legal expertise necessary to do this topic justice. My background is a mishmash of design & technology. I recently stumbled on a powerful tool that I believe has exciting implications for art creation. I wanted to share my thoughts and experience in hopes of finding more work on this topic and sparking further conversations with the above-mentioned experts.


For months, I’d been looking for a way to break into Machine Learning (ML) for practical purposes. I wasn’t keen to learn enough to build my own tools; I just wanted to use them to generate imagery for my own photo and video art projects. I tried (and failed) to work with a bunch of projects I found on GitHub. But months later, by some stroke of luck or Twitter time-wasting, I stumbled on this tweet of a Lisa Frank/Dali hybrid image:

It was the kind of image I wanted to create and it just so happened to come with the detailed instructions necessary to DIY this thing (Thanks, @images_ai). I opened the Google Colab, changed some text, and ran it. In minutes I was seeing … something! It was so surreal and frankly a little bit magical.

A loose painting-style image with parts of Keanu Reeves’ face, horse body, and hair
My first attempt at machine-generated art with the prompt: “keanu reeves centaur photorealistic”

Talking to the machine

Over the next couple of weeks, I used Crowson’s Colab to generate dozens of images, attempting to build an understanding of the kind of results that were possible through trial and (lots of) error. I learned from other image makers, who shared tips and techniques on social media, some more bizarre than others, like using the keywords “unreal engine” to get a hyperrealistic image. I’m confident that a better knowledge of the underlying ML processes would have helped, but the brute-force approach was so much fun.
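For the curious, notebooks in this family typically accept several prompts at once, separated by “|”, with an optional numeric weight after a colon to tell the model how strongly to pursue each phrase. Here is a minimal sketch of how such a prompt string might be parsed; the function names are my own illustration, not the notebook’s actual code:

```python
def parse_prompt(prompt: str):
    """Split a single 'text:weight' prompt into its parts.

    Hypothetical helper: prompts without an explicit weight
    default to a weight of 1.0.
    """
    text, _, weight = prompt.partition(":")
    return text.strip(), float(weight) if weight.strip() else 1.0


def parse_prompts(prompts: str):
    """Parse a full '|'-separated prompt string into (text, weight) pairs."""
    return [parse_prompt(p) for p in prompts.split("|")]


# e.g. a main subject plus the infamous style keyword at half strength
pairs = parse_prompts("keanu reeves centaur photorealistic | unreal engine:0.5")
print(pairs)
# → [('keanu reeves centaur photorealistic', 1.0), ('unreal engine', 0.5)]
```

In the actual notebooks, each parsed phrase is encoded with CLIP and the weights scale that prompt’s influence on the image being optimized.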

Image generated with the prompts: ”the movie the matrix featuring cats” (Left) and “the relationship between mind and matter” (Right)

While these results were nothing short of amazing, it was only once I began experimenting with other artistic “media” that I became obsessed. It started with watercolours. Something about the stylization made them more convincing as artworks.

Generated watercolour images of a road, an abandoned town, and Times Square

Revisiting Art History

I continued (and still continue) exploring a wide range of art styles and movements, some more successful than others:

Cubo-Futurist Robots (Left), Art Deco painting of David Bowie (Center), and Precisionist Cardboard Boxes on a Sidewalk (Right)
Fauvist interpretations of The Masks We All Wear To Survive (Left), Early Modernist Tina Turner in the style of Marie Laurencin (Center), and German Expressionist painting of The Nightmare of Socializing
Our climate future in a Baroque style (Left) and Sight of Enclosure in the style of René Magritte (Right)
Post-Impressionist painting of a Robot in the style of Henri Rousseau (left), Abstract painting of Dancing Figure in the style of Janos Mattis-Teutsch (Center), and Op art of Infinite Loops in the style of Julian Stanczak (Right)
Carebears in the style of Odilon Redon (Left) and Die Hard movie John McClane at Nakatomi Plaza in the style of Caspar David Friedrich (Right)
Housing Crisis in the style of Zdzislaw Beksinski (Left) and Live Laugh Love in the style of Gustave Doré (Right)
Hudson River School mid-19th century American oil painting of the rat-infested dumpsters behind a fast-food restaurant in the style of Thomas Cole (Left) & Late 19th century watercolor and ink painting of the 1976 Apple 1 desktop computer in the style of Charles Dellschau (Right)

Beyond paintings

I was even able to explore a range of other media types:

Artists bowing down to our machine overlords in the style of Henry Darger (Right) and Abstract Black and White Engraving of a Futuristic Robotic Figure
Art Nouveau pattern textile of tech gadgets in the style of William Morris, Linocut print of forest destruction, and Post Modernist splatter graffiti painting of a Robot Silhouette in the style of Richard Hambleton

Over the past couple of weeks, I’ve learned more about art history than I ever did in school. The Colab gave me the opportunity to apply the stylistic properties of historical art movements and different media to new subjects and experiences. The machine interpretations sometimes even help make visible patterns we wouldn’t otherwise pick up on: things the artist may not even have been aware of. That said, the machines learn from existing datasets, along with all of their inherent biases. While the included datasets are massive, they’re far from encompassing the entirety of the world’s imagery, let alone the diversity within it.


In an attempt to gain a little bit more control over the generated art, I started experimenting with starter images (both photographs and rudimentary digital drawings). These provide, at least in theory, a more direct guide for the composition.

Starter grayscale, hand-drawn image (Left) and Orange cat with flowers in the style of Maud Lewis (Right)
Photo input of Ward’s Island Bridge in New York City (Left) and image generated in the style of Alex Colville (Right)

But the longer they run, the more the image gets away from you and takes on a life of its own.

Starting image of an avocado (Left) and image generated in the style of Georgia O’Keeffe (Right)

Whose art is it anyway? What even is art?

The results feel, somewhat ironically, tangible and personal, despite our relatively small role in their creation. In this regard, generated images raise many questions about copyright and ownership. There was a big leap from pencils and paintbrushes to cameras, and from cameras to digital. I think we’re making another such leap. Tool and code creation in this domain seems, at least currently, much more relevant and more deeply connected to the creation process.

Maybe more critically, the image datasets themselves are foundational to the process. The datasets are not compiled from images in the public domain, but artists have always drawn inspiration from other artists. Is it different if the new work actually relies on stylistic or structural elements of those images? There is certainly some art skill involved, but how much depends on a variety of factors.

Wonder in the style of Agnes Pelton

I spend an embarrassing amount of time trawling through art, crafting prompts, selecting iterations, refining and re-running variants that I think might generate some approximation of the thing I’m imagining. Occasionally they do. But more often, they create something else entirely and, in some instances, something even better than what I imagined. The exact images in this article likely wouldn’t have been uncovered were it not for my obsessive experimentation, but how much of that is persistence vs. luck? And how much will that matter once the tools become more powerful and widespread? We might see something akin to the Infinite Monkey Theorem play out.

It’s not even clear that time or effort invested are the right measures. Plenty of people have stood in a (likely modern) art gallery and said, “I could have done that.” And yet, that doesn’t stop the art from being museum-worthy.

What’s next?

While most of the images generated today remain less than convincing replicas of the original artists’ works, the rate of tool development and improvement is staggering. I used this technique of starting with a reference image to generate keyframes as input to yet another tool (EbSynth) in hopes of creating a video in watercolour style.

While I was painstakingly compiling the individual frames for the video above, the ML-image-generating community was already automating a similar approach. In my opinion, the results aren’t yet quite as strong as the mostly-manual approach, but I’d bet that’ll change soon. Maybe even in the next few weeks. As extensions like this emerge, we’re not only seeing the lines blur between machine-generated and hand-crafted art, but also the creation of entirely new types of art, not previously possible.

It’s an exciting time to be involved in this space. There’s plenty of art to discover, but there are also many moral and ethical issues to sort out. If you share these interests and concerns, or just want to see loads more of this weird image exploration, find me on Twitter:



Hannah Johnston

PhD student at Carleton University, studying AI art + UX · she/her