A Gifted Mimic

On the technical & legal intricacies of AI image generators

Prescient
prescient-innovations
4 min read · Mar 27, 2023


Suppose someone were to ask you to paint a landscape in the style of David Hockney or create cutout silhouettes in the style of Kara Walker. You’d certainly be capable of producing an excellent impression, without directly copying their existing work.

Let’s say you publish this work online, under your name. Have you violated either Mr. Hockney’s or Ms. Walker’s claim to copyright ownership? Though perhaps a bit gauche, your actions would not, legally speaking, infringe on anything.

Artists cannot own a style, no matter how iconic that style may be. To understand how this applies to AI art, it’s helpful to know how image generators work.

Théâtre d’Opéra Spatial, generated on Midjourney from a prompt written by Jason Allen, won first place at the Colorado State Fair fine art contest.

AI image generators (also known as generative models) use deep learning algorithms to generate new images that are similar to the existing images in a given dataset. There are different types of generative models (the systems named in this article, Midjourney and Stable Diffusion, are diffusion models), but one popular approach uses a type of neural network called a generative adversarial network (GAN).

A GAN consists of two neural networks: a generator and a discriminator. The generator network generates new images based on random input data, and the discriminator network evaluates the generated images to determine whether they are real or fake. The two networks are trained together in a feedback loop: the generator tries to generate increasingly realistic images to fool the discriminator, while the discriminator tries to correctly distinguish between real and fake images.
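
To make that architecture concrete, here is a minimal sketch of the two networks, assuming PyTorch. The layer sizes, the 64×64 image shape, and the 100-dimensional noise vector are illustrative choices for this sketch, not any particular product’s architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Turns a random noise vector into an image-shaped tensor."""
    def __init__(self, noise_dim=100, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_pixels),
            nn.Tanh(),  # pixel values squashed into [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores an image with a single logit: real vs. fake."""
    def __init__(self, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

# The generator maps noise to a fake "image"; the discriminator scores it.
z = torch.randn(8, 100)               # a batch of 8 random noise vectors
fake_images = Generator()(z)          # shape (8, 4096)
scores = Discriminator()(fake_images) # shape (8, 1)
```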

During the training process, the generator gradually learns to generate images that are more and more similar to the real images in the dataset, while the discriminator becomes better at detecting fake images. Once the training is complete, the generator can be used to generate new images based on the patterns it has learned from the training dataset.
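
And here is a sketch of a single adversarial training step under the same assumptions, with a random tensor standing in for a batch of real images drawn from the dataset:

```python
import torch
import torch.nn as nn

# Tiny stand-in networks (same shapes as the sketch above).
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(8, 64 * 64) * 2 - 1  # placeholder for a real batch
noise = torch.randn(8, 100)

# Discriminator step: label real images 1 and generated images 0,
# and learn to tell them apart.
fake_images = G(noise).detach()  # detach so this step does not update the generator
d_loss = loss_fn(D(real_images), torch.ones(8, 1)) + loss_fn(D(fake_images), torch.zeros(8, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: produce images the discriminator scores as "real".
g_loss = loss_fn(D(G(noise)), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Repeated over many batches, the generator’s outputs drift toward the statistics of the training images, which, at this level, is all “learning a style” amounts to.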

In short, much like human intelligence, AI models absorb style through observation. Being machines, they do this with stunning efficiency, and the result is an impressive mimic. However ethically intuitive it may seem that a cold-hearted machine shouldn’t be able to generate impressions of hard-earned human talent from a few choice keywords, mimicry does not constitute plagiarism, full stop.

Fortunately, artists haven’t lost the right to dictate the use of their intellectual property (at least in most cases). A pending lawsuit filed by Getty Images against Stability AI, the maker of Stable Diffusion, alleges that, in the course of training the image generator, the company “unlawfully copied and processed millions of images protected by copyright… to the detriment of the content creators” and “chose to ignore viable licensing options and long-standing legal protections.”

The US Copyright Office, in a response to an attempt to register an AI-generated image, concluded that “the images generated by Midjourney contained within the Work are not original works of authorship protected by copyright” and also declared that it “will refuse to register a claim if it determines that a human being did not create the work.”

This image, titled Zarya of the Dawn, is the result of an AI prompt written by graphic novelist Kris Kashtanova. It was denied registration by the US Copyright Office.

Of the multitude of controversies circling OpenAI’s ChatGPT chatbot, one of the latest has been the discovery that users can “jailbreak” the program with relative ease, eliminating the guardrails that prohibit it from generating misinformation or commenting on politically sensitive matters. Once the model is cut loose from its restraints, it becomes obvious that some of the more radical corners of the internet made it into the training data, and that these rules are the only barrier preventing it from churning out conspiracies and offensive rhetoric.

If the things written on the internet are bad, then the less said about the images available the better. Yet our current crop of AI image generators, trained on billions of images, are constitutionally incapable of generating anything but the most soft-edged, inoffensive results. It takes no great leap of logic to conclude that guardrails are an integral part of the process, and assuredly the only thing standing between a user and digital depictions of abject depravity.

While mimicry may not be illegal, whether an artist is subject to it should be a choice. Technological progress should never cost creators the rights to their intellectual property, and, whether out of legal obligation or out of respect for the human artists upon whose shoulders their creations stand, AI image companies owe the creative community the chance to opt out.
