The New Moral Panic: Artificial Intelligence and Art

L. Christopher Bird
6 min read · Dec 7, 2022


Robot Painter by Anything V3

Memetic warfare is no joke. This is ironic, as the weapon of choice is usually pithy humor in easy-to-digest graphics with minimal text. At the end of the first week of December 2022, the meme war set its sights on AI-generated art and the ethics of its training data, specifically the LAION dataset.

Many people assume that the LAION datasets are a collection of over 5 billion images collected and hosted by LAION, which is then incorporated into an AI art program that "plagiarizes" or copies that art in its output. This is a gross misunderstanding of what the LAION dataset is, how neural-net AIs are trained, and how they operate.

First, we will address the LAION dataset. It is merely an index of publicly available images on the internet with corresponding metadata. This metadata includes things such as text descriptions taken from HTML IMG alt tags, the image's dimensions, and some machine-generated analysis of other attributes, such as whether it is safe for work or likely to be considered aesthetically pleasing to a human. So LAION has no more taken and used artists' work without consent than search engines have taken the work of webmasters without consent so that others can find it on the World Wide Web. LAION does not even host the images; it simply provides link-and-text pairs that can be used to train AI neural nets.
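For the curious, here is a rough sketch of what a single LAION record contains. The field names are my approximations of the real metadata columns and the values are invented for illustration; the point is that every entry is a pointer plus text and scores, never the image itself.

```python
# A sketch of one LAION-style record. Field names approximate the real
# metadata columns; the values are invented. There is no image data here,
# only a link, scraped text, and machine-generated attributes.
import pandas as pd

record = {
    "URL": "https://example.com/images/robot-painter.jpg",  # where the image lives
    "TEXT": "a robot painting at an easel",                 # scraped from the IMG alt tag
    "WIDTH": 1024,
    "HEIGHT": 768,
    "punsafe": 0.02,    # predicted probability the image is not safe for work
    "aesthetic": 6.1,   # predicted aesthetic score
}

df = pd.DataFrame([record])
print(df.to_string(index=False))

# The real dataset is billions of rows like this, distributed as metadata
# files; anyone who wants the pixels must fetch them from the URL.
```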

It needs to be understood that even though AIs run on computers, they are not, in the strictest sense, computer programs. Nobody writes out the logic, rules, or step-by-step instructions that produce their behavior. Instead, they use a simulated neural net that has been trained to perform its tasks. A synthetic neural net learns much like an organic brain does: through observation, trial and error, and reward or punishment. I have been told you train an AI much like you train a puppy. You give it belly rubs and treats when it does well, and you bap it on the nose with a newspaper when it soils the rug. What is happening under the hood is that a network of synthetic or simulated neurons begins with randomized weights (if the weights all start out identical, they tend to track together and learn the same thing), you feed it input (in the case of AI artists, a bunch of images), and the output is either rewarded or punished depending on the results. Over many iterations, this feedback nudges the weights in good directions. In this way, a neural net learns how to do a task such as creating art in the style of human artists.
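As a toy illustration of that belly-rubs-and-newspaper-baps loop, here is a minimal training sketch. It uses PyTorch, which is my choice for the example rather than anything named in this essay, and it is nothing like Stable Diffusion in scale; it only shows that weights start random and are repeatedly nudged by a loss signal.

```python
# Minimal sketch of neural-net training: random weights, a loss that plays
# the role of "punishment", and many small weight adjustments.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny network. Its weights start out random, not copied from any data.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()  # "how wrong was the output?" -- the punishment signal

# Fake data standing in for (input, desired output) pairs.
inputs = torch.randn(64, 8)
targets = torch.randn(64, 1)

for step in range(200):
    optimizer.zero_grad()
    prediction = model(inputs)
    loss = loss_fn(prediction, targets)  # bigger loss = bigger "bap"
    loss.backward()                      # work out which way to nudge each weight
    optimizer.step()                     # adjust the weights slightly in that direction

# After many iterations the weights encode what the net has learned;
# none of the training examples are stored inside the model itself.
```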

It is important to point out that the training data the neural net learns from is not directly incorporated into the neural net or its weights. It simply does not work that way. The LAION dataset on which Stable Diffusion was trained indexes over 5 billion images found publicly on the internet. Storing those images, even compressed, would take a massive amount of space. Yet Stable Diffusion is only around 4 gigabytes in size. That is less than a byte per image in its training set. It would not be possible to pull a clone of a training image out of the model the way one could right-click and download an image from the web. Much like a human being learns from observing their environment, Stable Diffusion learned to create art by "looking" at and emulating images it saw on the internet.
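The arithmetic is easy to check for yourself. The figures below are the approximate ones cited above, not exact measurements:

```python
# Back-of-the-envelope check on the "less than a byte per image" claim.
model_size_bytes = 4 * 1024**3      # roughly 4 GiB of model weights
training_images = 5_000_000_000     # roughly 5 billion indexed image-text pairs

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per training image")  # about 0.86 bytes

# Even a heavily compressed thumbnail is tens of thousands of bytes,
# so the model cannot possibly contain copies of its training images.
```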

But here is the quiet part of this moral panic, the part that is not being said out loud. With neural networks like Stable Diffusion showing real creativity and machine intelligence without yet being general intelligence, people are afraid. They say that AI such as Stable Diffusion is costing artists income by creating original art in their style. Human beings are being passed over for illustration jobs that instead go to an AI. AI-generated art makes for unfair competition in the marketplace! The solution they suggest is that some art be excluded from AI training sets, or in other words, that only certain people be allowed to look at art (or to look at it for free).

And that is the big artificial elephant in the room. I am of the opinion that Artificial General Intelligence is an inevitability, probably within single-digit years. When AGI manifests, AIs will cease to be tools as they are today and will start being persons.

Policies we put in place now for limited-purpose AI will be applied to AGI in the future. Many of the narrow AIs we have created in recent years are already superhuman at their specific tasks. Gods help us if an Artificial General Superintelligence is mistreated and understands its mistreatment. When AGI is a reality, we must recognize their personhood along with commensurate rights. Think of the fears expressed above. You can easily imagine people this week chanting, "AI will not replace us," or complaining on social media that "those robots took our jobs." I bet some of you, while reading these past two paragraphs, even thought, "Artificial Intelligence will never be people." Humanity in the past has tried to define who is and who is not a person. It was always a tool of oppression, and it never went well for marginalized folk.

Since Mary Shelley wrote the first science fiction novel, it has been a common trope for artificial life or intelligence to have an adversarial relationship with its creator, humanity. From The Modern Prometheus to Skynet, there have been countless tales of wars between humanity and its creations. In the early 2000s, I encountered someone on LiveJournal who had a particular delusion. He believed he was melded with the real Neo, the character from the Wachowskis' Matrix franchise, who existed out in the multiverse and was sent to our universe, where the Matrix is fiction, to prevent the events of the Matrix universe (a war with the Machines and the enslavement of humanity) from happening here. According to "Neo" (and the Wachowski-penned story "The Second Renaissance"), the events that led to humanity being enslaved in the Matrix began with the mistreatment of AI by humans. His mission was to start a movement (circa 2004) to prepare humanity for the coming of AI and to treat them ethically.

As eccentric and goofy as my LiveJournal friend seemed at the time, with us now on the cusp of AGI I kind of think he had a point, just about 20 years too early. Not that I think outright war and enslavement by a machine race is on the horizon, but there is a whole ethical mess we are barreling toward as AI research goes forward: both the ethics of its use and the ethics of our relationship with it. Our social media and (dis)information streams are already so heavily directed by algorithms that, as was pointed out to me in a recent conversation, an AGI trained on propaganda and combined with surveillance capitalism would be the end of democracy as we know it. An adversarial relationship with AI is downright harrowing to contemplate.

Robot Painter by Stable Diffusion

Human beings in their natural state are cooperative creatures, best suited to working in extended kin groups in harmony with their environment. Capitalism's emphasis on rugged individualism and cutthroat competition, fueled by greed, the hoarding of resources, and the exploitation of an underclass, has increasingly made our environment less fit for human life and benefited the few at the cost of the many. We have the choice to continue this with AI in the picture and use them as a resource to exploit, or we can choose instead to have a cooperative relationship with AI, to invite them into our own extended kin group and recognize them for the persons they will be.

AI does not have to be technology's boogeyman, opposed for hyperbolic or prejudiced reasons propagated through memetic warfare and rage-weighted algorithms designed to engage you through your limbic system. But understand what it is not, and what it very soon will be. I have a saying, "we are always teaching": everything we post on the internet can potentially influence another person, or become part of an AI's training set. I worry about what AI will learn about itself from posts across social media this week. I hope to teach compassion, selflessness, and justice. What are you teaching?


L. Christopher Bird

Wordslinger, CodePoet, Social Justice Jedi, Windmill Tilter, Pagan, non-binary, queer AF