Artificial Intelligence Won’t Be Replacing Artists Any Time Soon

Jens Mowatt · Published in Geek Culture · Apr 28, 2021 · 7 min read

Is there something special about being human? Or could an artificial intelligence system imitate our behaviours with such fidelity that we would never know the difference?

That question is the basis of the Turing Test. In 1950, Alan Turing described a test he believed would tell us whether a computer is capable of thinking. An interrogator sits in a room separate from a machine and another human being, and the objective of this “imitation game” is for the machine to fool the interrogator into thinking it’s the human [1]. The assumption is that if the machine can navigate the complexities of a conversation, then we can reasonably conclude it has some capacity for thought.

But there’s a problem with that assumption. It’s possible the machine in question doesn’t really understand the conversation at all. Rather, it has sufficient computing power to map inputs to outputs so convincingly that we can’t tell the difference between it and a human.

In 1980, John Searle published his criticism of what he called the “Strong AI” view, which argues that an “appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.” He proposed his “Chinese Room” thought experiment, in which a computer receives an input in the form of a Chinese character and uses a rulebook to select the appropriate output [2]. To an observer on the outside, it appears as if the machine understands the language as well as a human does. But Searle argues that this does not show the computer understands Chinese; rather, it’s simulating an understanding.
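The mechanics of Searle’s rulebook are easy to make concrete. Below is a minimal sketch in Python, with a hypothetical, hand-picked lookup table standing in for the rulebook; the phrases are illustrative assumptions, not anything from Searle’s paper.

```python
# A toy "Chinese Room": replies come from a lookup table (the "rulebook"),
# so fluent-looking output involves no understanding at all. The phrases
# below are hypothetical examples chosen purely for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely today."
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook dictates; no meaning is involved."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # Looks like comprehension; it's pure symbol-matching.
```

From the outside, nothing distinguishes this lookup from genuine comprehension, which is exactly Searle’s point.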

The Turing Test was called an “Imitation Game” for a reason. Our computers have come a long way since 1950, but the principle of imitation remains the same. We now have artificial intelligence systems that can beat any human at chess, create their own paintings, drive cars on their own, and diagnose certain types of cancer almost as well as a human doctor [3]. But this complex behaviour, however impressive, could still be just an advanced form of imitation. It’s possible we are merely giving birth to problem-solving machines that simulate the behaviours of humans, rather than machines that possess understanding and thought.

The reason artificial intelligence systems can complete increasingly complex tasks is the emergence of “Deep Learning.” Instead of building AI systems from rigid circuits and inflexible algorithms, we’ve designed them around networks of “nodes” that imitate the structure of the human brain. Each neuron can be seen as a single node that sends messages to its neighbouring nodes, and together these connections form a network that performs computations. For a human to survive, the brain needs to change in response to its environment, adapting to ever-shifting circumstances. Neurons give us the flexibility both to survive and to perform the complex behaviours of modern society. In principle, a deep learning system with this same architecture of nodes could perform calculations millions of times faster, and could therefore quickly achieve superiority in any domain that humans believe they’re good at.
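To make the “network of nodes” picture concrete, here is a minimal sketch of a two-layer neural network written from scratch with NumPy. The layer sizes, random weights, and input are illustrative assumptions, not any particular production system.

```python
# A minimal sketch of the "network of nodes" idea: each node computes a
# weighted sum of its inputs and passes it through a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(2, 4))   # weights: 2 inputs -> 4 hidden nodes
W2 = rng.normal(size=(4, 1))   # weights: 4 hidden nodes -> 1 output node

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)                  # hidden nodes "fire" on their inputs
    return 1 / (1 + np.exp(-(hidden @ W2)))   # output node, squashed to (0, 1)

print(forward(np.array([[0.5, -1.0]])))  # one forward pass through the network
```

Real deep learning systems work on the same principle, just with millions or billions of these weighted connections.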

This line of reasoning is often used to argue that “Deep Learning” will take over creative tasks. Many AI proponents believe a neural network will be able to paint a true masterpiece, write award-winning novels, or replace cinema by telling stories through a superhuman command of visual effects. Any of the creative tasks we value as humans could, in theory, be replaced by sufficiently advanced neural networks.

While this will probably be the case eventually, the questions are how quickly it will happen, and whether the AI systems generating these works of art would actually understand what they’re doing. Does it really take an artificial system with conscious thought to generate an award-winning screenplay? Probably not. It would have access to the entirety of human knowledge, thousands upon thousands of previously written works, and it would use this information to spit out something we would never dream a machine could create. And not only would it be indistinguishable from what a human could create, it might even be better.

The issue is that a neural network is only as good as its inputs. Deep learning works by giving an AI system an input (such as visual information from a self-driving car) and telling it to produce an output (don’t crash the car). This process generates a result we desire, but it’s what happens between the inputs and the outputs that we must consider. It’s possible the neural network is organizing itself into a conscious system in order to solve the problem, but that’s unlikely given the current state of artificial intelligence. It’s more likely we’ve created a highly skilled automaton that can solve our problems without us having to break a sweat.
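As a hedged illustration of that input-output loop, the sketch below trains a single weight by gradient descent on a toy task (learning y = 2x, an assumption chosen for simplicity). Everything “in between” input and output is just a number being nudged toward the target.

```python
# The input-output loop in miniature: the system is only ever adjusted so
# that its outputs match the targets we supply. Nothing more is implied
# about what happens "in between".
import numpy as np

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, size=100)   # inputs we choose to give the system
ys = 2.0 * xs                       # outputs we tell it to produce

w = 0.0  # a single weight: all that sits between input and output here
for _ in range(200):
    grad = np.mean(2 * (w * xs - ys) * xs)  # gradient of mean squared error
    w -= 0.1 * grad                         # nudge the weight toward the target

print(w)  # ~2.0: the machine has matched inputs to outputs, nothing more
```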

The question we must then ask is whether an automaton can create a work of art as well as a conscious system can. Some AI proponents would argue yes: an automaton is sufficient for creating masterpieces, and consciousness is not a relevant variable. That line of reasoning might apply to a basic input-output task like driving a car, but when it comes to anything creative, the personal experience of the artist is highly relevant. A painter creates their masterpiece based on what they’ve seen, experienced, and felt. A novelist imbues their characters with elements of themselves. A director might choose to shoot their film in a particular way because of a personal experience, or some feeling they were having that day. That is what makes art great: not just the outputs, but the way it draws upon the human experience.

An AI system might be able to imitate the outputs, or even achieve superiority at artistic tasks, but unless it’s having a conscious experience of the world, it will only ever be imitation. It might have access to the entirety of humanity’s knowledge base, and all of the creative works ever made, but it is only drawing upon information given to it, not the personal experience that is so important for works of art. The goal could be set as “generate a best-selling novel,” and there is no doubt an artificial intelligence system could achieve it. But because it is limited to the inputs humans have given it, it can only hope to simulate the human experience in its art.

It is entirely possible that an artificial intelligence system could become conscious. After all, our brains are nothing more than biological neural networks. But we would need to provide the AI system with sensory systems, so that it could perceive the world around it and have experiences of its own. It could be an abstract conscious being that exists only in the cloud, but that wouldn’t give it a personal experience. For an AI system to go beyond mere imitation of the human experience, it would likely need a body like a human’s, and it would need to become deeply integrated into the social life of a person.

There is no saying that such an AI system couldn’t develop the capacity to feel and build up its own experience. But unless we designed its neural network to function almost exactly as a human brain does, it still wouldn’t understand what it’s like to be a human. Sure, we might have designed it to think the way we do, but the designers themselves have biases. We might design the AI system with fewer flaws than we have ourselves, or omit the negative emotions that plague the human experience. Such an AI system could become a reflection of how we wish we were, instead of how we actually are. And if that is the case, then we still haven’t arrived at an AI system that knows the human experience.

The problem of imitation is inescapable. An input-output machine is only as good as its inputs, and if the inputs are biased, then its outputs will be biased. In order to have an authentic work of art instead of a mere imitation, you need to have an input that draws upon a personal understanding of the human experience, true knowledge of what it’s like to be human.

That is the reason I don’t believe artificial intelligence will be replacing artists any time soon. For an AI system to make the jump from imitation to understanding is a massive leap, and perhaps an impossible one. There is no doubt that neural networks have solved problems humans could never solve on their own, and they might well create works of art better than we can imagine, but without the conscious experience of a human, they will only ever provide us with a simulation of human art. That simulation might be something we would enjoy, and many of us would buy, but it wouldn’t be a reflection of us. There is no saying that a movie directed by an AI couldn’t resonate with us emotionally, and affect us deeply, but we would always know that this machine doesn’t really know us; rather, it knows what we want. It knows the output, and it’s utilizing the inputs to give us something that has an impact on us. But without anything in the middle that draws upon the human experience, it would lack the authenticity of something created by a human.

It’s like a novelist who has never been to war trying to convey the brutality of combat based on all the images and war movies they’ve seen. They might come close to simulating what it’s like to be there, storming the beaches of Normandy, but without having gone through the experience, they will never be able to encapsulate what it felt like.

A modern neural network would do the same, but for all of our art. It would provide appealing imitations, but that’s about it. The human experience is something that can only truly be told by a human. AI systems might seem able to tell us what it’s like to be a human, but they wouldn’t really know. In the absence of a conscious human experience, a machine will always just be playing an imitation game.

References:

[1] https://plato.stanford.edu/entries/turing-test/#Tur195ImiGam

[2] https://plato.stanford.edu/entries/chinese-room/

[3] https://healthcare-in-europe.com/en/news/artificial-intelligence-diagnoses-with-high-accuracy.html

