Can machines be creative?

As human beings, we share the somewhat egotistical belief that we are unique in our ability to create art. We convince ourselves that we alone are capable of creativity — creativity typically being cast as the last bastion of humanity in a world overtaken by machines. I don’t believe this is the case at all. As history has repeatedly shown, we are no more unique than the chimpanzee or the fish or the diatom. Computers, and artificial intelligence in particular, are the perfect instrument for putting that belief to the test.

The advancement of computers and robotics has kickstarted a relentless race to replace humans in almost every job across every sector. Yet the full capabilities of artificial intelligence are still fiercely debated — many refuse to believe that automation will reach as far as creativity. It seems the creative sector is granted some inherent, untouchable “sacredness” that mere machines cannot touch. Given current technological trends and innovations in neural networks, however, the opposite appears to be true: the artist is just as susceptible to automation as the cashier or the assembly line worker. Artificially intelligent machines are capable not only of producing creative work indistinguishable from that of their human counterparts, but of producing more novel creative work in substantially less time.

Does the way humans think differ from the way machines do? Image © 2016 Kurtis Chen

Who’s at risk?

Telemarketers, librarians, cashiers, animal breeders, assembly line workers, servers and cooks alike share a >80% probability of being replaced by machines. At one time, it cost CBS over $2.2 million to hire an army of paralegals and lawyers to rummage through millions of legal documents — a job that can be done today using fast and inexpensive “e-discovery” software.

The better a job can be broken down into rules and procedures with definite measures of success, the more likely it is to be automated. Such jobs map neatly onto the written, syntactic language of machines, which processes tasks through “if”, “and”, “then” and “else” statements, ultimately relying on Boolean logic and arithmetic to get work done, as in the sketch below. These syntactic languages, however, require human intervention to function — someone needs to write the code — and in that sense they are bound by the limits of human forethought.
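To make that concrete, here is a minimal illustrative sketch in Python (with hypothetical rules, fields and keywords of my own invention, not any real e-discovery product) of how a rules-and-procedures job like document triage reduces to exactly this if/then/else form:

```python
# Hypothetical rule-based triage: every decision is an explicit, human-written rule.
def triage_document(doc: dict) -> str:
    """Classify a legal document using hand-coded if/then/else rules."""
    text = doc.get("text", "").lower()
    if "privileged" in text or "attorney-client" in text:
        return "withhold"      # a definite rule with a definite measure of success
    if doc.get("year", 0) < 2000 and "contract" not in text:
        return "archive"
    if any(term in text for term in ("merger", "acquisition", "liability")):
        return "review"        # flag for a human lawyer
    return "ignore"

print(triage_document({"text": "Draft merger agreement...", "year": 2015}))  # -> "review"
```

Every branch here had to be anticipated and written by a person, which is exactly the human boundary described above.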

Disrupting this status quo, however, is unsupervised machine learning — a branch of AI that gives machines the ability to recognize patterns of their own accord, without labelled examples or direct human guidance. Such learning holds impressive potential for automating tasks that would otherwise be too difficult for humans to codify, such as jobs that return unquantifiable results or that confront constantly changing problems. These applications of machine learning aren’t science fiction, either — take IBM’s Watson, for example, which draws upon massive databases to diagnose cancer patients and recommend courses of treatment in a matter of minutes rather than months.
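As a minimal illustration of the idea (this is not Watson’s actual method; the data below are synthetic and the library choice is simply a convenient assumption), an unsupervised algorithm such as k-means recovers structure from unlabelled observations without ever being told what the groups mean:

```python
# Minimal unsupervised-learning sketch: k-means finds clusters in unlabelled data.
# No labels, targets or human feedback are given to the algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hidden "patterns" the algorithm is never told about.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
observations = np.vstack([group_a, group_b])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
print(model.cluster_centers_)  # the machine recovers both patterns on its own
```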

The significance of unsupervised machine learning should not be underestimated: it is the basis of powerful AI, and it gives computers the ability to “understand” tasks and problems on their own. Yet, despite this, the creatives I spoke with still seemed reluctant to believe that their jobs could be replaced by a machine. Understandably, humans are wary of whirring silicon boxes that produce work without the preamble and life experience we consider necessary, but such are the fears of a threatening unknown.

What do creative people think?

I interviewed three working creative professionals to gain insight into the human side of automation. For the purpose of anonymity, I will refer to my interviewees as ‘A’, ‘B’, and ‘C’. ‘A’ works as a creative director at a burgeoning advertising agency; ‘B’ is a celebrated music video director with over 63 million online views; and ‘C’ is a cinematographer with over five years of experience shooting features and commercials. I asked the interviewees one simple question: do you think computers can be as creative or more creative than humans?

A: “Computers, even really smart computers, are only good as the humans that operated or set them up… there are certain details that a computer just wouldn’t know about, say the client or final outcome… It’s an identity thing. I don’t know if a computer can relate to a human the way a human can relate to a human.”

B: “I think computers can connect tangible ideas together in a pretty clear manner, but abstractions will probably be difficult. I think when you put abstractions that’s where a human would dominate.
“I think subtleties, symbolism, cultural nuances… a computer might not be able to apply to that.
“You look at someone like Salvador Dali and I don’t think a computer could create that, ever. I don’t think that it would be able to, especially with the substance behind it that Dali could apply.”

C: “I don’t know if we’re ever going to feel like a computer can be more creative on its own, as a person, but I think as we explore technology we’re going to find more and more that computers are our greatest tool.
“…what we weigh as creative output has a fundamentally human element in it, that you need to be a human to generate something that another human is going to be able to relate to.
“Just boiling creativity down to the definition of combining random elements, a computer can do that, and a computer can get really good at doing that, but I don’t think a computer’s going to have that insight that it knows that what it came up with is a great idea.”

‘A’’s response focuses on two ideas. The first is the belief that computers require human input and are thus limited by that input: “computers, even really smart computers, are only good as the humans that operated or set them up.” The second is an issue of identity, where ‘A’ doubts that a “…computer can relate to a human the way a human can relate to a human.” ‘B’’s response explores abstraction, context and intricacy, or a computer’s lack thereof: “You look at someone like Salvador Dali and I don’t think a computer could create that, ever.” ‘B’ claims computers would be unable to apply “subtleties, symbolism [and] cultural nuances.” Last, ‘C’ acknowledges the possibility of computers as “our greatest tool” — perhaps because ‘C’’s occupation is highly technical — but doubts that “a computer’s going to have [the] insight that it knows that what it came up with is a great idea.” ‘C’ also shares some of ‘A’’s thoughts with regards to identity: “you need to be a human to generate something that another human is going to be able to relate to.”

Reducing the above, we are left with four ideas: 1) the human limitation of machines, which binds the machine to its designer; 2) the non-human identity of machines, which renders them unrelatable and incapable of abstract thought, preventing them from generating certain art; 3) the machine’s lack of subtlety and context, which makes it inhuman; 4) the machine’s inability to recognize “great idea[s]”, and therefore to curate its own work.

The first idea — that machines are bound by the skills of their designer — is also the first to be dismissed. As discussed earlier, unsupervised machine learning removes the need for a human supervisor, allowing a machine to move beyond its original design and, in effect, supplant its designer.

The non-human identity of machines was raised by all three interviewees, albeit in slightly different forms. This is, of course, a complicated discourse with many tangents, but focusing on the points the interviewees raised lets me cover a lot of important ground quickly: ‘A’ posits that humans and computers will not be able to relate to each other, calling it an “identity thing”, while ‘B’ takes the position that “subtleties, symbolism, cultural nuances” are only understood by humans. ‘C’ doesn’t believe that a “computer can be more creative on its own, as a person”, postulating that computers “lack insight [to know] that what it came up with is a great idea.” All of these responses indicate a belief that humans carry unique “human-only” characteristics; in other words, that robots cannot fully replicate the human presence, or “aura”. Contrary to that belief, however, are philosophies and evidence which suggest that complete duplication of humans is possible through hardware and software.

How does the mind work? Can computers do the same?

The Computational Theory of Mind (CTM) states, simply, that the brain operates in the same fashion as a computer, and vice versa (Ravenscroft, 2005). This theory allows for the cloning of neural processes in machine language, and therefore the duplication of human creativity in machines. According to some philosophers, however, CTM doesn’t fully satisfy the biology of the brain — they assert that neurons act more as semantic neural networks than as logic-oriented symbol manipulators (Marcus, 2001). In other words, a human (a semantic neural network) attaches “metadata” — understanding — to kill the dog, while a computer sees only a string of meaningless characters (symbol manipulation) in kill the dog, a vital difference which supposedly separates humans from silicon. While a computer may carry out the command kill the dog, and do so in a very human way, it would not understand the implications of what it was doing.

Do we think differently from the way computers do? Well, it probably doesn’t matter.

On the contrary, modern computing can satisfy both semantic neural networks and symbol manipulation. While Gary Marcus’ The Algebraic Mind suggests different neural organizations to accommodate both, I would like to suggest a simpler hardware-level accommodation: modern computers can be networked with each other to form artificial neural networks (ANNs), which behave like their biological counterparts by interconnecting many individual nodes. The networked relationships between nodes in an ANN — like synapses in the brain — act as the basis of cognition, supervening on symbol manipulation and allowing syntactic processors to replicate semantic biological processes, regardless of architecture. After all, a single neuron is no more capable of semantics than a single computer.
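To show what “many interconnected nodes” looks like in code, here is a deliberately tiny sketch (random, untrained weights; the layer sizes and activation are arbitrary choices of mine) of a feed-forward artificial neural network. Each node only sums and squashes its inputs, yet the behaviour of the whole lives in the connections, which is the point being made about synapses above:

```python
# Minimal feed-forward ANN sketch: individual nodes are trivial;
# the interesting behaviour lives in the weighted connections between them.
import numpy as np

rng = np.random.default_rng(42)

def layer(inputs, weights, biases):
    """One layer of nodes: weighted sum of inputs passed through a non-linearity."""
    return np.tanh(inputs @ weights + biases)

# 3 input nodes -> 4 hidden nodes -> 1 output node (weights random and untrained)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([[0.2, -0.7, 1.0]])   # a single input pattern
hidden = layer(x, w1, b1)          # activity of the hidden "neurons"
output = layer(hidden, w2, b2)     # the network's overall response
print(output)
```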

Even setting ANNs aside, there remains a “brute force” method of reproducing human creativity in a machine. By simulating the activity of every synapse of every neuron in the brain, a computer could recreate any of the brain’s cognitive tasks, including creativity. Any output of such a machine would be as true and real as human output, since every characteristic of human cognition could be rendered by a sufficiently granular simulation of neural activity — even down to the molecular kinetics and electrical potentials of individual cells. Naturally, a simulation of this resolution would require vast computing resources that are currently unavailable.
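For a sense of what simulating “every synapse of every neuron” would involve, here is a sketch of a single leaky integrate-and-fire neuron, a standard textbook simplification with arbitrary parameters of my own choosing. A whole-brain simulation of the kind described above would need on the order of 86 billion far richer versions of this, all interacting:

```python
# Leaky integrate-and-fire neuron: a crude stand-in for the molecular-level
# dynamics a whole-brain simulation would have to reproduce at scale.
dt = 0.1                                           # ms per simulation step
tau = 10.0                                         # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # membrane potentials (mV)

v = v_rest
spike_times = []
for step in range(1000):                           # simulate 100 ms
    current = 20.0 if 200 <= step < 800 else 0.0   # injected current (arbitrary units)
    v += (-(v - v_rest) + current) * (dt / tau)    # leaky integration toward rest
    if v >= v_thresh:                              # threshold crossed: the neuron "fires"
        spike_times.append(step * dt)
        v = v_reset
print(f"{len(spike_times)} spikes at times (ms): {spike_times}")
```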

From Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology

But, as demonstrated in Kurzweil’s graphic above, derived from 15 academic sources, the time between major evolutionary and technological advances keeps shrinking. We can extrapolate the latter portion of this trend as an indication of exponentially increasing computing power, which, according to Kurzweil, points to “machines greatly exceeding human intelligence in the first half of the twenty-first century”. In the absence of far more efficient algorithm-driven intelligence and ANNs, this kind of “brute force” simulation would be sufficient for generating human-like creativity. For our purposes, it serves as a backup computational model if the former proves ineffective.

In addition to pure processing power, modern-day computers also have access to ultra-large banks of data: Big Data. Why is this important? Because the creative process, despite its apparent intuitiveness and spontaneity, can be systematized, ordered and thus programmed.

Boden’s improbabilist theory of creativity postulates that creativity arises from the “unconscious processes of association” that bind “novel combinations of familiar ideas”. This limits human creativity to the knowledge and associations held by a single individual. Computers, however, are not held captive by this limit — instead, they have access to the collective observations of all of humanity and more. Issues of “subtleties, symbolism, cultural nuances,” as raised by interviewee ‘B’, are thereby dissolved; a computer connected to online databases would be more capable of weighing such aspects than any human ever could.
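A toy version of that recombination step (the idea pools below are invented, and a real system would draw them from the online databases just mentioned) shows how trivially programmable the combinatorial part of creativity is; only the pools, not the mechanism, need to grow to Big Data scale:

```python
# Toy sketch of combinational ("improbabilist") creativity: familiar ideas are
# recombined at random; enlarging the idea pools enlarges the space of
# combinations far beyond what a single person could hold in mind.
import itertools
import random

materials = ["glass", "moss", "neon", "ash", "silk"]
subjects  = ["cathedral", "lullaby", "equation", "harvest", "storm"]
styles    = ["in the manner of a woodcut", "as a slow pan", "as a children's rhyme"]

all_combinations = list(itertools.product(materials, subjects, styles))
print(f"{len(all_combinations)} possible prompts from three tiny lists")

random.seed(7)
for material, subject, style in random.sample(all_combinations, 3):
    print(f"A {subject} made of {material}, rendered {style}")
```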

Whether through algorithmic simulation or brute force, computers will be more than capable of rendering all aspects of human cognitive function, including abstraction (which one of my respondents doubted). As per the principles of physicalism, all properties of any given object emerge from its subordinate properties — if cognitive functions can be replicated, creativity and all of its associated humanness will emerge from that replication, indifferent to whether it arises electronically or biologically.

Does the definition of art complicate machine creativity?

Certainly, any discourse about machine creativity — especially one suggesting that machines can be more creative than humans — will raise questions about the definition of art itself. Logic dictates that you must define art before determining whether or not machines are capable of creating it — the ultimate Turing test for “true” human-like machine creativity. However, given the evidence and principles discussed above, I do not believe the definition of art presents any burden to machine creativity at all: whatever scrutiny human creative output receives over whether it qualifies as art, the output of creative machines will receive equally. Again, physicalism supports the logic that creativity must emerge from the cognitive processes beneath it, as it does in humans, and from that creativity will emerge the production of art.

Nonetheless, there is still a component of commonly held definitions of art which may undermine physicalism — one which deals with intent. An artist’s intention behind creating a given artwork, or the observer’s intention in viewing an artwork created by an intentional artist, may in itself define a work as art. Recall ‘C’’s statement: “[machines] lack insight [to know] that what it came up with is a great idea.” Indeed, intent demands consciousness, and consciousness may be something artificial intelligence can never possess; while physicalists will argue that consciousness will surely arise from simulated cognition, non-reductionists will argue that such consciousness would only be a simulated proxy of the real thing. The well-known Chinese Room thought experiment — which casts the computer as an unknowing symbol manipulator — makes exactly this assertion.

Machines can’t be conscious? Neither can you.

There is a critical keystone in this problem of consciousness which I plan to unravel — and that is consciousness itself. Evidence strongly suggests that humans are not truly conscious themselves. Michael Graziano, a neuroscientist at Princeton, dismisses consciousness as “a credulous and egocentric viewpoint,” describing instead an attention schema model of the brain that accounts for self-awareness as a “slightly incorrect”, “cartoonish” representation of the physical world.

Just as our historical intuition wrongly told us the world was flat, our intuitive perception of consciousness or self-awareness may be equally mistaken. A Washington University study conducted with participants under continuous MRI scanning found that regions of the brain “began to encode the outcome of [an] upcoming decision” up to four seconds before the participants became aware of the decision, one complex and abstract enough to take 17.8 seconds on average to complete. Humans, it appears, have just as little agency in their decision-making as a machine or any other instinctive organism. That leaves consciousness — and intent in particular — as little more than an illusion, and a moot point in any discourse about artificial intelligence.

Humans have long believed that we carry some sort of uniqueness. Throughout history, we have wrongly cast ourselves as the exclusive bearers of language, emotion and tool use. The evidence presented here shows that creativity is not exclusive to humanity, either; unsupervised machine learning, CTM, artificial neural networks, brute-force computing power and big data give machines an unmatched creative advantage. Problems of consciousness and intent are merely a cognitive fabrication. From technical capabilities to the mechanics behind creativity and humanity, we have seen that machines are apt to match our output as productive, creative individuals, despite our compositional differences.

Monumental change is on its way

When the United States Atomic Energy Commission detonated a 5 Mt nuclear warhead beneath Alaskan ground, 22 earthquakes of varying magnitude were recorded over the course of three months as a direct result. The test, dubbed Cannikin, demonstrated the immense physical force humans could exert on a planetary body: enough to literally shift tectonic plates. Humans, through the evolution of scientific discovery, had somehow harnessed and controlled the enormous energy locked in the atomic nucleus, unleashing it as we pleased to observe its effects.

As we continue to make strides in technological development — ever increasing the processing capabilities of our computers — we run towards the certainty of unleashing a similarly immense intellectual power. There is little doubt that machines will be capable of replacing humans on every level, from manual labour to creative ability. But I do not suggest that humans will no longer hold positions in creative fields; instead, I suggest that humans must stand beside machines as artists, not above or below them. Human activity will still shape art, and our limited abilities will certainly remain a facet of it, an artifact of pre-machine creativity, albeit at a minute scale. Machines will always produce more novel, and thus more creative, work than their human counterparts — they have access to infinitely greater resources and iteration capabilities — but machines will not always produce better art. Art encompasses context, a context into which machines themselves are inescapably woven.

Nonetheless, the ramifications of an artificial creative awakening are yet to be known, and I do not intend to illuminate that discourse here. Instead, I propose that we must acknowledge the full extent of AI before we can understand what issues may arise. For so long, the robots and computers we fear have been denied the possibility of an imagination; what would it mean to consider that they are, in fact, more imaginative than us?