Machine learning is emerging as a new creative tool for artists. Will it replace its creator?

User ChrisMcKellen inputs 33 stock photographs of blonde, blue-eyed toddlers into his computer interface. One child, whose skin is so pale it’s nearly tinged blue, is dressed in a baptismal gown. Another toddler has rosy pink cheeks and a cherubic smile.

Next, ChrisMcKellen uploads 30 drawings of aliens. The pictures are all saturated in somber green and nightmarish blue hues, but the aliens themselves are diverse. Four images feature serpent-bodied creatures that have fangs dripping with spittle. The last alien has human hands, an emaciated rib cage, and elongated, cornucopia-shaped horns protruding from its head.

What does a baby-alien creature look like? ChrisMcKellen switches the interface to its “creative morph” setting, telling the artificial intelligence algorithm to merge the two photosets and generate amalgamated images.

With these instructions, the algorithm begins analyzing the shapes and contours of the toddlers’ photographs: the saucer-shaped eyes and round, chubby fingers that grasp a wrinkled, threadbare blanket. The algorithm then hops to the other set, iteratively morphing the toddlers’ photographs to resemble the alien portraits. For one output image, it elongates a baby’s head, pixel-by-pixel, until the child’s skull becomes an extraterrestrial antler. In a second, sepia-colored image, fangs are superimposed onto the cherubic toddler’s smile.

“This is art,” Ahmed Elgammal tells me as we scroll through the collection of output images. Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. In 2019, he created a site that allows ChrisMcKellen and 30,000 other users to explore and experiment with AI to produce new forms of art.

Each artist has their own creative process. Some users rely on the platform’s algorithms to digitize and translate their existing work across mediums: a charcoal sketch can be re-rendered as oil on canvas. Others use the AI to create new images that they then recreate as sketches or sculptures. But as Elgammal explains, the AI’s greatest utility is its ability to generate illustrations, including baby-alien creatures, that the human mind can’t easily visualize.

Through free, accessible platforms like Elgammal’s, AI may be able to democratize art, helping both amateurs and trained artists develop computer-rendered images in diverse styles. But a growing trend of algorithm-based art also raises critical questions. Can AI truly enable artists to more effectively create what their mind envisions? Or does the tool interfere with traditional human creativity?

Over the past two decades, AI has rapidly developed to imitate and outshine traditionally human actions: reading maps, playing chess, and even diagnosing tumors. AI can often execute these tasks with more efficiency and accuracy than the average human adult. But autonomous artistic design has eluded AI.

Then, in 2018, the New York auction house Christie’s sold Portrait of Edmond de Belamy, an algorithm-generated print in the style of 19th-century European portraiture, for $432,500. The sale seized international attention, dividing artists and aficionados.

On one hand, Portrait of Edmond de Belamy seemingly signaled a troubling trend of automating a classically human pursuit. Yet, art is also a way of capturing a moment in time, and it’s impossible to ignore the role of machine learning in our lives, today and in the future. At the same time, art has historically served as a vehicle to communicate human emotion and express imagination. It’s one thing for AI to be capable of ingesting vast amounts of data (for instance, Pablo Picasso’s 13,500 paintings or hundreds of Michelangelo’s sculptures) to spit out an auto-generated image. But to meaningfully understand and create art is to know what humans like, how they think, and how they see the world. An AI that understands all of that has the power to do much more than just paint portraits.

“Absolutely powerful. And so moving, don’t you think?” Elgammal scrolls through the baby-alien pictures with the proud, nurturing gaze of a parent watching their child win a talent show.

The grotesque but innocent-looking baby-alien faces were created with modified generative adversarial networks (GANs), a machine-learning framework that has dominated the exploration of AI-generated art over the last five years. Like other machine-learning methods, GANs deduce patterns from a sample set — in this case, images of art — and then use that knowledge to create new pieces. The model consists of two networks: a “generator” and a “discriminator.” The generator produces new outputs — images, in the case of visual art — and the discriminator tests them against the training set to ensure they comply with whatever patterns the model has gleaned from that data.

Elgammal, who is evidently a fan of aliens, describes the crucial role of the human artist and their relationship with the GAN. Imagine two aliens land their spaceship in your backyard, wanting to learn more about art and human behavior. You assign them to make pictures of flowers, with one alien serving as the artist (the generator) and the other as the critic (the discriminator). Neither knows what a flower is. So, you show the aliens hundreds of flower photos, so the artist alien can learn to draw a flower and the critic alien can compare its partner’s results with the photos you’ve given them.

The aliens don’t truly understand flowers: aside from physical appearance, they can’t distinguish between a daffodil and a tulip, and the feelings and human experiences associated with flowers (like a tranquil walk through a garden or irritable pollen creeping into your nose) are completely lost. But if the aliens see enough flowers, they may be able to learn a visual pattern and try to replicate or judge it in an image.
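The push and pull between the two aliens can be sketched in code. The following is a minimal, illustrative toy — not Elgammal’s actual system — in which a one-dimensional “generator” learns to mimic samples drawn from a Gaussian “training set” while a logistic “discriminator” learns to tell real samples from fakes. All parameters and learning rates here are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Generator G(z) = a*z + b starts far from the data;
# discriminator D(x) = sigmoid(w*x + c) starts nearly indifferent.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.01, 32

mean_before = b          # E[G(z)] = b, since z has mean 0

for _ in range(4000):
    x = rng.normal(3.0, 0.5, batch)   # "real" samples cluster near 3
    z = rng.normal(0.0, 1.0, batch)   # random noise fed to the generator
    g = a * z + b                     # fake samples

    # Discriminator: gradient ascent on log D(x) + log(1 - D(G(z)))
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * np.mean((1 - d_real) * x - d_fake * g)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on the non-saturating objective log D(G(z))
    d_fake = sigmoid(w * g + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

mean_after = b  # after training, fakes should have drifted toward the real mean
```

Neither network ever “knows” what the data means; the generator just keeps nudging its outputs in whatever direction fools the critic, which is the whole adversarial idea.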

Technology often ushers in unforeseen forms of art, and artificial intelligence may simply be the newest and flashiest medium. Like the advent of daguerreotype photography in the 19th century or the first uses of handheld video cameras, it may take decades of experimentation for AI-based techniques to evolve out of their nascent stages and achieve broad acceptance. For one, AI art’s progression into the mainstream art world may require it to leap from personal computer screens to more conventional domains.

Researchers at the Harvard metaLAB are already combining AI with traditional museum and gallery-viewing experiences. In collaboration with the Harvard Art Museums, the metaLAB’s team has experimented with machine-learning tools to create Curatorial A(i)gents, a three-month series of interactive art exhibitions.

Lins Derry, a designer and principal at the metaLAB, explores the integration of spatial models with computer interfaces. Her movement-based project, titled Choreographic Interfaces, enables viewers to engage with a machine-learning display through a dance-based vocabulary, where a visitor’s full-body gestures are translated into instructions for the computer.

In a demo of the project, Derry stands in front of a bank of nine computer panels as a webcam stares back at her. She spreads her arms in a “Welcome Home” gesture. The movement has a grandeur that resembles an eagle prepared to take flight, but exudes the elegance of a ballerina. A former professional dancer of 30 years, Derry flows into a plié pose with her arms raised slightly above her hips, forming two symmetrical triangles with her torso. The computer responds to her movements by scrolling down the webpage. Derry shifts again, her arms creating a circular O above her head; the computer refreshes the webpage in response.

The program is admittedly limited: when I aggressively macarena, the computer screen sputters and freezes, unsure of how to process my jagged, frenetic gestures. But it responds sweetly to Derry’s fluid and precise movement.
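The mapping from pose to command can be imagined roughly like this: a pose-estimation model supplies body keypoints, and simple geometric rules translate coarse gestures into interface instructions. Everything below is a hypothetical illustration; the keypoint names, thresholds, and command vocabulary are invented for the sketch and are not Derry’s actual system.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates; y grows downward

def classify_gesture(pose: Dict[str, Point]) -> str:
    """Translate a full-body pose into a hypothetical interface instruction."""
    lw, rw = pose["left_wrist"], pose["right_wrist"]
    ls, rs = pose["left_shoulder"], pose["right_shoulder"]
    head = pose["head"]
    shoulder_span = abs(rs[0] - ls[0])
    wrist_span = abs(rw[0] - lw[0])

    # Arms forming an O overhead: wrists close together and above the head.
    if wrist_span < 0.5 * shoulder_span and lw[1] < head[1] and rw[1] < head[1]:
        return "refresh"
    # Arms spread wide at roughly shoulder height: scroll the page.
    if wrist_span > 1.5 * shoulder_span:
        return "scroll"
    return "idle"  # unrecognized movement (a frenetic macarena, say)

# Example: arms spread wide, wrists level with shoulders
pose = {"head": (0.5, 0.2),
        "left_shoulder": (0.4, 0.4), "right_shoulder": (0.6, 0.4),
        "left_wrist": (0.1, 0.4), "right_wrist": (0.9, 0.4)}
print(classify_gesture(pose))  # scroll
```

The “clearer movements” Derry mentions choosing map directly onto rules like these: a gesture only registers if it lands cleanly inside one of the geometric buckets the system can discern.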

Like the platform’s algorithms, Derry’s machine-learning model tracks shapes and contours, tracing the outlines of your torso and the patterns of its movement. For experimenting artists, the AI systems share the same flavors of novelty and excitement. But unlike the GANs and input-output models that underlie ChrisMcKellen’s baby-alien art, the choreographic interface feels more intimate: the data Derry and I feed into the interface aren’t stock photographs but the silhouettes of our bodies.

The brushstrokes dividing AI and artist — one as the creator and the other as an object for creation — are blurred.

There are no defined lines in the paintings of Faceless Portraits Transcending Time, a 2019 exhibition shown at the HG Contemporary gallery in New York. The subjects of each print are reminiscent of faces, but their heads and torsos bleed into the background. Like pictures taken by a photographer with shaky hands, the out-of-focus faces are shrouded in hazy streaks.

When the exhibition was first released, HG Contemporary called the show a “joint effort” between an AI named AICAN and its creator, Elgammal. The wording was a deliberate move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work to create the 30x30 prints.

Elgammal is adamant that AICAN, like any other AI-art system, is merely a tool that relies on human control. “It’s not AI on this end and humans on the opposite end,” Elgammal says. “You still feed in the images and curate the output. You still decide whether the final product is what you want or something you toss out.”

Yet, HG Contemporary explicitly advertised Faceless Portraits Transcending Time as the first solo gallery exhibit devoted to an “AI artist” — not an artist and their “AI tool.” AICAN is also the first machine artist to pass the Turing test, which assesses whether a machine’s behavior can be distinguished from a human’s.

AICAN and the increasing presence of AI-generated art have surfaced fears of AI artists (or tools) eclipsing their human counterparts. These concerns have only amplified with the development of new AI image generators. For example, OpenAI, an artificial intelligence research laboratory, recently released DALL-E 2, a system that can transform a simple sentence into a fully-fledged piece of visual art. And it seems the only limit is the user’s imagination. From a wise cat meditating in the Himalayas to an anthropomorphic raccoon realizing his latest Audible book is a best-seller, DALL-E 2 is already provoking a mixture of awe and terror online.

“Not gonna lie, didn’t expect ‘illustrators’ to be one of the first human jobs that artificial intelligence completely displaces, but here we are,” one Twitter user says, while another adds, “The implication of AI replacing artists was always there, and now the day has unfortunately come.”

Other artists, like Derry, argue that these fears are unfounded, colored by science fiction and sensationalized media reports. Despite fictional depictions of AI as capable of thinking freely, learning autonomously, and maybe even experiencing emotions, these algorithmic machines are not that advanced. While the fiction of AI art is pretty neat, the messy reality is that artists have to confront, and occasionally lean into, the constraints of computational systems.

“So much of it is working with artificial intelligence as a tool, not only as an extension of possibility but also as a limitation. I’m having to make artistic choices in relation to those limitations, like choosing clearer movements that the AI can discern,” Derry says about her choreographic model.

She also notes the less thought-provoking limitations of AI art, like when her code repeatedly crashed as she attempted to migrate the interface from macOS to Windows. As she speaks, her balletic, precise gestures become more haphazard. “God, do you know how irritating it is to constantly sort through nine different HDMI cables?”

Moving past the mundane technical issues, Derry takes a deep breath. Her voice decrescendos back into a soft, almost musical tone as she explains how experimenting with AI’s boundaries inspires creativity and excitement, not frustration.

“I grew up as an artist, where dancing was a profession, passion, and everything in between.” Derry’s eyes gleam with wistful nostalgia. “These AI systems are an opportunity to push myself not just physically but intellectually and emotionally. Providing viewers with emotion first requires me to explore my emotions as a performer.”

Art is evocative, of feelings, of culture, of memories. Sometimes, it’s provocative too.

In hopes of creating a “radical” and “subversive” experience for viewers, Milan-based artists Sara Goldschmied and Eleonora Chiari designed Where shall we go dancing tonight?, an installation that consisted of empty bottles of champagne, scattered cigarette butts, colorful confetti, and remnants of clothing. The contemporary work, which was featured in 2015 at the Museion museum in Bolzano, Italy, was supposed to represent the consumerism, corruption, and scandalous affairs that punctuated the 1980s Italian political scene.

Hours after its installation, Goldschmied and Chiari’s piece was swept into the garbage by the museum’s custodial staff, who mistook the “avant-garde” exhibit for trash.

Art is a classic case of “one man’s trash is another man’s treasure,” explains Ellen Winner, a professor of psychology at Boston College and author of Invented Worlds: The Psychology of the Arts. What is aesthetically pleasing or emotionally moving to her may be unsightly or mundane to you. But despite this subjectivity, the label of “art” carries a universal power for the viewer, whether you attach it to a pile of trash or an AI-generated image.

“Our mind decides whether something is art. Our mind does not decide whether something is a triangle,” Winner says. When you encounter a new creative piece, your brain evaluates it: What do I like about this? What do I hate? What was the intentionality of the artist, and what does this piece mean to me? The process is sometimes instantaneous, sometimes drawn-out. In either case, when you call something art, you make a statement not just about its aesthetics but also about the unique relationship between the “something,” its creator, and yourself.

This process is also what causes artwork to transform from paint splashed onto canvas into a vessel for an artist to communicate with their audience. As Winner describes, “You feel you’re in communion with the artist’s mind.”

So, what happens when that communication is mediated by an algorithm that doesn’t fully understand the uncodeable meanings that underlie a piece of art?

“When people look at an image that they think is by Rembrandt, and then they find out that an AI program actually created it, they will be fascinated. It’s a conversation piece. But they will actually like it less as a work of art,” Winner says. “It’s almost a mystical thing: the artist left his or her essence on the paper or on the canvas. But if you know that a computer created it, you’re not communing with anybody’s mind, so it’s less emotional.”

In other words, AI-generated art is fresh and flashy. But it might not be as evocative as art created solely by human hands. After all, would you choose to go to a museum that had real, original Rembrandts or a museum that had digitized, imitation Rembrandts created by a machine?

“You feel you’re in communion with the artist’s mind.” — Ellen Winner

Winner and I examine Portrait of Edmond de Belamy, the original infamous algorithm-generated print. In his 27 ½ x 27 ½ inch gilt frame, Edmond judges you in a dark frockcoat and plain white collar. His smeared facial features have been painted into indistinct but still overtly disapproving oatmeal-shaded streaks. The clearest brushstrokes are from the AI artist’s signature in the bottom-right: min_G max_D E_x[log(D(x))] + E_z[log(1 − D(G(z)))].

I run through Winner’s psychology of art questions, scrawling my answers on a scrap of loose-leaf paper:

  1. What do I like about this? Edmond’s frockcoat is very gentlemanly. I also enjoy the painting’s simplicity.
  2. What do I hate? Pretty much everything else. The painting looks cut-off, and there are blank spots all over the canvas.
  3. What was the intentionality of the artist? (I leave this one blank).
  4. What does this piece mean to me? If a human artist created this, I would dismiss it as bad art. But I’m fascinated by the AI element.

Days later, I present my answers to Richard Lloyd, the International Head of Prints and Multiples at Christie’s, the auction house that sold Portrait of Edmond de Belamy. He raises one meticulously plucked eyebrow when I present my crumpled, slightly stained list.

“Everybody has their own definition of a work of art,” he says. “I’ve tended to think human authorship was quite important — that link with someone on the other side. But you could also say art is in the eye of the beholder. If people find it emotionally charged and inspiring then it is.”

“What do you think the artist’s intentionality was?” I probe.

Lloyd sighs. “Let me put it this way: if it waddles and it quacks, it’s a duck.”

Like Winner, Lloyd believes AI will lay bare fundamental questions about art and creativity. However, while he acknowledges that emotional evocation is a central theme within artistic creations, he also argues that art will never be fully defined: no definition can cleanly separate everything we do and do not call art. In any case, artists and viewers continually challenge the categories of what counts as art, making the concept impossible to close.

Just as there is no litmus test to decide whether something is or isn’t art, it’s unclear what differentiates an artist from an impostor with a Crayola kit and a canvas. And that ambiguity has only been intensified by user-friendly algorithms for art creation.

AI promises to democratize art: anyone with access to machine-learning software can explore the complex web of art history and even create new works of art, all without an advanced degree or intensive training. Elgammal’s platform spotlights this accessibility. Its homepage features a headline etched in neat, block letters, unmissable against the stark white background: “Harness the power of artificial intelligence to expand your imagination and productivity, without learning how to code.”

Certainly, these platforms also attract traditional artists like Katya Grokhovsky, a classically trained NYC artist and curator who specializes in installation work. In 2020, Grokhovsky’s art studio shut down due to the COVID-19 pandemic, leading her to experiment with digital painting. “AI integration felt like a natural progression,” she recalls.

Grokhovsky met Elgammal shortly after and began working as an Artistic Fellow, incorporating the platform’s developing computational systems into her installations and sculptures. She identifies her AI system as a collaborator, not as a mere artistic tool. In conversations with other artists, she praises her “assistant,” referring to the AI by name: AISSA.

In the morning, Grokhovsky gives AISSA a task (for instance, sketching out a drawing amalgamated from dozens of image inputs). At 5 PM, she checks in. “Look what you did!” Grokhovsky exclaims, examining AISSA’s final product.

FANTASYLAND, a mixed-media installation constructed by Grokhovsky and AISSA, features recycled parachute canopies, inflatable beach balls, and a wallpaper collaged with AISSA-created images. The scattered objects and prints are kaleidoscopic, beaming with vibrant colors. The project, a culmination of Grokhovsky’s work with the team, is whimsical and coated in an alluring veneer of exuberance.

But other collections look like shit. In fact, user PoopsGan inputs photographs of smeared, splattered bird poop to create a collection of abstract, literally shitty splotches of white on a granite background.

User ParhamGhalamdar uploads images of wombs — medical diagrams, ultrasounds, and cartoonish drawings. Another user coalesces black-and-white photographs of ’90s female rockstars, generating distorted humanesque forms illuminated by strokes of light and shadow. Baby-alien creator ChrisMcKellen showcases his second project, “Satanic Elite,” which merges roaring hellfires with headshots of famous figures, from Barack Obama to Mitch McConnell to Meryl Streep.

Elgammal calls each of these users an artist, but Grokhovsky isn’t so sure.

“You have a phone, and I have a phone. We all take pictures. That doesn’t make us photographers,” Grokhovsky shrugs. She believes that serious art requires a lifetime of practice and training. “If you ask me whether this bird shit is art, I’d say ‘absolutely not.’”

Interestingly, AI’s promise of artistic accessibility may be more threatening to the art world than AI itself. For one, AI’s ability to create thousands of new, unique images at the touch of a button challenges the principle of scarcity that gives art some of its value.

“Is there a fear that these platforms inspire a cheapening of art? Of course,” Grokhovsky says. None of her work, including her AI-generated prints, can be found on the platform’s public web pages. “My work is limited, it’s original, and it’s expensive. Once artwork becomes massively reproduced online, its value — and not just financially — can dramatically decrease, especially when it’s not just a human author involved.”

I reflect on my first conversation with Elgammal, remembering his childlike giddiness as he swiveled in his armchair and scrolled through the collections. “Anyone can try it!” he said. “Anyone can be an artist.”

I scrounge up 30 pictures of myself. A self-portrait I drew when I was 6, where I’ve given myself ramen noodle hair. Photographs of me from two Halloweens: in one I’m 12 and wearing an age-appropriate Spongebob Squarepants costume; in the other, I’m 19 and dressed as sexy Patrick Star. My driver’s license photo. A selfie of me and my high school boyfriend.

Thirty is the minimum number of inputs needed for the platform’s Freeform training algorithm. So, I upload the 30 snapshots of my life to the interface and click “BEGIN TRAINING.” A pop-up window instructs me to return in two-and-a-half hours to view my completed collection. The next morning, I log onto my account, eagerly anticipating my AI-generated portraits.

The outputs look as shitty as PoopsGan’s bird poop collection.

In several illustrations, my head has been squashed to resemble a sat-on sandwich. Another image features my skull inflated into a mushroom-shaped structure.

But once the initial shock (and insecurity) subsides, I re-evaluate the outputs. What do I like about this? The colors are serene: tranquil blues and sunflower yellows flit across the background. What do I hate? I’d prefer my head to not be digitally mangled. What was the intentionality of the artist, and what does this piece mean to me? These are fragments of me, pockets of peace peppered throughout my iPhone photo album.

So, I change my inputs. In this new collection, I attempt to keep my head size consistent across photographs, zooming in and out on certain images to rescale my face. The colors are more deliberate too. I’ve added filters to some inputs, tinting them with sunshine- and sky-hues.

Another two-and-a-half hours pass.

Most of the revised outputs are still deformed (the fourth image depicts my crackly lips seemingly wrapping around an eyeball). But one output makes me stop.

It’s peculiar, for sure. The outline of my face appears to be stacked on top of two other faces, and the right side of my head fades into the grey-blue backdrop. My would-be torso is cloaked in swirls of beige and canary yellow. But amid all the hazy shapes and vague contours, I can see a distinct smile.

Am I the artist? I’ve curated the inputs, selected which algorithmic model to use, and reflected on the final product. And what if I were to re-create this image with charcoal pencils or oil on canvas? Would it transform from an AI’s automatic output into a creative piece of art?

I print out the image and tape it to my bedroom wall.



Lucy Tu

@Harvard Sociology and Neuroscience. Health and Science Writer. Words in The Guardian, The Lancet, Discover Magazine, New England Journal of Medicine.