This article was written as a final paper for Human-AI Interaction with Chinmay Kulkarni and Mary-Beth Kery at Carnegie Mellon University.
Machines are learning to perceive, understand, and communicate, putting long-held paradigms in fields like neuroscience and philosophy under renewed scrutiny. Labor, politics, ethics, and other fields relevant to our daily lives are set up for a period of drastic change, owing to underlying shifts in the philosophical tenets that govern our societies. With an AI-powered future encroaching, one hopes that AI will be used for good rather than as a weapon to enforce disastrous cycles of privilege and oppression. If one accepts that artists have the power to incite a cultural paradigm shift, it becomes clear how important it is for the general public — especially creatives — to understand AI and its cultural implications. In this article, I discuss some of the artists working at the forefront of AI art and examine the significance of their work as it relates to three core themes: authorship, society and economics, and intelligence and cognition.
To establish terminology: the term ‘artificial intelligence’, or AI, refers to a machine that demonstrates ‘intelligent’ behavior. As our understanding of human intelligence changed throughout the 20th century, so too did researchers’ approach to creating AI. Cognitive scientists of the 1960s believed the mind used specific rules and schemas to understand the world and choose an appropriate output, so AI researchers designed ‘expert systems’ programmed to know as many rules and schemas as possible. Nowadays, many scientists have adopted a more connectionist outlook on intelligence. A human child starts out completely naive and learns about the world through examples, forming a mental model of the world that is, at heart, statistical. Hence ‘machine learning’ (ML), an AI technique in which a model learns from examples rather than from thousands of memorized rules¹. One type of ML model, the artificial neural network, is designed with an architecture analogous to the human brain: a series of computational units called neurons produce chains of activation through their connections to neurons in other layers, causing a natural organization to arise and culminating in the recognition of patterns². ML models like neural networks refine themselves through exposure to data. For example, a model meant to help investors choose high-performing stocks would be trained on price fluctuations in millions of stocks throughout history, picking out patterns undetectable to the human eye.
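The contrast between memorized rules and statistical learning can be made concrete with the simplest possible example: a least-squares line fit, which recovers a hidden pattern purely from noisy examples. (This is a toy sketch for illustration, with invented numbers, not any particular research system.)

```python
import numpy as np

# 'Learning from examples': recover a hidden rule (y = 3x + 2) from noisy
# observations, rather than programming the rule in by hand.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3 * x + 2 + rng.normal(0, 0.5, 100)  # the noisy 'world' the model observes

# Least-squares fit: the model infers slope and intercept purely from the data.
slope, intercept = np.polyfit(x, y, 1)
```

With 100 examples the fit lands very close to the hidden slope of 3 and intercept of 2; with more data, or more parameters, the same principle scales up to the neural networks described above.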
From a reductionist perspective, an artificial neural network functions through advanced applied math. So how could this be considered intelligence? Amazingly, the human brain is also a series of computational units: a biological neuron sums electrical input from other neurons, and fires an action potential if a threshold is reached, activating yet other neurons to which it is connected. This is the basis of perception, emotion, reasoning, and all other brain features which humans hold so dear.
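The integrate-and-fire behavior just described can be sketched as a single artificial neuron: weighted inputs are summed, and the unit ‘fires’ only if the sum crosses a threshold. (A didactic toy; modern networks use smooth activation functions instead of a hard threshold so they can be trained by gradient descent.)

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """Sum weighted inputs; 'fire' (output 1) if the sum reaches a threshold,
    loosely analogous to a biological neuron's action potential."""
    total = np.dot(inputs, weights)
    return 1 if total >= threshold else 0

# Strong, well-weighted input crosses the threshold and the neuron fires
assert neuron(np.array([1.0, 0.5]), np.array([0.6, 0.8]), threshold=1.0) == 1
# Weak input does not
assert neuron(np.array([0.1, 0.1]), np.array([0.6, 0.8]), threshold=1.0) == 0
```

Layers of such units, connected so that one layer's outputs become the next layer's inputs, are all a neural network is at the computational level.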
By training neural networks on carefully curated data and selecting or adjusting the algorithms used, artists can exploit their pattern recognition capabilities to make work ranging from the surreal to the sublime. In this review, I use the terms ‘AI art’ and ‘ML art’ somewhat interchangeably, because all the art I discuss here uses machine learning rather than other types of AI. However, it should be noted that some AI art does not use ML techniques; this can include any preprogrammed expert system designed to complete tasks or interact with humans. Some artists working with machine learning prefer the specificity of “ML art”, a technical term that alludes to the technique’s roots in statistics, over “AI art”, which can evoke common misconceptions about AI. Finally, though there exists a rich lineage of artwork made with or about robots (i.e. machines imitating human functioning while existing in physical, non-virtual space), I have excluded many of these excellent pieces unless they also have a strong ML focus.
Understanding an artistic medium and its limitations always helps one better appreciate, say, an oil painting from the Renaissance. But appreciating and criticizing machine learning art is especially difficult without context, because the significance of work in emerging media draws more heavily from the nature of the medium itself. There are many constraints to working with ML, which can involve collecting thousands of images for a dataset, securing access to an expensive computer with a top-of-the-line graphics card, or wading through complicated code (which may lack a UI entirely because it is still under development). But thanks to the hard work of ML artists and educators worldwide, new developments in ML are typically accompanied by new ways to use pretrained models online or with relatively little coding. I’ve included plenty of links to online interfaces for the tools I describe below.
One of the first ML techniques to be used for artistic purposes was Style Transfer. Developed in 2015, this technique trains a model on images possessing a particular visual ‘style’ — think Van Gogh’s whirling brushstrokes or Pointillist dots. Specifically, ‘style’ encompasses the high-frequency features of an image like stroke weight, color scheme, and texture, regardless of subject or composition³. After training, that style can be applied to an image in a different style, and a user can create Van Gogh-ified images of their pet dog to their heart’s content. Machine learning artist Gene Kogan’s video Why is a Raven Like a Writing Desk? demonstrates an expansive range of Style Transfer implementations⁴. Though this algorithm set the stage for the rich variety of ML art techniques available today, ML art as a whole has largely moved on. Though it’s fun to use, Style Transfer is like a Photoshop filter, rarely creating something truly unexpected.
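In the original style transfer formulation, ‘style’ is captured as correlations between a network layer's feature maps, summarized in a Gram matrix; matching Gram matrices matches texture and palette while ignoring composition. A minimal numpy sketch of that style loss, using random arrays as stand-ins for real CNN features:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, pixels) activation maps from one network layer.
    # Entry (i, j) measures how strongly channels i and j fire together --
    # texture and color statistics, independent of spatial layout.
    return features @ features.T

def style_loss(feat_a, feat_b):
    # Mean squared difference between the two images' Gram matrices.
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.mean((ga - gb) ** 2))

features = np.random.rand(3, 16)              # stand-in for CNN activations
assert style_loss(features, features) == 0.0  # identical 'style' -> zero loss
```

A full implementation would extract these features from several layers of a pretrained CNN and then optimize the output image to minimize this style loss alongside a content loss.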
Another early algorithm for image-making with ML is DeepDream. DeepDream was initially created as an explainability tool: a way to understand what image classification models were picking up on when they mistakenly labeled a baboon as a Welsh Corgi. The algorithm is applied to individual layers in a deep neural network, tweaking the input image until it produces maximum activation for one or more desired layers. If one of these layers, for example, is tuned to recognize dogs, dog faces will emerge from the output in unexpected ways — a psychedelic puppyscape. In the words of its creator, Alex Mordvintsev, when DeepDream is used artistically it starts to “do things it is not designed for, like detect some traces of patterns that it is trained to recognize and then trying to amplify them to maximize the signal of the input image.” DeepDream images are known for producing strange animal conglomerations that art and ML enthusiasts call ‘puppyslugs’. Mordvintsev explains that “because ImageNet dedicates a lot of its capacity to dog breeds, it triggers a strong bias in the data”⁵. Training on other datasets, like MIT’s Places image data, results in architectural landscapes, as seen in the DeepDream work of Gene Kogan⁶.
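At its core, DeepDream is gradient ascent on the input image: nudge the pixels so a chosen layer activates more strongly, then repeat. A one-dimensional toy sketch of that step follows; a real implementation backpropagates through a deep CNN at multiple scales, whereas here the ‘layer’ is just a linear filter, so its gradient with respect to the image is simply the weight vector.

```python
import numpy as np

def dream_step(image, layer_weights, step_size=0.1):
    # Gradient ascent: for this toy linear 'layer', activation = image . weights,
    # so d(activation)/d(image) is the weight vector itself.
    return image + step_size * layer_weights

image = np.zeros(4)                        # a blank 'image'
weights = np.array([1.0, -1.0, 2.0, 0.5])  # the pattern the 'layer' responds to

before = float(np.dot(image, weights))
image = dream_step(image, weights)
after = float(np.dot(image, weights))
assert after > before  # the image now excites the layer more strongly
```

Iterating this step is what ‘amplifies’ whatever traces of a pattern the layer detects, until dogs (or buildings) bloom out of noise.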
Not long after techniques like Style Transfer and DeepDream made it possible to create art with ML, the 2014 release of an influential paper⁷ outlining a new architecture — the Generative Adversarial Network (GAN) — revolutionized the field. A single convolutional neural network (CNN) can perform tasks like classification, categorization, or prediction. A GAN, however, is composed of two CNNs: a generator and a discriminator working in tandem, playing an eternal game of algorithmic Tom and Jerry (hence the ‘adversarial’). The discriminator, a typical classification network, is trained to recognize images of a particular category (for example, ‘cat’). Meanwhile, the generator iterates on random noise, finessing the meaningless noise image into something that looks enough like a cat to fool the discriminator. As the discriminator gets better at telling the generator’s fake cats apart from the real cats in its dataset, the generator gets better at making counterfeit kitties. The quick adoption of GANs in the ML community produced tiny, blurry pictures of five-legged, three-eyed dogs (the discriminator having determined these were acceptable dog images) that fascinated many artists⁸. A deep convolutional GAN (DCGAN), the simplest form of the algorithm, can be used to generate new images that fit a given training category, and to interpolate between them to produce near-infinite variations. Several of the works I discuss in this review use Pix2Pix, a more complex design in which a GAN is trained on paired representations A and B of the same image and learns to translate ‘pixel to pixel’ between them⁹. For example, Christopher Hesse’s innovative Edges2Cats implementation of Pix2Pix takes in a line drawing (a series of edges) of any form and generates an odd, squashed, yet convincingly photorealistic cat in that shape¹⁰.
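The adversarial game can be sketched in one dimension: the ‘real data’ are numbers clustered near 4, the generator is a two-parameter affine map, and the discriminator is a logistic regression. This is a drastically simplified numpy toy with invented settings, resembling a production GAN only in its alternating training dynamic.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: scalars from N(4, 1). Generator: g(z) = w*z + b with z ~ N(0, 1),
# so it starts out producing N(0, 1). Discriminator: D(x) = sigmoid(d_w*x + d_b).
w, b = 1.0, 0.0
d_w, d_b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(32)
    fake = w * z + b
    real = 4.0 + rng.standard_normal(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad_logit = p - label            # gradient of binary cross-entropy
        d_w -= lr * float(np.mean(grad_logit * x))
        d_b -= lr * float(np.mean(grad_logit))

    # Generator step: adjust (w, b) so the *same* discriminator says 'real'.
    fake = w * z + b
    p = sigmoid(d_w * fake + d_b)
    grad_logit = p - 1.0                  # the generator wants the label 1
    grad_fake = grad_logit * d_w          # chain rule through the discriminator
    w -= lr * float(np.mean(grad_fake * z))
    b -= lr * float(np.mean(grad_fake))

# After training, the generator's offset b has drifted toward the real data's
# mean of 4 -- the numeric analogue of the counterfeit cats improving.
```

The two networks chase each other exactly as described above: the discriminator sharpens its boundary, and the generator slides its output toward whatever the discriminator currently accepts as real.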
A more sophisticated GAN called BigGAN requires massive amounts of time and computational power to train, but produces more convincing and higher-resolution results than DCGAN¹¹. Finally, researchers at NVIDIA released StyleGAN in February 2019, which has further raised the bar for generated imagery, especially faces¹². A StyleGAN demo/art project created by developer Philip Wang, thispersondoesnotexist.com, inspired offshoots like Christopher Schmidt’s thisrentaldoesnotexist.com and Janelle Shane’s curated collection of StyleGAN cats¹³. Carnegie Mellon BCSA alumnus Joel Simon created Artbreeder (formerly known as GANbreeder)¹⁴, a frontend for BigGAN and StyleGAN where users can interpolate categories and adjust weights without writing a single line of code (combining, for example, a Pomeranian puppy with a cotton ball to make an indescribable puffy thing.)
The methods described above are not the only ones available for making art with machine learning. Recurrent Neural Networks (RNNs) allow for the creation of sequential output, and are therefore popular among artists making time-based work like theater, dance, or animation¹⁵. The text generator GPT2 can respond to a written prompt with surprisingly realistic grammar and matching syntactical style, providing a basis for generated poetry¹⁶. Most of the work I discuss here uses GANs, so for the sake of time and relevance, I won’t explain sequential techniques in detail.
II. What can AI art teach us about authorship?
AI art has already generated what seems to be more than its fair share of controversy, provoking questions about authorship, copyright, and legal issues. But even the most sophisticated AI today is far from artificial general intelligence (AGI), and current neural networks can only simulate isolated functions of the human brain (like visual perception) in highly controlled conditions. It would be not only hyperbolic, but flat-out erroneous, to say of AI art that ‘a robot decided what to paint and painted it.’ This mischaracterizes the efforts of the dozens of humans who contributed to the software components, dataset, and implementation of the project¹⁷. In short, AI is not ‘making art’; humans are making art with AI.
But if current AI is merely a tool and not a co-creator, what’s all the fuss about? It turns out much of the controversy stems from the idea of ‘derivative work’, a concept intrinsic to copyright law, which computational art — especially ML art — is constantly calling into question. “The number of people who can potentially be credited as coauthors of an artwork has skyrocketed,” explains writer Jason Bailey¹⁸. In 2018, the French art collective Obvious trained a GAN on a dataset of historical art that a young American researcher, Robbie Barrat, had scraped and curated for his own GAN. Obvious used their model to generate what some called ‘the world’s first painting made by AI’, titled Portrait of Edmond de Belamy, but never acknowledged the contributions of Barrat or any of the researchers who had developed GANs in the first place. When Portrait sold for over $400,000 at the auction house Christie’s, many ML artists and researchers were outraged. When artists use tools and techniques from research communities with a long tradition of citation and acknowledgement, they ought to honor those norms and credit their sources.
The pot of art-world controversy was further stirred when artist Alexander Reben began a project in which he sourced images from GANbreeder (now Artbreeder), images Reben believed to be randomly generated but which were in fact made by other Artbreeder users. Artbreeder allows users to creatively ‘remix’ generated imagery, using the metaphor of “crossbreeding” from genetics. The final stage of Reben’s project involved commissioning painted replicas of the images from a ‘painting factory’ in China to question the perceived value of an original image (its “aura”, in the words of Walter Benjamin¹⁹). The project took a surprising turn when another artist, Danielle Baskin, complained that one of Reben’s images was an artwork Baskin had spent hours making on Artbreeder herself. Eventually, the two artists reached an agreement, and Artbreeder creator Joel Simon has since revised the interface to include more attribution of an image’s digital lineage. In Bailey’s discussion with cyberlaw attorney Jessica Fjeld, the two concluded that artists working with ML must reckon with a new era of copyright law, especially fair use and ‘implied copyright’ when using interfaces designed to promote collaboration through minute iterations.
III. What can AI art teach us about the society we live in?
Some artists use the data-processing power of machine learning to address questions about the individual’s relationship to society, and how that society is shaped by power dynamics. Perhaps the best known among these is Anna Ridler, whose piece Mosaic Virus explores the relationship between the natural and the artificial, and between humans and the capitalist society we have created, with a grid of generated tulips whose stage of life fluctuates based on the price of Bitcoin²⁰. Ridler’s piece references the phenomenon of ‘Tulipmania’ that took place in the Netherlands in the 1600s, when an economic bubble arose around a beautiful but useless commodity (tulips) and left many traders financially ruined when the bubble finally, dramatically popped²¹. Ridler collected her own data: 10,000 photos of tulips in all colors and patterns, making Mosaic Virus something of a rarity in a sea of models trained on widely-used datasets like ImageNet. Thus, Ridler’s “painstaking human labor counteracts the supposed ‘objectivity’ of artificial intelligence”. Through her careful process of curation and its delicate, ephemeral subject matter, Ridler breaks down the dichotomy between craftsmanship and automation and comments on the future of labor.
Critics of AI implementation are often concerned about the potential of AI to propagate human bias, reinforcing hegemonic beauty standards and the homogenization of society. Artist Sey Min, who specializes in data visualization, created her project Overfitted Society by training a convolutional neural network to classify webcam input of participants’ faces according to how well they fit beauty standards. According to the artist, “‘Produce 101’ is a popular K-pop TV show where one hundred one young participants [compete to become] a K-pop idol star. Each participant received a grade from A to F according to the votes from the viewers every week, and at the end of show, only top ten survivors debuted as a team.” The project questions beauty standards that reject diversity in favor of certain races, weights, facial structures, and even social signifiers like wealth. Min’s project replaces fault-ridden human decision-making with the supposedly ‘flawless’, data-driven logic of a trained model, thus also raising questions about the future of a society where the power to make societal decisions (i.e. who is accepted and who is excluded) is granted to AI. Min intended her project to be fun and lighthearted, while also inciting viewers to “think about our social bias and how we can avoid the overfitted society where the lack of diversity is ignored”²². The artist’s own face received a C (on an A to F scale), indicating that an idol career may not be in her future. A project with similar, albeit more disturbing, implications was Trevor Paglen and Kate Crawford’s ImageNet Roulette, where users could upload their photo and be classified in accordance with the ‘People’ category of ImageNet. Paglen and Crawford didn’t just want to reveal which disturbing categories and outdated slang lurked in the depths of ImageNet, which is used to train models for commercial and research uses alike.
They also wanted to demonstrate the tenuousness of a society that constantly takes image and superficiality for granted, thinking that representations and labels are innate instead of learned. Labels have political significance, too: in the artists’ own words, “Representations aren’t simply confined to the spheres of language and culture, but have real implications in terms of rights, liberties, and forms of self-determination²³.”
Gene Kogan, already mentioned several times in this review, is also interested in human society, as evidenced by his recent work involving civilizations and their politics. In Invisible Cities, Kogan and his collaborators used Pix2Pix to create a series of photorealistic satellite maps generated from the input of a multicolored graphic denoting buildings, bodies of water, roads, and the like²⁴. Though the training data relied on graphics paired with the corresponding real-life satellite imagery, the final result could take in any sketch using the designated colors and create a convincing ‘map’ of this imagined landscape. Kogan cites a passage from Italo Calvino as inspiration: “Cities, like dreams, are made of desires and fears… everything conceals something else”. Cities, accumulations of human dwellings and infrastructure, encode information about how humans live, and the highly regulated architecture of modern cities is reflected back in Kogan’s imaginary ones.
Another project by Kogan, entitled Meat Puppet, pointedly questions the phenomenon of fake news and political figureheads by applying Pix2Pix to video input from a webcam²⁵. Specifically, Kogan trained a model on footage of Donald Trump and applied it to webcam input of himself as he made silly faces, generating a shockingly lifelike, somewhat grotesque video of Trump smiling, laughing, and grimacing in sync. Though the project didn’t have an audio component, it’s easy to imagine how it could be exploited to make convincing fake videos in the current sociopolitical environment, when many American citizens scorn reputable media in favor of extremist, click-mongering platforms. Kogan began this work in early 2017, before the highly controversial phenomenon of ‘deepfake’ political videos and pornography began to appear on the internet. Any dedicated machine learning enthusiast would be capable of implementing Pix2Pix for video, so it’s not necessarily the case that Kogan set a direct precedent for the originators of deepfakes, though his work may have been an influence. Since these first scandals, deepfake video has been banned from some prominent social media platforms such as Twitter, as critics warn the technique has the potential for consequential, widespread abuse in political smear campaigns or ‘revenge porn’²⁶. It’s not clear whether Kogan intended his original work as political commentary, or whether it was merely an exercise with unintended reverberations in the ML art community and beyond. Regardless, Meat Puppet shows how crucial it will soon become for media consumers to understand the power of machine learning and to identify works made with it, ‘artistic’ or otherwise. More generally, deepfakes demonstrate how, in the words of the theorist Grimes, “biology is superficial / intelligence is artificial”²⁷. It won’t be long until the human brain can control the appearance of its ‘meat puppet’ at will, at least in digital space. Is humanity prepared for this future?
IV. What can AI art teach us about being human?
As humanity’s best attempt yet at simulating a conscious being, artificial neural networks are a powerful tool for questioning all of cognition — the workings of a brain built on heuristics, biases, and memories. It comes as no surprise, then, that many artists using ML make work about intelligence and cognition. As we teach AI to think like a human, what can AI teach us about being human? Within the broader theme of consciousness, the works below address sub-disciplines of cognitive neuroscience such as object recognition and memory, as well as collective knowledge and ‘greater’ consciousnesses.
In the human brain, simple sensory inputs of line, color, and motion converge in higher visual cortices. Inputs are parsed into discrete objects and bound into a unified perception, activating connections to familiar faces, places, and categories of objects stored in the temporal cortex and the hippocampus. At the lowest level, image processing in computers works the same way: a particular combination of edges and hues generates activation for a familiar category — ‘that’s a cat’. But even a dataset of 100,000 labeled images can’t match the flood of data that inundates a human child at every moment, allowing children to learn faster and more thoroughly than any current AI. Still, errors in algorithmic visual perception can be remarkably similar to those made by humans: take the case of pareidolia, as shown by the project Cloud Face by Korean art collective Shinseungback Kimyonghun²⁸. When it comes to categorizing and demarcating the world, we might ask ourselves whether the world is made not of objects (as Plato once argued) but of facts and beliefs. What if the categories we use to delineate our everyday experiences are not innate but learned? The same collective poses this question in their project Animal Classifier, which displays images it has classified into one of 14 bizarre categories — including ‘belonging to the emperor’ or ‘that from a long way off look like flies’ — drawn from a Borges story about language²⁹. Language shapes perception more than one might expect.
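The low-level correspondence between biological and machine vision comes down to convolution: sliding a small filter over an image so that particular combinations of edges produce strong activations. A minimal sketch with a hand-written Sobel-style vertical-edge filter (in a CNN, such filters are learned rather than hand-written):

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution (valid mode): slide the kernel over the image,
    summing elementwise products -- the basic operation behind CNN 'edge detectors'."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical edge: dark left half, bright right half
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
response = convolve2d(image, sobel_x)
assert response.max() > 0  # the filter fires where brightness changes left-to-right
```

Stacking many such filters, and feeding their responses into further layers, is how a network builds up from ‘edge here’ to ‘that’s a cat’.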
Unintentionally on the part of researchers, the data fed to an untrained ML model typically represents only a certain subset of real-life data (consider facial recognition algorithms that were trained almost entirely on white faces, and thus struggled to recognize people of color³⁰). Tom White’s series The Treachery of ImageNet shows how data, rife with biases and unintentional gaps or limitations, can mislead AI into making inaccurate predictions about the world. White algorithmically abstracted images of common objects, like a toilet or an electric fan, to the point that they were unrecognizable to humans. But the images were abstracted such that even a trained classification network, like VGG19, still identified the meaningless design as an image of the original object³¹. The captions and the title of the piece refer to René Magritte, the Belgian surrealist known for his painting The Treachery of Images. Magritte painted his famous pipe at a time when advances in art theory and the advent of photography were calling into question everything Western society thought it knew about images. Nearly a century later, in a similarly image-driven era of confusion, Magritte’s statement on the fallibility of perception has become relevant once again.
Artist Memo Akten sees ML art as a way to “reflect on how we make sense of the world”³², showing that a perceiver, human or machine, can only interpret the new through the lens of what it has already seen. Akten’s video Learning to See demonstrates a Pix2Pix model trained on beautiful landscape images and tested on mundane desktop scenes involving cords, office supplies, and other everyday items. As the artist notes in the caption, “It can only see what it already knows, just like us.”³³ Our perception of daily life is limited by what we know and what we’ve experienced. Certain settings may always incite boredom and despair, but as Akten’s video shows, there is potential to find beauty and inspiration even in the winding shape of an electrical cord.
Instead of simulating a single human being, some artists see AI as embodying the knowledge of a ‘greater’ consciousness, perhaps a being aware of the ‘meaning of life’. Gene Kogan’s work A Book From the Sky references Xu Bing’s series of the same title, where Bing painstakingly wrote thousands of nonexistent Chinese characters using only existing radicals³⁴. The title is a translation of a Chinese term that once referred to divine texts, but has evolved into a euphemism for gibberish. Kogan reinterprets this confusion of godly/profane and meaningful/meaningless, using a DCGAN trained on a database of handwritten characters to generate the model’s best emulations of them. Kogan also demonstrates how the latent space between characters reveals “imaginary characters which are interpolated from in between real ones, perhaps corresponding to semantically intermediate concepts”³⁵, suggesting that machine learning could reveal hidden knowledge a human would not think to seek.
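The latent-space exploration Kogan describes rests on a simple operation: linear interpolation between two latent codes, each of which a trained generator would decode into an image or character. A sketch of the interpolation itself, with the generator omitted (the codes below are hypothetical placeholders):

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    # Evenly spaced points on the straight line between two latent codes.
    # Feeding each point to a trained generator would yield outputs that
    # morph smoothly from one concept to the other.
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0, 1, steps)]

z_a = np.zeros(8)   # hypothetical code for one character
z_b = np.ones(8)    # hypothetical code for another
path = interpolate(z_a, z_b)
assert np.allclose(path[0], z_a) and np.allclose(path[-1], z_b)
```

The midpoints of such a path are exactly the “imaginary characters interpolated from in between real ones” that Kogan describes.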
Mario Klingemann’s work with GPT2, a language model, comments on what humans collectively derive as the ‘meaning’ of life. Klingemann trained GPT2 on famous quotes and used t-SNE (a statistical technique for clustering data) to process the results, producing a map with words like “god”, “death”, “beauty”, and “money” displayed largest (indicating that the model deemed them most indicative of the ideas expressed in famous quotes³⁶). Later, Klingemann realized that he had accidentally “purged ‘love’” when using another, simpler algorithm to remove common words like ‘the’ and ‘has’ from the map³⁷. Though this oversight was coincidental, it makes an interesting point about how algorithms and humans sometimes disagree on what is salient and important. In the future, as the cliché goes, researchers may be teaching AI about the meaning of life. But perhaps an AI trained on famous quotes and literature, potentially more well-read than any individual human, has something to teach us about the meaning of life.
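Klingemann’s ‘purged love’ mishap is easy to reproduce in spirit: a frequency-based stopword filter has no notion of meaning, only of counts. (The quotes and threshold below are invented for illustration; this is not Klingemann’s actual pipeline.)

```python
from collections import Counter

quotes = [
    "love is the beauty of the soul",
    "the love of money is the root of all evil",
    "death is the great equalizer",
]

# Count how often each word appears across all quotes
counts = Counter(word for q in quotes for word in q.split())

# Naive 'common word' filter: drop anything that appears too often.
# With a crude enough threshold, a frequent-but-meaningful word like
# 'love' gets purged right alongside 'the' and 'is'.
threshold = 2
kept = {w: c for w, c in counts.items() if c < threshold}

assert "love" not in kept   # accidentally removed, just like 'the'
assert "death" in kept      # rarer, so it survives
```

To the counting algorithm, ‘love’ and ‘the’ look identical: both are just frequent strings, which is exactly the human/algorithm disagreement about salience the anecdote illustrates.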
Several artists have experimented with generated faces to examine familiarity and how memory falters over time. Another work by the prolific Klingemann, entitled Memories of Passersby I, uses a GAN to generate infinite images of imaginary people³⁸. Drawing from its training on a dataset of Old Master paintings, the GAN produces somewhat distorted hypothetical faces, often with a soft, Mona Lisa smile, that register somewhere between a remembered face and one seen in a dream. Mike Tyka made a similar project titled Portraits of Imaginary People, albeit trained on high-resolution photographs of people instead of paintings³⁹. The emotional effect of a forgotten or imagined person is reminiscent of the mechanism of human memory, in which a given memory is minutely transformed and degraded every time it is recalled, until it becomes unrecognizable. While Memories of Passersby I was trained on historical artwork to generate masculine and feminine portraits in real time, Klingemann’s later work Uncanny Mirror introduces the element of viewer interaction to create an ‘algorithmic mirror.’⁴⁰ The program extracts face data from the viewer and uses their facial structure and pose as a basis on which to “dream,” generating surreal faces that echo some elements of the viewer’s movement while replacing others with the features of strangers. Finally, Circuit Training lets the viewer contribute to training the algorithm, rather than just volunteering the test input⁴¹. Participants can not only provide images of themselves as training data, but also help the AI curate ‘interesting’ images to prioritize in learning, ideally resulting in an image that is aesthetically interesting and novel. By “incorporating the value judgments” humans make, the AI learns to ‘be human.’
While the timespan of technological development is always unclear, I believe that artificial general intelligence (AGI) — an AI as intelligent as, or more intelligent than, a human, and able to think and feel like one — will be developed within my lifetime, unless humans intentionally decide otherwise. The topic is therefore of extreme urgency for artists. Artwork made with or about artificial intelligence shows what AI can teach us about society and about ourselves: as conscious beings with sophisticated brains, we tend to take for granted simple cognitive capacities like perception, memory, and language, among other things.
Generated imagery has the potential for exploitation in the form of deepfakes and other techniques yet to be devised by malicious actors. And ML is far from accessible to everyone, meaning there is a power imbalance in who can create art with it. With these caveats in mind, ML art can also help us better understand ourselves and the world around us.
- “Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning.” Skymind, skymind.ai/wiki/ai-vs-machine-learning-vs-deep-learning.
- “A Beginner’s Guide to Neural Networks and Deep Learning.” Skymind, https://skymind.ai/wiki/neural-network.
- Gatys, Leon A, et al. “A Neural Algorithm of Artistic Style.” ArXiv, 26 Aug. 2015, arxiv.org/abs/1508.06576.
- Kogan, Gene. “Why Is a Raven Like a Writing Desk?” Vimeo, 13 Sept. 2015, vimeo.com/139123754.
- Bailey, Jason. “DeepDream Creator Unveils Very First Images After Three Years.” Artnome, 2 Jan. 2019, www.artnome.com/news/2018/12/30/deepdream-creator-unveils-very-first-images-after-three-years.
- Kogan, Gene. “Neural Synthesis.” Gene Kogan, 2017, genekogan.com/works/neural-synth/.
- Goodfellow, Ian J, et al. “Generative Adversarial Networks.” ArXiv, 10 June 2014, doi:arXiv:1406.2661.
- Goodfellow, Ian J. “NIPS 2016 Tutorial: Generative Adversarial Networks.” GroundAI, 31 Dec. 2016, www.groundai.com/project/nips-2016-tutorial-generative-adversarial-networks/.
- Isola, Phillip, et al. “Image-to-Image Translation with Conditional Adversarial Networks.” ArXiv, 21 Nov. 2016, doi:arXiv:1611.07004.
- Hesse, Christopher. “Image-to-Image Demo.” Affinelayer.com, 19 Feb. 2017, affinelayer.com/pixsrv/.
- Schwab, Katharine. “A Google Intern Built the AI behind These Shockingly Good Fake Images.” Fast Company, 2 Oct. 2018, www.fastcompany.com/90244767/see-the-shockingly-realistic-images-made-by-googles-new-ai.
- Karras, Tero, et al. “A Style-Based Generator Architecture for Generative Adversarial Networks.” ArXiv, 12 Dec. 2018, doi:arXiv:1812.04948.
- Shane, Janelle. “GANcats.” AI Weirdness, 7 Feb. 2019, aiweirdness.com/post/182633984547/gancats.
- Simon, Joel. Artbreeder, artbreeder.com.
- “A Beginner’s Guide to LSTMs and Recurrent Neural Networks.” Skymind, skymind.ai/wiki/lstm.
- Radford, Alec, et al. “Better Language Models and Their Implications.” OpenAI, OpenAI, 14 Feb. 2019, openai.com/blog/better-language-models/.
- Bailey, Jason. “The AI Art At Christie’s Is Not What You Think.” Artnome, 14 Oct. 2018, www.artnome.com/news/2018/10/13/the-ai-art-at-christies-is-not-what-you-think.
- Bailey, Jason. “Why Is AI Art Copyright So Complicated?” Artnome, 27 Mar. 2019, www.artnome.com/news/2019/3/27/why-is-ai-art-copyright-so-complicated.
- Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. Penguin Books, 2008 (originally published 1936).
- Ayers, Elaine. “Using AI to Produce ‘Impossible’ Tulips.” Hyperallergic, 1 Mar. 2019, hyperallergic.com/487261/anna-ridler-tulipmania/.
- Wikipedia contributors. “Tulip mania.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 25 Sep. 2019. Web. 15 Dec. 2019.
- Min, Sey. “Overfitted Society.” Ttoky, 2018, https://www.ttoky.com/produce_101/index_101.html.
- Crawford, Kate, and Trevor Paglen. “Excavating AI: The Politics of Images in Machine Learning Training Sets.” Excavating AI, 2019, www.excavating.ai/.
- Kogan, Gene. “Invisible Cities.” Opendotlab, opendot.github.io/ml4a-invisible-cities/.
- Kogan, Gene. “@Genekogan on Twitter.” Twitter, 28 Apr. 2017, twitter.com/genekogan/status/857922705412239362.
- Shinseungback Kimyonghun, http://ssbkyh.com/works/cloud_face/
- Shinseungback Kimyonghun, http://ssbkyh.com/works/animal_classifier/
- Lohr, Steve. “Facial Recognition Is Accurate, If You’re a White Guy.” The New York Times, 9 Feb. 2018, www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.
- White, Tom. “The Treachery of ImageNet.” Dribnet, dribnet.bigcartel.com/category/the-treachery-of-imagenet.
- Zachariou, Renée. “Machine Learning Art: An Interview With Memo Akten.” Artnome, Artnome, 16 Dec. 2018, www.artnome.com/news/2018/12/13/machine-learning-art-an-interview-with-memo-akten.
- Akten, Memo. “Learning to See.” Memo.tv, 2017, www.memo.tv/portfolio/learning-to-see/.
- “Xu Bing | Book from the Sky | Ca. 1987–1991.” Metmuseum.org, The Metropolitan Museum of Art, 2000, www.metmuseum.org/art/collection/search/77468.
- Kogan, Gene. “A Book from the Sky 天书: Exploring the Latent Space of Chinese Handwriting.” Genekogan.com, 15 Dec. 2015, genekogan.com/works/a-book-from-the-sky/.
- Klingemann, Mario. “@Quasimondo on Twitter.” Twitter, 19 Mar. 2019, twitter.com/quasimondo/status/1107967907999465475.
- Klingemann, Mario. “@Quasimondo on Twitter.” Twitter, 19 Mar. 2019, twitter.com/quasimondo/status/1107981012422803458.
- Triano, Alberto, director. Memories of Passersby I by Mario Klingemann. Vimeo, Onkaos, 30 Oct. 2018, vimeo.com/298000366.
- Tyka, Mike. “Portraits of Imaginary People.” Ars Electronica Festival 2017, 2017, ars.electronica.art/ai/en/portraits-of-imaginary-people/.
- Triano, Alberto, director. Uncanny Mirror by Mario Klingemann. Vimeo, Onkaos, 16 May 2019, vimeo.com/336559940.
- Triano, Alberto, director. Circuit Training by Mario Klingemann. Vimeo, Onkaos, 28 May 2019, vimeo.com/338883309.