Dreaming of Electric Sheep

10,000 Electric Sheep in a Book Created by Artificial Intelligence

by Dr. Ernesto Diaz-Aviles, Co-founder and CEO at Libre AI

We are excited to announce that the book “Dreaming of Electric Sheep” is now available on Amazon. All royalties will be used to support our initiatives on AI/ML literacy.

The book contains 10,000 sheep drawn by an Artificial Intelligence based on Machine Learning (AI/ML).

The machine, called Laila, was trained using tens of thousands of sheep doodles made by real people [1]. After a while, Laila started to dream of sheep and created drawings that are as good (or as bad and sketchy) as the ones produced by humans :)


“Do androids dream?” 
― Philip K. Dick, Do Androids Dream of Electric Sheep?

From Artificial Artificial Intelligence to Artificial Intelligence

More than 12 years ago, Amazon coined the term artificial artificial intelligence for processes that outsource tasks to humans, especially tasks computers are (or were?) lousy at, e.g., identifying objects in a photograph, writing short product reviews, transcribing podcasts, or watching a short film and then describing the emotions it elicits.

Crowdsourcing emerged as a new form of labor on demand [2], and Amazon Mechanical Turk (MTurk) provided the mechanisms to harness the power of a global workforce to complete tasks at scale. It was in this context that Aaron Koblin’s project, The Sheep Market, was born.

The Sheep Market is a collection of 10,000 sheep created by workers on Amazon’s Mechanical Turk. Each worker was paid $0.02 (USD) to complete the HIT (human intelligence task): “draw a sheep facing left.”

Fast-forward to 2018: AI/ML is currently showing impressive performance in fields such as computer vision and natural language processing. AI/ML is expected to impact many industries, helping us with our tedious tasks, taking our jobs, or something in between.

This book is an experiment illustrating how certain crowdsourced tasks, e.g., drawing a sheep for one of MTurk’s artificial artificial intelligence HITs, can now be completed by machines at close to human-level performance, potentially taking those micro-payments away from human workers. Workers who, paradoxically, helped the machines learn from the very datasets they created by completing micro-tasks.

For example, to create this book the machine, an AI/ML model, was trained on a dataset of sheep sketches crowdsourced from real people [1] and is then able to generate original sheep drawings on its own.

Page 0x00 of the book showing 100 sheep drawings that are as good (or as bad and sketchy) as the ones produced by humans :)

How to teach a machine to draw (electric) sheep?

We humans are imperfectly ingenious at sketching sheep. Remember The Sheep Market project? Or have a look at some of the tens of thousands of sheep doodles made by real people in The Quick, Draw! experiment.

“… if every person drew a perfect sheep they would all be the same and it would be a horrible project … it’s the little variations that give the character and make it interesting.” — Aaron Koblin, The Sheep Market project.

All the variation and diversity humans express when sketching a sheep would be extremely difficult to capture in a program written in Python, Java, or C++.

AI/ML to the rescue. If you want to teach the machine to draw a sheep using AI/ML, you won’t tell it to draw the head, then the body with fleece, the face, and eyes. You will simply show it thousands and thousands of drawings of sheep, and eventually the machine will learn and work things out. If it keeps failing, you don’t rewrite the code. You just keep training it.

For this book, the machine, called Laila, was trained using tens of thousands of sheep doodles made by real people [1]. After a while, Laila started to dream of sheep and created drawings that are as good (or as bad and sketchy) as the ones produced by humans :)

Why, why, why are we doing this?

The answer to this question is multifaceted.

  • Experimentation with AI/ML of course, but also an exploration on creating something tangible, a book, generated by a machine. Let’s see if it becomes a best seller :)
  • Contrast the crowdsourcing disruption and the current AI/ML disruptive wave
  • l’art pour l’art and l’ia pour l’ia (art for art’s sake, and AI for AI’s sake)

Why Sheep?

More sheep than people living in Ireland.

We have been living in Ireland for a couple of years now, and you can find more sheep than people on this beautiful island :) These cute sheep inspired the electric sheep dreamed by Laila, which are also made in Ireland with ❤

What do you do with a book containing 10,000 electric sheep?

The possibilities are infinite :) Here are some ideas:

  • You can count these electric sheep to help you fall asleep; if you reach 10K and you are still awake, just start over from a random page
  • You can use the book as a coloring book for yourself or your kids (this is what we do ;) )
  • You can study the drawing skills of machines
  • You can give one as a present to your geeky friend and let her/him figure out what to do with it
  • You can practice your scissor-cutting skills and cut out 5K sheep (pages are printed on both sides)
  • You can buy a second copy and continue practicing your scissor-cutting skills on the other 5K sheep
  • You can use the pages as wallpaper (patent pending)
  • Feel free to add more ideas in your comments below ;)

How did we do it?

We use an AI/ML algorithm called Generative Adversarial Networks (GANs) [3]. A GAN is a system of two deep neural networks competing with each other in a zero-sum game. The two players in the game are called the Generator and the Discriminator.

The Generator, as its name implies, generates candidates, and the Discriminator evaluates them. The Generator’s training goal is to produce novel synthesized instances that appear to have come from the set of real instances, whereas the Discriminator’s goal is to differentiate between real and generated instances.
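In the original formulation [3], this game is written as a minimax objective, where p_data is the distribution of real instances and p_z is the noise prior the Generator samples from:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```

The Discriminator D tries to push the first term up (score real data as real) and the second term up too (score fakes as fake), while the Generator G tries to push the second term down.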

Think of an art analogy: an apprentice-master relationship in which, starting from an empty canvas, the apprentice (Generator) produces paintings that the master (Discriminator), drawing on her experience and prior knowledge, judges as real pieces of art or not.

The apprentice will get better and better based on the master’s feedback, until reaching a level where the master will not be able to discriminate between her apprentice’s work and a good piece of art.
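To make the apprentice-master feedback loop concrete, here is a minimal NumPy sketch of the two binary cross-entropy losses that drive the game. This is purely illustrative, not the code used for the book, and the probability values are made up:

```python
import numpy as np

def d_loss(d_real, d_fake):
    # The master (Discriminator) wants D(x) -> 1 on real drawings
    # and D(G(z)) -> 0 on the apprentice's fakes.
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake):
    # The apprentice (Generator) wants the master to score its fakes as real.
    return -np.mean(np.log(d_fake))

# Early in training: the master easily rejects the apprentice's work.
early = g_loss(np.array([0.05, 0.10, 0.08]))
# Later: the apprentice's paintings fool the master about half the time.
late = g_loss(np.array([0.45, 0.55, 0.50]))
assert late < early  # the generator loss shrinks as its samples improve
```

Training alternates between the two: one gradient step improving the master’s judgment, one improving the apprentice’s forgeries.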

The following figure illustrates our approach:

Drawing Electric Sheep with a DCGAN.

In a nutshell:

  • The Dataset. As the training set, we use the sheep dataset from The Quick, Draw! experiment.
  • Architecture. We use a Deep Convolutional Generative Adversarial Network, or DCGAN [4], to implement our approach.
  • Deep Learning Tools. We use Keras with a TensorFlow backend for the model implementation and JupyterLab for prototyping.
  • Models. After the Generator and Discriminator models are trained, we save them independently.
  • From random noise to sheep. We then use the Generator to produce sheep drawings from random noise. The results are as good (or as bad and sketchy) as the ones produced by humans ;)
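The last step can be sketched as follows. This is a hedged illustration: the trained Keras generator is replaced by a random stub, and the latent size of 100 and the 28x28 bitmap size are assumptions (common choices for DCGANs and Quick, Draw! data), not necessarily what Laila used:

```python
import numpy as np

LATENT_DIM = 100   # assumed latent size, a common DCGAN choice
IMG_SIZE = 28      # assumed bitmap size for the sketches

def generator(z):
    # Stub standing in for the trained DCGAN Generator: maps each
    # latent vector to a 28x28 grayscale bitmap (here, random pixels).
    rng = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return rng.random((z.shape[0], IMG_SIZE, IMG_SIZE))

# Sample 100 latent vectors from a standard normal, one per sheep.
z = np.random.default_rng(0).standard_normal((100, LATENT_DIM))
sheep = generator(z)

# Tile the 100 drawings into a 10x10 grid, like one page of the book.
page = sheep.reshape(10, 10, IMG_SIZE, IMG_SIZE)
page = page.transpose(0, 2, 1, 3).reshape(10 * IMG_SIZE, 10 * IMG_SIZE)
assert page.shape == (280, 280)
```

In the real pipeline, the saved Generator model replaces the stub, and each tiled grid becomes one page of sheep like page 0x00 above.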

More details on how we did it can be found in this post: https://medium.com/@libreai/draw-me-an-electric-sheep-9a3e0b5fe7d5

~ Fin ~


About the author
Dr. Ernesto Diaz-Aviles is the Co-Founder and CEO at Libre AI. Twitter: @vedax


Notes

  • In the paper A Neural Representation of Sketch Drawings, by David Ha and Douglas Eck (2017), https://arxiv.org/abs/1704.03477 , a recurrent neural network (RNN) is used to construct drawings based on path strokes, not pixels. We instead used a GAN, since we think it helps the explanation based on the apprentice-master analogy.
  • Our model uses pixel-level information from the images both for training and for generating new instances. To improve the image quality, we convert the output bitmaps into vector graphics (SVG) using Potrace.

References

[1] Quick, Draw! Sheep Dataset. https://quickdraw.withgoogle.com/data/sheep ; data made available by Google, Inc. under the Creative Commons Attribution 4.0 International license.

[2] Jeff Howe. 2006. The Rise of Crowdsourcing. Wired. [https://www.wired.com/2006/06/crowds/]

[3] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS’14). [https://arxiv.org/abs/1406.2661]

[4] Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. CoRR abs/1511.06434. [https://arxiv.org/abs/1511.06434]