How to Fake It As an Artist with Docker, AWS and Deep Learning

Luis Herrera Benítez
DevOpsion
9 min read · Oct 31, 2016


Co-authored with Álvaro Barbero, Docker ninja and Chief Data Scientist at Instituto de Ingeniería del Conocimiento (IIC)

“Good artists copy, great artists steal” — Pablo Picasso

In the UK Channel 4 documentary series “Faking It”, Paul O’Hare, a painter and decorator from Liverpool, was given just four weeks to transform himself into a fine artist and attempt to fool the critics at a London art gallery. We are going to show you how to do it in less than half an hour, including the time you need to read this post, with a little help from Docker, AWS and Deep Learning. And for less than $10.

A stroke of genius

In order to speed up your transformation, we are going to rely on an artificial intelligence system. This AI system is based on a deep neural network that creates artistic images indistinguishable (we think) from the works of an artist. How is this achieved? By combining the content of one image (a portrait or a landscape photograph) with the style of another image (typically, the work of a recognized artist). We’ll use an algorithm called neural-style, based on a powerful deep network for image processing.

But it’s much easier to see what it does by looking at the resulting pictures. These pictures are worth a thousand hours of research or lines of code ;)

The mysterious workings of this algorithm start by reusing a prebuilt deep neural network known as VGG19, developed by Oxford researchers and one of the top performers in the ImageNet ILSVRC 2014 competition. This network employs multiple layers of Convolutional Neural Networks (CNNs) to refine an image from raw pixels into higher-level, more conceptual representations of the picture. So high-level, in fact, that the style can be faithfully represented by the correlations emerging at the deepest levels of the network. It is through this pixels-versus-style decomposition that the network can be exploited to create new images, redrawing the pixels of a photo in the style of a different picture.
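For the mathematically curious, the formulation by Gatys et al. that neural-style implements makes this precise (notation follows their paper “A Neural Algorithm of Artistic Style”; this is a sketch of the key equations, not the full derivation). The generated image \vec{x} minimizes a weighted sum of a content loss and a style loss, where \vec{p} is the content photo and \vec{a} is the style image:

L_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha L_{content}(\vec{p}, \vec{x}) + \beta L_{style}(\vec{a}, \vec{x})

The style loss compares the Gram matrices of the filter activations F^{l} at each network layer l:

G^{l}_{ij} = \sum_{k} F^{l}_{ik} F^{l}_{jk}

These filter correlations are exactly the deep-layer correlations that capture style, while the ratio \alpha / \beta controls the trade-off between keeping the photo’s content and adopting the painting’s style.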

Until now, implementing and deploying this algorithm was no easy task, for a number of reasons. We’ll explain in the rest of this blog post how to do it with no more than three commands. But before we do, let us explain a few more details. Now that we have figured out how to perform the strokes (VGG19), we need:

  • Our painting palette: the Torch deep learning libraries, which allow us to implement our neural painter (based on the work of Justin Johnson).
  • A way of accelerating our artwork production, as we don’t want to wait hours, days, weeks or months for the results. We’ll use graphics processing units (GPUs) together with a traditional CPU to accelerate our deep learning algorithm (our strokes) and reduce the wait to a few minutes.

Problems of modern painting

We are nearly there. But first we have to resolve one last challenge. The fast pace of innovation and change in ML tools and libraries means that our working environment can break easily. The sheer number of libraries and prerequisites needed to run this algorithm poses a problem:

  • NVIDIA GPU drivers, for using the GPUs
  • the CUDA development kit, for controlling the GPUs
  • the cuDNN libraries, for GPU deep network calculations
  • Torch7, a framework for deep network development, and its dependencies (protobuf)
  • the loadcaffe Lua module for loading prebuilt networks; this is how we’ll apply VGG19

We need all of this software to automate running the algorithm with different images, styles and parameters in order to attain the most impressive results. All of this results in a brittle environment, where the whole setup stops working if an unnoticed update hits the GPU drivers or the Torch version. This means that not only is the initial environment setup burdensome, but we will also have to repeat the effort from time to time to keep everything in good shape. This is far from ideal: the canvas must be at hand when sudden inspiration strikes the artist, and likewise our tools must be ready to use the moment we feel a burst of creativity, never subject to the pesky dependencies of an ever-changing environment.

Docker comes to mind as the obvious solution to this problem. But there is a catch: Docker isolates our process from its environment, and that includes the particular hardware resources of the host machine. Unfortunately, our deep network algorithm requires direct access to a GPU. Docker is the way, but we need something else.

Image from: https://github.com/NVIDIA/nvidia-docker

The answer to our demands is nvidia-docker, a wrapper around Docker that allows us to run containers that leverage NVIDIA GPUs. Through this command, Docker automatically mounts the host GPU drivers into any running container as a volume, opening a channel through which any Docker process can run code on the host GPU. This works regardless of the particular GPU available on the host or the code running in the container.
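As a quick sanity check once everything below is installed, the canonical example from the nvidia-docker README runs nvidia-smi inside a stock CUDA container; if the wiring is right, it prints the host GPU:

$ nvidia-docker run --rm nvidia/cuda nvidia-smi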

In our particular artistic project, the host requirements are reduced dramatically: out of the long list above, we now just need to install Docker, nvidia-docker and the appropriate GPU drivers. The rest of the dependencies will be contained within a Docker image, built in a reproducible way through a Dockerfile and guaranteeing that all the moving pieces are frozen at interoperable, working versions.

Painting tools and supplies

Now we only need a place to run our system. We are going to rely on the public cloud, in particular AWS and their GPU-optimized virtual machines. AWS offers two families of GPU instances, but we are going to use the brand-new P2 EC2 instances. These instances are designed to chew through tough deep learning workloads like the one we have on our hands. Let’s provision one:

$ docker-machine create --driver amazonec2 \
    --amazonec2-instance-type p2.xlarge \
    --amazonec2-access-key *** \
    --amazonec2-secret-key *** nvidia-docker

Before running this command, you will need to:

  1. Install Docker for Mac, Docker for Windows or docker-machine on your laptop/desktop, if you haven’t done so already.
  2. Have an account on AWS. Unfortunately, P2 and G2 instances are not eligible for the AWS free tier plan…but we can do plenty of art for less than $10. If your laptop or desktop has an NVIDIA GPU, you can run our scripts locally!
  3. Create an access/secret key pair, and
  4. Since you are not allowed to launch P2 or G2 instances by default on AWS, open a ticket with AWS support to increase the limit of P2 or G2 instances you can use. It takes no more than a few hours to get it.
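Once the create command finishes, you can verify from your laptop that the new machine is up and reachable (standard docker-machine commands):

$ docker-machine ls
$ docker-machine ip nvidia-docker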

Installing the NVIDIA drivers and nvidia-docker is the second step in our journey to become a (fake) artist. We’ve put together a simple script that does the work for us:

$ docker-machine ssh nvidia-docker
$ git clone https://github.com/albarji/neural-style-docker
$ cd neural-style-docker
$ ./scripts/install-nvidia.sh

If everything goes according to plan, the script will end by printing the status of the GPU.

With this last command we are just installing the NVIDIA software packages and querying the GPU card to make sure that everything is OK.

In order to draw, you must close your eyes and sing…

So said Picasso. But before you close your eyes, here’s our last step: deploying the ML algorithm that is going to do the magic:

$ ./scripts/fake-it.sh goldengate.jpg vangogh.jpg
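If you would rather paint one of your own photos, you can upload it to the instance first. This is a sketch: myphoto.jpg is a placeholder for your file, and we assume the scripts pick up images from the repository folder. From your laptop/desktop:

$ docker-machine scp myphoto.jpg nvidia-docker:/home/ubuntu/neural-style-docker/

Then, back on the instance:

$ ./scripts/fake-it.sh myphoto.jpg vangogh.jpg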

Now, you only have to download the resulting images and publish them in an art forum like DeviantArt, or show them to your local art gallery ;). From your laptop/desktop, launch this:

$ docker-machine scp -r nvidia-docker:/home/ubuntu/neural-style-docker/output .

This gathers your results in the current directory (in our example, goldengate_by_vangogh.jpg).

Here are a few more examples of what you could do:

Give me a museum…

…and I’ll fill it, said Pablo Picasso once again. We can now, too! And without spending a lifetime on it. We have provisioned some styles and some content images to get you started, so you only have to sit back, relax, close your eyes and sing. By applying different styles to the same content image, we can get an idea of how different painters would have represented the same scene or tackled a portrait.
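For instance, a small shell loop renders the same photo in several styles in one go (a sketch: vangogh.jpg is the style we used above, while monet.jpg and picasso.jpg are placeholders for whatever style images you have at hand):

$ for style in vangogh.jpg monet.jpg picasso.jpg; do ./scripts/fake-it.sh goldengate.jpg "$style"; done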

As a tribute to Docker, and for the hours, days and weeks of our lives that it saved us, we’ve decided to open our own Docker museum:

  • Afremov imagining Docker
  • Docker was used in the Roman Empire, as this old mosaic proves
  • This alleyway depicts an urban graffiti of Docker
  • A modern dockerized city by Hundertwasser
  • A classic dockerized city by Renoir
  • Ancient Greek pottery was distributed in containers
  • Picasso innovated a great deal using Docker
  • Van Gogh was impressed by Docker
  • Every math professor knows about the containerability theorem
  • Docker run potatoes

Now it’s your turn. Pick your favourite artist or art piece and some of your photographs, and transform them into pieces of art. Share your work with us! And follow us on Twitter (@albarjip and @lherrerabenitez).

PS: Don’t forget to stop and delete your P2 instance when you finish:

$ docker-machine rm nvidia-docker
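If you would rather keep the machine around for your next burst of creativity, you can stop it instead and start it again later; a stopped instance no longer bills for P2 compute time, although its EBS volume still does:

$ docker-machine stop nvidia-docker
$ docker-machine start nvidia-docker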

Our galleries

Check out “our” artwork at:

ML, DL and Docker References

New to ML, DL or Docker and want to know more? Try these links…

Building and optimizing the Docker image

We’d like to improve and reduce the size of our Docker image. Could you help us?

  • Clone the project:
$ git clone https://github.com/albarji/neural-style-docker
  • Modify any of the files of the project
  • Build the new image:
$ sudo nvidia-docker build -t neural-style:2.0 .
  • Test that it works by running the example in the docs:
$ sudo nvidia-docker run --rm -v $(pwd):/images --entrypoint python neural-style:2.0 /neural-style/variants.py --contents img/docker.png --styles img/starryNight.jpg --outfolder output
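If you are hunting for layers to slim down, docker history is a good starting point: it lists the size that each step of the Dockerfile adds to the final image:

$ sudo docker history neural-style:2.0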


Luis Herrera Benítez
DevOpsion

AI & Big Data aficionado. Redis enthusiast. Xoogler. Former Docker Captain and AWS Ambassador. Everybody has a plan until they get punched in the face.