HW accelerated GUI apps on Docker

Piergiorgio Niero
Feb 13, 2017 · 6 min read


Given that the large majority of Docker examples are specific to server development and management, Docker is often perceived as a server-specific technology rather than a purpose-agnostic tool.

Using Docker for GUI applications can unlock at least a few interesting use cases, providing easier/faster configuration and enhanced portability for our desktop environment:

  • How much time could we save by replacing the configuration of our environment (dev/CI/you-name-it…) with a docker-compose file?
  • How much easier would it be to run N different versions of a browser?

At Plumbee GSN Games we conducted an investigation into running GUI applications within a Docker container; here are our findings.

X window system architecture 101

In a *nix system a GUI application plays the role of an “X client”. Each time it redraws its content, a sequence of graphics commands is encoded into the X protocol using a library (usually Xlib) and written to the X11 socket.
At the other end, an X server reads those commands from the socket and renders them onto a display.

simplified drawing pipeline
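To make the socket-mounting step we’ll use later a bit more concrete, here’s how the display name maps onto that socket on a typical host (a small check of my own, not from the original article):

echo $DISPLAY        # e.g. ":0", the display our X clients will draw to
ls /tmp/.X11-unix    # e.g. "X0", the unix socket backing display :0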

Containerizing a GUI app

Looking at the X window system architecture, it’s clear that in order to make our containerized GUI app capable of drawing on a screen we need to give it write access to the X11 socket, and we need an X server to consume the graphics commands and render them onto a display.

We can approach this problem from two angles:

  • we can bundle xvfb and a VNC server with our container image
  • or we can share the host’s X11 socket with the container as an external volume
What is bundled in the Docker image in each approach: the main difference is whether the X window system is provided by the host or bundled in the image

The first approach is the easier one to implement: it works out of the box and VNC clients are widely available.
At the same time, we have to consider that Xvfb creates an in-memory representation of the display, and VNC keeps the last updated area in memory, so in the worst case we end up with the size of the frame buffer allocated twice for each running container instance.
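For reference, here’s a minimal sketch of the first approach as it could be run inside a stock Ubuntu container (the package names and resolution are illustrative, not taken from our actual image):

# install a virtual framebuffer X server, a VNC server and a test app
apt-get update && apt-get install -qqy xvfb x11vnc x11-apps

# start Xvfb as an in-memory X server on display :1 and point clients at it
Xvfb :1 -screen 0 1280x720x24 &
export DISPLAY=:1

# expose the virtual display over VNC (port 5900, no password: demo only)
x11vnc -display :1 -forever -nopw &

# any X client now renders into the virtual framebuffer
xeyes

A VNC client pointed at the container’s port 5900 would then show whatever the bundled X server renders.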

Sharing the host’s X11 socket with the container is a much more modular solution: it allows for a much leaner Dockerfile and lets the owner of the X server switch the implementation according to their needs (our host’s X server could potentially even be Xvfb itself).

Getting our hands dirty

Now that we understand the theory, let’s try to run xeyes on a fresh ubuntu container, rendering on the host’s X server:

xhost +local:root; \
docker run -d \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
ubuntu:latest \
sh -c 'apt-get update && apt-get install -qqy x11-apps && xeyes'

The command above:

  1. enables the root user to make local connections to the running X server (please consider reading the xhost man page)
  2. pulls and runs an instance of ubuntu:latest, mounting the host’s X11 socket and setting the host’s DISPLAY into the container environment
  3. installs and runs xeyes inside the container instance, rendering on the host’s X server

If everything worked correctly we should see something like the image below (notice the caption “on 248…”, which shows the ID of the Docker container running the program).

xeyes running on Docker
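One extra note of mine, not part of the original walkthrough: once we’re done experimenting, the access granted by xhost can be revoked with the symmetric call:

# remove root's local access to the host's X server
xhost -local:root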

Enabling hardware acceleration (on Nvidia GPUs)

Note to the reader:
The following section applies only if you run a system with an Nvidia GPU. In my case I am using an AWS G2 instance with an Nvidia GRID card; here you can find how I set up my machine.
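As a quick sanity check before involving Docker at all (my own suggestion, not part of the original setup notes), the Nvidia driver on the host should already be working:

# on the host: should list the GRID card and the installed driver version
nvidia-smi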

Given that we are able to run any GUI application, running a GPU accelerated app such as glxgears should be as simple as running the following command:

xhost +local:root; \
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
ubuntu:latest \
sh -c 'apt-get update && apt-get install -qqy mesa-utils && glxgears'

…but unfortunately it’s not: glxgears exits with an error instead.

What this error actually stands for is: “Sorry pal, I cannot find the GPU”.
In fact, even though our host correctly lists our graphics card (ls /dev | grep nvidia), no such device shows up when running exactly the same command from inside the container (actually, NO graphics card is detected inside Docker).
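A quick way to see the difference, as a sketch of my own (the exact device names depend on the machine):

# on the host the Nvidia device nodes are visible
ls /dev | grep nvidia    # e.g. nvidia0, nvidiactl, nvidia-uvm

# inside a plain container the same check comes back empty
docker run --rm ubuntu:latest sh -c 'ls /dev | grep nvidia'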

The piece we are missing is the nvidia-docker-plugin, which is described in its repo readme file as:

a Docker plugin designed to ease the process of deploying GPU-aware containers in heterogeneous environments. It acts as a daemon process, discovering host driver files and GPU devices and answers to volume mount requests originating from the Docker daemon

We can use nvidia-docker-plugin with the standard docker CLI, but Nvidia provides a thin wrapper (nvidia-docker) which does all the plumbing for us, making our life easier.
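To give an idea of the plumbing the wrapper hides, the equivalent plain docker invocation looks roughly like the sketch below. The driver volume name is hypothetical (nvidia-docker-plugin creates it and its name depends on the installed driver version), and the device list depends on the GPUs in the machine:

# roughly what `nvidia-docker run` adds on top of a plain `docker run`
docker run -ti --rm \
--volume-driver=nvidia-docker \
--volume=nvidia_driver_367.57:/usr/local/nvidia:ro \
--device=/dev/nvidiactl \
--device=/dev/nvidia-uvm \
--device=/dev/nvidia0 \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
ubuntu:latest \
sh -c 'apt-get update && apt-get install -qqy mesa-utils && glxgears'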

Remote system vs local system

Running the same command with nvidia-docker in place of the docker CLI does the job on a local system, but if we’re running it on a remote system (say, an EC2 instance remotely controlled via VNC) we likely need to bring VirtualGL to the party, as the display grabbed by VNC is not hardware accelerated (I wrote more on this topic under “Why do we need VirtualGL?” in this other post).

Once we’ve installed VirtualGL in our container we can finally rejoice in watching glxgears run!!!
(Remember that we need to launch it with vglrun in order to enable VirtualGL.)

I never got so excited watching glxgears!!!

A common ground

VirtualGL plays a very important role in our setup, and installing it in every Dockerfile quickly becomes too verbose.
At Plumbee GSN Games we decided to create a base image providing VirtualGL 2.5.1 on Ubuntu 16.04 and to derive our GPU-accelerated images from it.
You can find the repository on GitHub at the link below.
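As an illustration of how the base image gets reused (a sketch on my part: the blender package and the CMD are just examples, and it assumes vglrun is on the PATH in the base image):

# write a minimal Dockerfile deriving from the VirtualGL base image
cat > Dockerfile <<'EOF'
FROM plumbee/nvidia-virtualgl
RUN apt-get update && apt-get install -qqy blender
CMD ["vglrun", "blender"]
EOF

# build it once, then launch it with nvidia-docker as shown below
docker build -t accelerated-blender .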

A common base image can simplify the setup for many use cases; here’s the launch command for our beloved glxgears:

xhost +local:root; \
nvidia-docker run -d \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
plumbee/nvidia-virtualgl vglrun glxgears

Here’s one for Firefox:

xhost +local:root; \
nvidia-docker run -d \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
plumbee/nvidia-virtualgl \
sh -c 'apt-get update && apt-get install -qqy firefox && vglrun firefox'

…and here’s Blender:

nvidia-docker run -d \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
plumbee/nvidia-virtualgl sh -c 'apt-get update && apt-get install -qqy blender && vglrun blender'

And here they all are, each running in its own container, on a G2 EC2 box!

Three hardware-accelerated Docker containers running Blender, Firefox, and (obviously) glxgears on a g2.2xlarge instance on AWS

Congratulations!

We now have fire-and-forget, Dockerized, hardware-accelerated GUI applications, and the possibilities are endless: should we install Steam straight away? :D

Piergiorgio Niero
Director of Engineering @GSN London
