OpenGL and CUDA Applications in Docker

glmark2 running in a Docker container and displaying on the host

Trying to run an OpenGL application in a Docker container? Want to containerize a program that uses CUDA or TensorFlow and has a graphical aspect to it? Well, thanks to the NVIDIA Container Toolkit and the GL Vendor-Neutral Dispatch Library (glvnd), this is now possible. OpenGL code can be compiled and run directly in a container, taking full advantage of an NVIDIA GPU. And by connecting to the host’s X server, the display can be seen on the host machine.

This is a brief get-you-up-and-running article, and I assume familiarity with Docker and Linux. (Oh, and if you’re using Windows then kindly move along. There’s nothing to see here.) You’ll need to install the proprietary NVIDIA drivers and the NVIDIA Container Toolkit on your host, which is outside of the scope of this article. For the record, I’m using Ubuntu 18.04 with an NVIDIA GTX 1080 and version 440 of the driver (nvidia-driver-440). That combination works for me.

As a side note, you do not need to install the CUDA toolkit on the host; with the NVIDIA Container Toolkit alone, you can still use CUDA in a container.
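If you want a quick sanity check that the toolkit is hooked up correctly, running nvidia-smi from one of NVIDIA’s CUDA base images should print a table describing your GPU. (The image tag below is just an example; pick one that your driver supports. The --gpus flag requires Docker 19.03 or newer.)

    docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi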

glvnd and X Dependencies

The GL Vendor-Neutral Dispatch Library (glvnd) is an abstraction layer that sends OpenGL calls to one of potentially many drivers. Its common use-case is to dispatch calls to the correct driver on systems where drivers from multiple vendors are installed. There’s a nice architecture diagram in the glvnd repository that shows how the library interfaces with X and arbitrates OpenGL calls, routing them to the correct driver.

Another use-case, the one that this article focuses on, is to use glvnd to dispatch OpenGL calls from a container to the host’s NVIDIA libraries, thereby allowing a container to take advantage of a GPU.

NVIDIA provides a bunch of sample Docker images in their GitLab, and they’re a great starting point if you’re making your own custom images. That said, documentation is sparse to nonexistent, so finding a pertinent example takes time. For OpenGL they have a few glvnd images in various flavors: Ubuntu 14.04, 16.04, 18.04, and CentOS 7. Below, Ubuntu 18.04 is used as the base image. It has glvnd in the apt repositories, which makes things easy, but if you’re using an older version of Ubuntu you’ll need to compile glvnd manually as part of the image.

Four packages are needed to get glvnd working: glvnd itself, obviously, plus the vendor-neutral dispatch libraries for:

  1. legacy OpenGL (GL);
  2. the OpenGL X extension (GLX);
  3. and the native platform graphics interface (EGL).

To compile code inside the container, the dev versions of these libraries are needed. (pkg-config might also be useful to help a C/C++ compiler locate the libraries and headers.)

And if you’re using an embedded system such as an NVIDIA Jetson, you’ll also need the OpenGL ES libraries: libgles2 for running applications and libgles2-mesa-dev for building.

You’ll also need a couple of libraries for interfacing with X: namely, the miscellaneous extensions library (libxext6) and the client library (libx11-6).

Depending on your application, you may want some other libraries, like the OpenGL Utility Toolkit (freeglut3 or freeglut3-dev).

And finally, two environment variables are needed by the underlying NVIDIA container runtime: NVIDIA_VISIBLE_DEVICES=all and NVIDIA_DRIVER_CAPABILITIES=graphics,utility,compute.

Putting all of that together, here’s a starting point for a Dockerfile.
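This is only a sketch based on the packages discussed above (the names are the Ubuntu 18.04 ones); trim whatever you don’t need.

    FROM ubuntu:18.04

    # glvnd itself; the GL, GLX, EGL, and GLES dispatch libraries; their dev
    # packages for compiling OpenGL code inside the container; the X client
    # libraries; and glut. Drop the -dev packages (and pkg-config) if you only
    # need to run, not build, applications; the GLES bits only matter on
    # embedded targets like a Jetson.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            libglvnd0 libgl1 libglx0 libegl1 libgles2 \
            libglvnd-dev libgl1-mesa-dev libegl1-mesa-dev libgles2-mesa-dev \
            libxext6 libx11-6 \
            pkg-config freeglut3 freeglut3-dev \
        && rm -rf /var/lib/apt/lists/*

    # Needed by the NVIDIA container runtime.
    ENV NVIDIA_VISIBLE_DEVICES=all
    ENV NVIDIA_DRIVER_CAPABILITIES=graphics,utility,compute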

The image can be built like so:
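For example, from the directory containing the Dockerfile (the opengl-test tag is arbitrary; it’s reused in the docker run commands below):

    docker build -t opengl-test .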

Connecting to the Host’s X Server

There are plenty of articles on the intertubes about exposing X and connecting to the host’s X server from a container. The Open Source Robotics Foundation, for example, covers the topic extensively here. But in a nutshell, to allow connections to X, run something along these lines:
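    # Permissive: lets any local root process (including the container) talk to X.
    xhost +local:root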

There are security implications to exposing X in this manner, especially if you’re on a shared system, so read the article linked above and make sure you know what you’re doing! It describes more restrictive ways of exposing X, but those are outside the scope of this article.

The /tmp/.X11-unix directory needs to be mounted into the container. This is where X’s Unix socket resides, and the container needs access to the socket to connect to the host’s X server.

DISPLAY also needs to be set in the container’s environment, which tells the container which display (screen, loosely speaking) to use. This variable can be passed from the host’s DISPLAY variable (it’s usually :1).

Depending on your application and desktop environment, you might need to set the environment variable QT_X11_NO_MITSHM to 1. This prevents Qt-based applications from using X’s shared-memory extension (MIT-SHM), which Docker’s isolation blocks. (An alternative is to share the host’s IPC namespace with the container using Docker’s --ipc host switch.)

All said and done, here’s how to get a bash shell up and running.
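Something like the following should do it, reusing the opengl-test tag from the build step above:

    docker run -it --rm \
        --gpus all \
        -e DISPLAY=$DISPLAY \
        -e QT_X11_NO_MITSHM=1 \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        opengl-test \
        bash

(If you’d rather share the host’s IPC namespace than set QT_X11_NO_MITSHM, drop that line and add --ipc host.)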

Inside the container, a good way to test that the GPU is being used is to install and run the OpenGL benchmark application glmark2.
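For example, inside the container:

    # glmark2 comes from Ubuntu's universe repository, which the official
    # base image has enabled by default.
    apt-get update && apt-get install -y glmark2
    glmark2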

If everything is working properly, an anatomically correct horse should pop up, and some details about the GPU and driver will be logged in the terminal.

Bonus: Running TensorFlow 2 and OpenAI Gym in a Container

Since I mentioned TensorFlow in the intro, and because I use TensorFlow and OpenAI Gym often, here’s an image for getting OpenAI Gym and TensorFlow 2 containerized. It’s pretty much the same as the image above, but the base image is tensorflow/tensorflow:2.2.0-gpu.
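Here’s a sketch. The gym[box2d] extra (needed for environments like BipedalWalker) and the build tools that let it compile are my additions; pin the gym version your project actually needs.

    FROM tensorflow/tensorflow:2.2.0-gpu

    # Same glvnd and X packages as before (the TensorFlow image is also
    # Ubuntu 18.04 based), plus swig and a compiler so that gym's box2d
    # extra can build.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            libglvnd0 libgl1 libglx0 libegl1 \
            libxext6 libx11-6 \
            swig build-essential python3-dev \
        && rm -rf /var/lib/apt/lists/*

    RUN pip install --no-cache-dir 'gym[box2d]'

    # Needed by the NVIDIA container runtime.
    ENV NVIDIA_VISIBLE_DEVICES=all
    ENV NVIDIA_DRIVER_CAPABILITIES=graphics,utility,compute

Build and run it exactly as before; just swap the image tag.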

And here is a screenshot of the BipedalWalker-v3 environment running in the container and displaying on the host. Neat!

That’s all, folks

I hope this article helps you out. If you have any comments or questions, drop me a line below.

Need a Developer?

Get in touch! We’re a small software company with skills in machine learning, graphical applications, deployment orchestration, web development, and more. We would love to help with your next project.
