OpenGL and CUDA Applications in Docker

Trying to run an OpenGL application in a Docker container? Want to containerize a program that uses CUDA or TensorFlow and has a graphical aspect to it? Well, thanks to the NVIDIA Container Toolkit and the GL Vendor-Neutral Dispatch Library (glvnd), this is now possible. OpenGL code can be compiled and run directly in a container, taking full advantage of an NVIDIA GPU. And by connecting to the host’s X server, the display can be seen on the host machine.
This is a brief get-you-up-and-running article, and I assume familiarity with Docker and Linux. (Oh, and if you’re using Windows then kindly move along. There’s nothing to see here.) You’ll need to install the proprietary NVIDIA drivers and the NVIDIA Container Toolkit on your host, which is outside of the scope of this article. For the record, I’m using Ubuntu 18.04 with an NVIDIA GTX 1080 and version 440 of the driver (nvidia-driver-440). That combination works for me™.
As a side note, you do not need to install the CUDA toolkit on the host. The NVIDIA Container Toolkit works against the host’s driver alone, so CUDA itself can live entirely inside the container.
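If you want a quick sanity check that the toolkit is wired up, running nvidia-smi from a stock CUDA image works well. (The image tag below is just an example; pick one that matches your driver version.) If it prints the same GPU and driver table you’d see on the host, you’re in business.
docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi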
glvnd and X Dependencies
The GL Vendor-Neutral Dispatch Library (glvnd) is an abstraction layer that sends OpenGL calls to one of potentially many drivers. Its common use-case is to dispatch calls to the correct driver on systems where drivers from multiple vendors are installed. There’s a nice architecture diagram in the glvnd repository that shows how the library interfaces with X and arbitrates OpenGL calls, routing them to the correct driver.
Another use case, and the one this article focuses on, is using glvnd to dispatch OpenGL calls from a container to the host’s NVIDIA libraries, thereby allowing a container to take advantage of a GPU.
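If you’re curious what that looks like in practice, you can spot the dispatch targets from inside a running container (set up below): the container runtime bind-mounts the host’s NVIDIA vendor libraries, and glvnd routes GLX and EGL calls to them.
ldconfig -p | grep -E 'libGLX_nvidia|libEGL_nvidia'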
NVIDIA provides a bunch of sample Docker images on their GitLab, which is a great starting point if you’re making your own custom images. That said, documentation is sparse to nonexistent, so finding a pertinent example takes time. For OpenGL they have glvnd images in a few flavors: Ubuntu 14.04, 16.04, 18.04, and CentOS 7. Below, Ubuntu 18.04 is used as the base image. It has glvnd in the apt repositories, which makes things easy; if you’re using an older version of Ubuntu, you’ll need to compile glvnd manually as part of the image.
Four packages are needed to get glvnd working: glvnd itself, obviously, and the vendor-neutral dispatch libraries for:
- legacy OpenGL (GL);
- the OpenGL X extension (GLX);
- and the native platform graphics interface (EGL).
libglvnd0 libgl1 libglx0 libegl1
To compile code inside the container, the dev versions of these libraries are needed. (pkg-config might also be useful to help a C/C++ compiler locate the libraries and headers.)
libglvnd-dev libgl1-mesa-dev libegl1-mesa-dev
And if you’re using an embedded system like an NVIDIA Jetson, you’ll also need the OpenGL ES libraries: libgles2 for running applications and libgles2-mesa-dev for building.
You’ll also need a couple of libraries for interfacing with X, namely the miscellaneous extensions library and the client library.
libxext6 libx11-6
Depending on your application, you may want some other libraries, like the OpenGL Utility Toolkit (freeglut3 or freeglut3-dev).
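For a sense of how the dev packages fit together, here’s roughly what a build looks like inside the container. (demo.c is a hypothetical source file; the gl pkg-config module comes from libgl1-mesa-dev above.)
# Install the dev packages, then compile and link against OpenGL.
apt-get update && apt-get install -y -qq --no-install-recommends \
  libglvnd-dev libgl1-mesa-dev libegl1-mesa-dev pkg-config
gcc demo.c $(pkg-config --cflags --libs gl) -o demo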
And finally, a few environment variables are needed by the underlying NVIDIA container runtime: NVIDIA_VISIBLE_DEVICES=all and NVIDIA_DRIVER_CAPABILITIES=graphics,utility,compute.
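These don’t have to be baked into the image; the runtime reads them from the container’s environment, so they can equally be passed at run time with -e (the image name here is a placeholder):
docker run --rm -it --gpus all \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=graphics,utility,compute \
  my-image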
Putting all of that together, here’s a starting point for a Dockerfile.
FROM ubuntu:18.04

# Dependencies for glvnd and X11.
RUN apt-get update \
  && apt-get install -y -qq --no-install-recommends \
    libglvnd0 \
    libgl1 \
    libglx0 \
    libegl1 \
    libxext6 \
    libx11-6 \
  && rm -rf /var/lib/apt/lists/*

# Env vars for the nvidia-container-runtime.
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES graphics,utility,compute
The image can be built like so:
docker build -t glvnd-x:latest .
Connecting to the Host’s X Server
There are plenty of articles on the intertubes about exposing X and connecting to the host’s X server from a container. The Open Source Robotics Foundation, for example, covers the topic extensively here. But in a nutshell, to allow local connections to the X server, run
xhost +local:root
There are security implications to exposing X in this manner, especially if you’re on a shared system, so read the article linked above and make sure you know what you’re doing! It describes more restrictive ways of exposing X, but those are outside the scope of this article.
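As one example of a tighter rule, xhost’s server-interpreted addresses can admit only local connections from root (the default user inside the container) rather than from every local user:
xhost +si:localuser:root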
The /tmp/.X11-unix directory needs to be mounted into the container. This is where X’s Unix socket resides, and the container needs access to the socket to connect to the host’s X server. DISPLAY also needs to be set in the container’s environment, which tells the container which display (screen, loosely speaking) to use. This variable can be passed from the host’s DISPLAY variable (it’s usually :1).
Depending on your application and desktop environment, you might need to set the environment variable QT_X11_NO_MITSHM to 1. This prevents Qt-based applications from using X’s shared memory extension, which Docker’s isolation blocks. (An alternative is to enable inter-process communication between the host and container using Docker’s --ipc host switch.)
All said and done, here’s how to get a bash shell up and running.
# Expose the X server on the host.
sudo xhost +local:root

# --rm: Make the container ephemeral (delete on exit).
# -it: Interactive TTY.
# --gpus all: Expose all GPUs to the container.
docker run \
  --rm \
  -it \
  --gpus all \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  -e QT_X11_NO_MITSHM=1 \
  glvnd-x \
  bash
Inside the container, a good way to test that the GPU is being used is to install and run the OpenGL benchmark application glmark2.
apt-get update \
&& apt-get install -y -qq glmark2 \
&& glmark2
If everything is working properly, an anatomically correct horse should pop up, and some details about the GPU and driver will be logged to the terminal.

root@6c12a422fcbc:~/dev# glmark2
=======================================================
glmark2 2014.03+git20150611.fa71af2d
=======================================================
OpenGL Information
GL_VENDOR: NVIDIA Corporation
GL_RENDERER: GeForce GTX 1080/PCIe/SSE2
GL_VERSION: 4.6.0 NVIDIA 440.82
=======================================================
[build] use-vbo=false: FPS: 4192 FrameTime: 0.239 ms
[build] use-vbo=true: FPS: 15702 FrameTime: 0.064 ms
Bonus: Running TensorFlow 2 and OpenAI Gym in a Container
Since I mentioned TensorFlow in the intro, and because I use TensorFlow and OpenAI Gym often, here’s an image for getting OpenAI Gym and TensorFlow 2 containerized. It’s pretty much the same as the image above, but the base image is tensorflow/tensorflow:2.2.0-gpu.
FROM tensorflow/tensorflow:2.2.0-gpu

# Dependencies for glvnd and X11.
RUN apt-get update \
  && apt-get install -y -qq --no-install-recommends \
    libxext6 \
    libx11-6 \
    libglvnd0 \
    libgl1 \
    libglx0 \
    libegl1 \
    freeglut3-dev \
  && rm -rf /var/lib/apt/lists/*

# Env vars for the nvidia-container-runtime.
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES graphics,utility,compute

# Add gym.
RUN pip install --upgrade pip \
  && pip install gym==0.17.2 box2d==2.3.10
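Build and run it with the same X and GPU flags as before. As a quick smoke test of the rendering path, here’s a sketch that opens a Gym window and holds it for a few seconds (the tag tf2-gym is whatever you name your build):
# Build the image.
docker build -t tf2-gym .

# Render BipedalWalker-v3 for ~3 seconds on the host's display.
docker run --rm -it --gpus all \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  tf2-gym \
  python -c "import gym, time; env = gym.make('BipedalWalker-v3'); env.reset(); [(env.render(), time.sleep(0.02)) for _ in range(150)]; env.close()"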
And here is a screenshot of the BipedalWalker-v3 environment running in the container and displaying on the host. Neat!

That’s all, folks
I hope this article helps you out. If you have any comments or questions, drop me a line below.
Need a Developer?
Get in touch! We’re a small software company with skills in machine learning, graphical applications, deployment orchestration, web development, and more. We would love to help with your next project.