OpenGL and CUDA Applications in Docker

Ben Botto
6 min read · Jun 6, 2020
(Figure: glmark2 running in a Docker container and displaying on the host)

Trying to run an OpenGL application in a Docker container? Want to containerize a program that uses CUDA or TensorFlow and has a graphical aspect to it? Well, thanks to the NVIDIA Container Toolkit and the GL Vendor-Neutral Dispatch Library (glvnd), this is now possible. OpenGL code can be compiled and run directly in a container, taking full advantage of an NVIDIA GPU. And by connecting to the host’s X server, the display can be seen on the host machine.
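
To give a sense of where this is headed, here is a sketch of the kind of `docker run` invocation the rest of the article builds toward. The image name `my-glmark2` is a placeholder for an image built later; the important parts are the `--gpus` flag (provided by the NVIDIA Container Toolkit), the `DISPLAY` variable, and the bind-mounted X11 socket.

```bash
# my-glmark2 is a placeholder image name; the flags are the point.
# --gpus all            -> expose the NVIDIA GPU to the container
# -e DISPLAY            -> tell the app which X display to render to
# -v /tmp/.X11-unix:... -> share the host's X server socket
docker run --rm \
  --gpus all \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  my-glmark2
```

Depending on your X server’s access control, you may also need to allow connections from the container (for example with `xhost`).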

This is a brief get-you-up-and-running article, and I assume familiarity with Docker and Linux. (Oh, and if you’re using Windows, then kindly move along. There’s nothing to see here.) You’ll need to install the proprietary NVIDIA drivers and the NVIDIA Container Toolkit on your host, which is outside the scope of this article. For the record, I’m using Ubuntu 18.04 with an NVIDIA GTX 1080 and version 440 of the driver (nvidia-driver-440). That combination works for me.
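
Before going any further, it’s worth sanity-checking that the driver and the Container Toolkit are wired up correctly. A common smoke test (assuming the toolkit is installed; the CUDA image tag below is just an example) is to run nvidia-smi in a throwaway container:

```bash
# If everything is set up correctly, this prints the same GPU table
# that running nvidia-smi directly on the host prints.
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```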

As a side note, you do not need to install the CUDA toolkit on the host; the NVIDIA Container Toolkit alone is enough to use CUDA inside a container.
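
For instance, the devel flavor of NVIDIA’s CUDA images ships the CUDA compiler, so you can check that nvcc is available inside a container even though the host has no CUDA toolkit installed. The tag below is only an example of such an image:

```bash
# nvcc comes from the image, not from the host. The --gpus flag isn't
# strictly needed just to print the compiler version, but it's the flag
# you'll want for anything that actually touches the GPU.
docker run --rm --gpus all nvidia/cuda:10.2-devel nvcc --version
```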

glvnd and X Dependencies

The GL Vendor-Neutral Dispatch Library (glvnd) is an abstraction layer that sends OpenGL calls to one of potentially many drivers. Its most common use case is dispatching calls to the correct vendor driver on systems that have more than one GL implementation installed, such as Mesa for an integrated GPU alongside NVIDIA’s proprietary driver for a discrete GPU.
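
As a rough sketch of what the container side needs, the Dockerfile below installs glvnd and the X client libraries an OpenGL application typically links against on Ubuntu 18.04, and sets the environment variables that tell the NVIDIA Container Toolkit to expose graphics capabilities. The package list is a reasonable minimal set of my choosing, not necessarily the exact list used later in the article.

```dockerfile
FROM ubuntu:18.04

# glvnd plus the basic X11/EGL/GLES client libraries.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libglvnd0 libgl1 libglx0 libegl1 libgles2 \
        libxext6 libx11-6 \
    && rm -rf /var/lib/apt/lists/*

# Ask the NVIDIA Container Toolkit to expose graphics (OpenGL) in
# addition to compute (CUDA) and utility (nvidia-smi) capabilities.
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=graphics,utility,compute
```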
