How to use an Nvidia GPU in Docker to run TensorFlow

Alexander Dydychkin · Published in vicuesoft-techblog · Jul 25, 2019

Hi folks,

In this article I want to share a short and simple way to use an Nvidia GPU in Docker to run TensorFlow for your machine learning (and not only ML) projects.

Prerequisites:

  • An Nvidia video card
  • Ubuntu 18.04
  • The Nvidia driver installed

Let's go

  1. Install Docker on your OS:
sudo apt-get install docker-ce
docker run hello-world # test that Docker is installed correctly
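
Note that docker-ce comes from Docker's own repository, not from Ubuntu's default one. If the install above fails with an "unable to locate package" error, add the repository first; a minimal sketch based on Docker's documented setup for Ubuntu:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce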

2. Install the official Nvidia Docker runtime:

Add the Nvidia repository to get the Nvidia container runtime (this code works on various Ubuntu versions, because the distribution is detected automatically):

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

Install the Nvidia container runtime with GPU support and reload the Docker daemon:

sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd
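
You can quickly verify that Docker picked up the new runtime (the output should list nvidia next to the default runc):

docker info | grep -i runtime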

After that you have an nvidia runtime that exposes your GPU inside containers!
Remember that you can use it with any Docker image.
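
Optionally, if you do not want to pass --runtime=nvidia on every docker run, you can make nvidia the default runtime in /etc/docker/daemon.json. A minimal sketch (note that this overwrites the file, so merge it with any settings you already have there):

sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker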

3. Now we can check our nvidia runtime by calling nvidia-smi (nvidia-smi is a tool for monitoring your GPU):

docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

Profit! :)
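
By the way, on Docker 19.03 and newer the --gpus flag is built in, so (assuming the Nvidia container toolkit is installed) the same check works without the runtime option:

docker run --gpus all --rm nvidia/cuda nvidia-smi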

Let's benchmark our configuration

To do this we will use the script from https://learningtensorflow.com/lesson10/ (save it as benchmark.py):

import sys
import numpy as np
import tensorflow as tf
from datetime import datetime

device_name = sys.argv[1]  # choose device from the command line; options: gpu or cpu
shape = (int(sys.argv[2]), int(sys.argv[2]))
if device_name == "gpu":
    device_name = "/gpu:0"
else:
    device_name = "/cpu:0"

with tf.device(device_name):
    random_matrix = tf.random_uniform(shape=shape, minval=0, maxval=1)
    dot_operation = tf.matmul(random_matrix, tf.transpose(random_matrix))
    sum_operation = tf.reduce_sum(dot_operation)

startTime = datetime.now()
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as session:
    result = session.run(sum_operation)
    print(result)

# It can be hard to see the results on the terminal with lots of output -- add some newlines to improve readability.
print("\n" * 5)
print("Shape:", shape, "Device:", device_name)
print("Time taken:", str(datetime.now() - startTime))

Let's start by using the official TensorFlow image:

docker run --runtime=nvidia --rm -ti -v "${PWD}:/app" tensorflow/tensorflow:latest-gpu-jupyter python /app/benchmark.py cpu 10000
docker run --runtime=nvidia --rm -ti -v "${PWD}:/app" tensorflow/tensorflow:latest-gpu-jupyter python /app/benchmark.py gpu 10000

Results:

# with CPU (docker)
('Shape:', (10000, 10000), 'Device:', '/cpu:0')
('Time taken:', '0:00:02.699631')
# with GPU (docker)
('Shape:', (10000, 10000), 'Device:', '/gpu:0')
('Time taken:', '0:00:00.804804')

For comparison, a native run on Ubuntu 18.04 (without Docker):

Shape: (10000, 10000) Device: /cpu:0
Time taken: 0:00:04.583930
Shape: (10000, 10000) Device: /gpu:0
Time taken: 0:00:00.783128

Conclusion

As a result, we can use our Nvidia GPU in a Docker container. Our benchmark results show a huge speed-up when we use the GPU in Docker. Moreover, on my configuration the CPU run was even faster in the Docker container than on the native OS, while the GPU times were almost identical.
