Making TensorFlow Object Detection API Work on NVIDIA RTX 20 Super Series

Andika Rachman
2 min read · Nov 9, 2019



The easiest way to make the TensorFlow Object Detection API work on an NVIDIA RTX 20 Super Series GPU is to use a container from NVIDIA GPU Cloud (NGC). NGC offers a comprehensive catalog of GPU-accelerated software for deep learning and machine learning frameworks.

Requirements:

  • NVIDIA GPU RTX 20 Super Series
  • Docker and NVIDIA Container Toolkit installed
  • Ubuntu 18.04

Step 1: Install the NVIDIA Graphics Driver, CUDA, and cuDNN

We need to install a compatible combination of driver, CUDA, and cuDNN. The NVIDIA RTX 20 Super Series is powered by the Turing architecture, which is only supported by NVIDIA driver version ≥ 410.48 and CUDA version ≥ 10.0. Refer to this link to check compatibility among the NVIDIA graphics driver, CUDA, and cuDNN.

  • To install the NVIDIA graphics driver on Ubuntu 18.04, follow this excellent guide by Antonio Sze-To.
  • To install CUDA, follow this guide provided by NVIDIA.
  • To install cuDNN, follow this guide provided by NVIDIA.
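After installation, a quick sanity check is to query each component from the command line. The exact cuDNN header path below is an assumption and may differ depending on how cuDNN was installed:

```shell
# Verify the driver is loaded and the GPU is visible
nvidia-smi

# Verify the CUDA toolkit version (requires nvcc on the PATH)
nvcc --version

# Verify the cuDNN version by inspecting its header
# (path may differ depending on the installation method)
grep -A 2 "CUDNN_MAJOR" /usr/include/cudnn.h
```

If `nvidia-smi` lists your RTX card and a driver version ≥ 410.48, the driver side is ready.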

Step 2: Install Docker and NVIDIA Container Toolkit

The NVIDIA Container Toolkit allows users to build and run GPU-accelerated Docker containers.
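As a sketch, the installation on Ubuntu 18.04 looks roughly like the following, adapted from NVIDIA's install instructions at the time of writing; check the official guide for the current commands:

```shell
# Install Docker from Ubuntu's repositories
sudo apt-get update
sudo apt-get install -y docker.io

# Add the NVIDIA Container Toolkit package repository
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install the toolkit and restart Docker so it picks up the NVIDIA runtime
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```

Note that the `--gpus` flag used later requires Docker 19.03 or newer.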

Step 3: Pull and Run an NGC Container

To pull the TensorFlow container from the NGC catalog, type the following command:

docker pull nvcr.io/nvidia/tensorflow:xx.xx-pyx

Replace xx.xx-pyx with the version that we need. In my case, I use 19.04-py3 and it runs without any problems with the TensorFlow Object Detection API.
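For example, to pull the exact container used here:

```shell
# Pull the 19.04 release of the TensorFlow container with Python 3
docker pull nvcr.io/nvidia/tensorflow:19.04-py3
```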

After the container is pulled, we can run it using Docker. A typical command to launch a container is:

docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorflow:xx.xx-pyx

Where:

  • -it runs the container in interactive mode
  • --rm deletes the container when it exits
  • -v mounts a host directory into the container
  • local_dir is the directory or file on your host system (absolute path) that you want to access from inside your container, for example /home/jsmith/data/mnist
  • container_dir is the target directory inside your container, for example /data/mnist
  • Inside the container, running ls /data/mnist then shows the same files as running ls /home/jsmith/data/mnist from outside the container
  • xx.xx is the container version, for example 19.01
  • pyx is the Python version, for example py3
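Putting the pieces together with the example paths above, a concrete launch command looks like this:

```shell
# Mount /home/jsmith/data/mnist on the host as /data/mnist in the container
docker run --gpus all -it --rm \
    -v /home/jsmith/data/mnist:/data/mnist \
    nvcr.io/nvidia/tensorflow:19.04-py3
```

This drops you into a shell inside the container with the GPU exposed and your data visible at /data/mnist.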

The complete guide on how to use the NGC TensorFlow container can be found here.

Now, you’re all set up to run TensorFlow Object Detection API on NVIDIA RTX 20 Super Series!


Andika Rachman

PhD in Applied AI | Computer Vision & Machine Learning Engineer