A brief guide on running TensorFlow on GPU

Dhanoop Karunakaran
Intro to Artificial Intelligence
3 min read · Oct 25, 2023

Many people already know how to run TensorFlow on a GPU, but it took me a couple of days to figure out the best approach. I’m writing this article for those who are struggling to get TensorFlow running on a GPU.

To support the GPU, each TensorFlow version requires specific versions of CUDA, cuDNN, and Python, as shown below. This makes it difficult to run these libraries natively on Ubuntu, because installing the correct version of each piece of software is a cumbersome task.

The table lists the compatible versions of CUDA and cuDNN with TensorFlow. Source: [1]
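If you want to check which CUDA and cuDNN versions a given TensorFlow build was compiled against, tf.sysconfig.get_build_info() reports them. The snippet below is a quick sketch; the keys shown are present in recent GPU builds of TensorFlow 2.x and may be absent in CPU-only builds.

import tensorflow as tf

# Print the CUDA/cuDNN versions this TensorFlow build was compiled against.
# .get() is used because the keys may be missing on CPU-only builds.
info = tf.sysconfig.get_build_info()
print("CUDA:", info.get("cuda_version"), "cuDNN:", info.get("cudnn_version"))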

As I am not an Ubuntu expert, matching the exact compatible versions of the various libraries was a painful experience for me. It turns out that TensorFlow provides Docker images with GPU support, and the only host requirement is the NVIDIA driver. Here is a step-by-step approach to running TensorFlow on a GPU using the latest Docker image.

  1. Follow the instructions on this site to install Docker.
  2. Make sure you have installed an NVIDIA driver compatible with your GPU. On Ubuntu this can be done through the Software & Updates application (Additional Drivers tab); running nvidia-smi in a terminal confirms the driver is working.
  3. By default, GPUs are not accessible inside Docker containers. To get GPU access in a container, we need to install nvidia-container-toolkit. Execute the commands below in a terminal to install the toolkit. Refer to this link if you have any trouble installing it.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
&& \
sudo apt-get update

sudo apt-get install -y nvidia-container-toolkit

Please note that nvidia-container-toolkit requires Docker version 19.03 or later.
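Depending on the toolkit version, you may also need to configure Docker to use the NVIDIA runtime and restart the Docker daemon, as described in the NVIDIA guide linked above:

sudo nvidia-ctk runtime configure --runtime=docker

sudo systemctl restart docker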

4. The commands below pull and run a specific TensorFlow version (2.14.0) with GPU support and Jupyter Notebook installed. In the docker run command, --gpus all exposes all GPUs to the container, -p 8888:8888 publishes the Jupyter port, and -v mounts a local folder into the container at /tf.

docker pull tensorflow/tensorflow:2.14.0-gpu-jupyter

docker run --gpus all -it -p 8888:8888 -v /home/beastan/Documents/blogs_code/peft:/tf tensorflow/tensorflow:2.14.0-gpu-jupyter

Please check out this link to see the various prebuilt TensorFlow containers.
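If you just want to confirm that the GPU plumbing works before starting Jupyter, a one-off check like the following (a sketch using the plain 2.14.0-gpu tag) should list the GPU:

docker run --gpus all --rm tensorflow/tensorflow:2.14.0-gpu \
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"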

5. Once we execute the last command in the step above, it prints a localhost URL (including a token) that we can paste into the browser to open Jupyter Notebook.

6. To test whether the GPU is visible inside the TensorFlow container, we can create a notebook in Jupyter and run the code below.

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

That’s it. By following these steps, we can run TensorFlow on a GPU.

If you like my write-up, follow me on GitHub, LinkedIn, and/or Medium.

Reference

  1. https://punndeeplearningblog.com/development/tensorflow-cuda-cudnn-compatibility/
  2. https://www.tensorflow.org/install/docker
