What are CUDA and cuDNN? How to enable TensorFlow GPU support?

Emre Can Yesilyurt
Machine Learning Turkiye
5 min read · Oct 17, 2020

Hello! In this article, I explain the concepts of CUDA and cuDNN and share technical information on how to enable TensorFlow GPU support.

You can also visit my GitHub repository, where the technical details of the installation are collected in one place.

Many companies today run large numbers of GPUs on their servers to process the data they collect. In addition, complex workloads such as computer vision and predictive modeling rely on GPU power.

GPUs need supporting libraries to perform these operations, so let's talk about them.

What is CUDA?

CUDA is NVIDIA's parallel computing platform. When a computation is run on the GPU, CUDA makes sure the compiler directs the relevant parts of the program to the GPU cores, so each operation is executed on the processing unit best suited to it. You can also think of it as a pointer that aims your computation at the GPU.

What is cuDNN?

cuDNN is NVIDIA's GPU-accelerated library of deep learning primitives such as convolutions and pooling. It provides a substantial performance boost when training a neural network on a GPU; the speed-up is around 5.6x for most frameworks, which is not an insignificant figure.

Where does TensorFlow fit in?

This article is not for you if you primarily wonder what TensorFlow is.

TensorFlow is the layer the developer interacts with directly. TensorFlow's GPU support lets the developer use the GPU as a compute resource while CUDA and cuDNN do the heavy lifting in the background, as the short sketch below illustrates.
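
Here is a minimal sketch of what that looks like from the developer's side, assuming the GPU setup described later in this article is already in place; it is plain TensorFlow code with no direct CUDA calls:

import tensorflow as tf

# The developer only writes TensorFlow; CUDA and cuDNN run underneath.
print(tf.config.list_physical_devices("GPU"))   # one entry per visible GPU

a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
c = tf.matmul(a, b)    # placed on the GPU automatically when one is available
print(c.device)        # e.g. /job:localhost/replica:0/task:0/device:GPU:0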

How to enable TensorFlow GPU support?

For this, we first need to install CUDA and cuDNN. Note that you must be using an official NVIDIA driver to install CUDA and cuDNN.

It is not possible to use CUDA and cuDNN with the open-source drivers.

Application versions and hardware used:

  • CUDA 10.1
  • cuDNN v7.6.5 for CUDA 10.1
  • NVIDIA 1660 Super GPU

Before installing anything, you need to know which versions to install: the CUDA, cuDNN, and NVIDIA driver versions must be compatible with each other.

Let’s take a look at these compatibilities.

Check which driver you are using.

Open "Additional Drivers".

If you are using the X.Org open-source graphics driver, you must choose one of the official NVIDIA drivers and click the "Apply Changes" button.

A small footnote: if you see the option "Continue using a manually installed driver", you must follow the steps below.

In the terminal, find the repository of the driver you installed earlier:

$ sudo nano /etc/apt/sources.list

Once you find the relevant repository entry in the opened file, comment it out by putting a # at the start of the line.

Next:

$ sudo apt update

And restart your computer.

$ reboot

After this, you can install the driver from the "Additional Drivers" section or manually.

You can now install CUDA and cuDNN.

Summary of the steps:

  • Install CUDA 10.1.
  • Install the cuDNN version compatible with CUDA 10.1 (v7.6.5 for CUDA 10.1).
  • Add the CUDA environment variables to the terminal profile.
  • Install TensorFlow.

Let’s start.

Installing CUDA Toolkit 10.1.

$ sudo apt install nvidia-cuda-toolkit

Check whether the installation succeeded:

$ nvcc -V

Expected output: the nvcc version banner should report release 10.1.

Installing cuDNN

You must have an NVIDIA account to be able to download cuDNN. Download the compatible cuDNN version from this link after logging in with your NVIDIA account.

Since I’m using CUDA 10.1, I chose cuDNN v7.6.5 for CUDA 10.1.

Extract the cuDNN archive in the directory you downloaded it to:

$ tar -xvzf cudnn-10.1-linux-x64-v7.6.5.32.tgz

Copy the extracted files to the directory where CUDA is installed.

$ sudo cp cuda/include/cudnn.h /usr/lib/cuda/include/
$ sudo cp cuda/lib64/libcudnn* /usr/lib/cuda/lib64/

Set file permissions.

$ sudo chmod a+r /usr/lib/cuda/include/cudnn.h /usr/lib/cuda/lib64/libcudnn*
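
As a quick sanity check (just a sketch; it assumes the header really ended up at /usr/lib/cuda/include/cudnn.h, the destination used in the copy command above), you can read the version macros straight out of cudnn.h:

import re

# Parse the version defines from the copied header.
header = open("/usr/lib/cuda/include/cudnn.h").read()
version = {k: re.search(rf"#define CUDNN_{k}\s+(\d+)", header).group(1)
           for k in ("MAJOR", "MINOR", "PATCHLEVEL")}
print(version)   # expected for this setup: {'MAJOR': '7', 'MINOR': '6', 'PATCHLEVEL': '5'}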

Add the CUDA environment variables to the terminal profile.

$ nano ~/.bashrc

If you're using ZSH → $ nano ~/.zshrc

Add the following lines to the end of the opened file.

export LD_LIBRARY_PATH=/usr/lib/cuda/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/lib/cuda/include:$LD_LIBRARY_PATH

Reload the terminal profile:

$ source ~/.bashrc

ZSH → source ~/.zshrc
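
If you want to confirm from Python that the new path is actually picked up, the rough sketch below can help; it assumes the libcudnn.so symlink from the cuDNN archive was copied along with the other files, and it should be run from a freshly opened terminal so the updated LD_LIBRARY_PATH is in effect:

import ctypes, os

# LD_LIBRARY_PATH is read when the process starts, so use a fresh terminal.
print("/usr/lib/cuda/lib64" in os.environ.get("LD_LIBRARY_PATH", ""))   # should print True
ctypes.CDLL("libcudnn.so")   # raises OSError if the loader cannot find cuDNN
print("cuDNN library loaded")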

Install TensorFlow.

$ pip3 install tensorflow

Check that TensorFlow can see the GPU:

$ python3
>>> import tensorflow as tf
>>> tf.config.list_physical_devices("GPU")

Expected output: a non-empty list containing a PhysicalDevice entry with device_type 'GPU'.
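
To go one step beyond listing devices, a tiny convolution makes a nice end-to-end check, since its GPU kernel goes through cuDNN. This is only a sketch of such a check, not part of the steps above:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)   # log which device each op runs on

# A small convolution: on a GPU build this path is backed by cuDNN kernels.
images = tf.random.uniform((1, 64, 64, 3))
kernel = tf.random.uniform((3, 3, 3, 8))
out = tf.nn.conv2d(images, kernel, strides=1, padding="SAME")
print(out.shape, out.device)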

Why use CUDA 10.1?

TensorFlow loads the CUDA runtime library, libcudart.so, and it looks for it under a specific version number. The TensorFlow release current as of this writing is built against CUDA 10.1, so it expects libcudart.so.10.1; the runtime that ships with CUDA 11 carries a different version and will not be picked up. Newer CUDA versions are largely backward compatible, but the TensorFlow documentation specifies the exact CUDA and cuDNN versions each release is tested against, and it is safest to stick to them. (CUDA is, of course, not just a build dependency of TensorFlow GPU; it is used far more widely.)
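
A quick way to see whether the runtime version TensorFlow expects is actually loadable is the hedged sketch below; the soname libcudart.so.10.1 assumes the apt-based CUDA 10.1 installation described above:

import ctypes

# If this fails with OSError, TensorFlow will typically fall back to CPU-only execution.
ctypes.CDLL("libcudart.so.10.1")
print("CUDA 10.1 runtime found")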

Why Ubuntu?

TensorFlow itself gives a hint here: its official installation instructions and GPU setup guide are written with Ubuntu in mind.

For other libraries, too, Ubuntu is the most widely used distribution. This doesn't mean Ubuntu is the best distribution; it simply has a large community behind it, and when technical issues come up, users tend to gravitate towards the distribution with the largest community.

Thank you for reading. You can reach me through the links below.
