TensorFlow GPU with WSL2

Nelson Punch
Software-Dev-Explore
Apr 16, 2024

Introduction

This is a walkthrough on how to make TensorFlow work in WSL2 and detect your graphics card (GPU).

I tried the official TensorFlow-in-WSL2 installation guide here, and it didn’t work for me. I have also seen many people report the same problem online.

After pulling my hair out and researching for a few days, I made it work, at least in my case. The whole point of this walkthrough is to enable TensorFlow to detect the GPU.

The reason to use WSL2 instead of native Windows is that TensorFlow’s GPU support on Windows stopped at version 2.10; any later version requires WSL2. This is the official statement:

Caution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow or tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin

My PC specs

Hardware

  • CPU: Intel(R) Core(TM) i7-8750H @ 2.20GHz
  • RAM: 16.0 GB
  • Graphics card: NVIDIA GeForce GTX 1060 with Max-Q Design (6 GB)

OS

  • Windows 10 Pro, version 22H2 (build 19045.4291)

Before we start

Make sure your Windows is up to date. Windows 10 with OS build 19044 or higher is required, as stated on the official website:

Note: TensorFlow with GPU access is supported for WSL2 on Windows 10 19044 or higher. This corresponds to Windows 10 version 21H2, the November 2021 update. You can get the latest update from here: Download Windows 10. For instructions, see Install WSL2 and NVIDIA’s setup docs for CUDA in WSL.

Make sure you have a dedicated NVIDIA graphics card. You can check whether your card is supported here.

Make sure your NVIDIA driver is up to date. You can check it by running nvidia-smi in a command line on Windows.

WSL and Preparation

WSL stands for Windows Subsystem for Linux. Simply put, it is a Linux operating system running under Windows.

To enable WSL on Windows 10/11 see here.

Here I am going to install the Ubuntu distribution. According to this document, you must be running Windows 10 version 2004 or higher (build 19041 or higher).

First, launch either PowerShell or Command Prompt on Windows as Administrator.

Update WSL with wsl --update, then run wsl --install to install the latest Ubuntu distribution. The rest is just following the setup instructions.

To start Ubuntu, run wsl; it launches Ubuntu and you can operate the system with Linux commands. To exit, run exit and you will be back in the Windows command line. To shut WSL down, run wsl --shutdown.

The default working directory every time you start Ubuntu this way is /mnt/c/Users/[username], where mnt is the mount point, c is the C: drive, and Users is the Users folder on Windows.

In fact, this is the same directory as your user directory on Windows. If you run ls on Ubuntu to list files and folders, they are the same ones you see in C:\Users\[username]\ on Windows. It also means that creating or deleting a file in either Ubuntu or Windows affects the other system.

To test it, run mkdir test, which creates an empty folder. You can now find it under C:\Users\[username]\ on Windows. Delete it on Windows, then run ls on Ubuntu: the folder is gone.

Personally, I prefer to work in the home directory on Ubuntu. Every time I start Ubuntu, I go there by running cd ~/, which puts you under /home/[username]/; that way I can be sure that anything I do stays on Ubuntu, not on Windows.

Tensorflow GPU on WSL2-Ubuntu

From now on we will work on Ubuntu with Linux commands, so make sure you have started Ubuntu by running wsl in the Windows command line. Also make sure you are in the home directory by running cd ~/ on Ubuntu.

To see which directory you are in, run pwd.

We will go through:

  • Python3 and pip installation.
  • Create working directory and environment.
  • Install Tensorflow.
  • Install CUDA Toolkit.
  • Verify CUDA installation.
  • Install cuDNN.
  • Solve last puzzle and enable Tensorflow to detect GPU.

Install Python3 and pip

Run

sudo apt-get update && sudo apt-get install python3 python3-pip

This will install the Python 3 interpreter and pip, the Python package manager.

Create working directory & environment

Run

mkdir tftest && cd tftest

Now we will work in this directory.

To create the environment I use virtualenv. Run

python3 -m pip install virtualenv

To create an environment in the current directory, run

python3 -m virtualenv .venv

This creates a folder named .venv; all libraries installed with pip will go into that folder.

To work in the environment we need to activate it: run source .venv/bin/activate. To deactivate it, run deactivate.

Why an environment? Instead of installing libraries system-wide, we install them in an isolated environment, so the operating system is unaffected. If something goes wrong inside the environment, the operating system is not damaged; all we need to do is delete the environment and recreate it.
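You can ask Python itself whether an interpreter belongs to an environment: inside one, sys.prefix differs from sys.base_prefix. A minimal sketch (the /tmp/tfdemo-venv path is made up, and the stdlib venv module with --without-pip stands in for virtualenv here to keep it self-contained):

```shell
# Create a throwaway environment (stdlib venv as a stand-in for virtualenv;
# --without-pip keeps the demo dependency-free)
python3 -m venv --without-pip /tmp/tfdemo-venv

# The environment's own interpreter reports it is isolated:
# sys.prefix points at the env, sys.base_prefix at the system install
/tmp/tfdemo-venv/bin/python3 -c "import sys; print(sys.prefix != sys.base_prefix)"
```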

Install Tensorflow

Based on the documentation, to install the latest TensorFlow with GPU support for WSL2, run pip install tensorflow[and-cuda] while the environment is active.

Now this is important. Look at this table. The latest version is tensorflow-2.16.1 as I am writing this walkthrough. Next, make sure your Python 3 version is within 3.9–3.12 by running python3 --version. Finally, note the required cuDNN and CUDA versions; for tensorflow-2.16.1 they are cuDNN 8.9 and CUDA 12.3.
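The Python version check can be scripted so it doesn’t rely on reading the version string by eye; this sketch compares the interpreter version against the 3.9–3.12 range taken from the table:

```shell
# Show the interpreter version, then check it against TensorFlow 2.16's
# supported range (3.9-3.12 per the tested-configurations table)
python3 --version
python3 -c "import sys; print((3, 9) <= sys.version_info[:2] <= (3, 12))"
```

It prints True if the interpreter is in range and False otherwise.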

Install CUDA Toolkit

For CUDA, we will install version 12.3.

According to the setup document for the CUDA Toolkit, section 3 option 1, we can download the CUDA Toolkit from here. However, at the time of writing it takes me to version 12.4, which is not what I want.

To get the correct version, scroll down to the Resources section at the bottom of the page and click Archive of Previous CUDA Releases. Find and click CUDA Toolkit 12.3.0, then select the target platform:

  • Operating System : Linux.
  • Architecture : x86_64.
  • Distribution : WSL-Ubuntu, not Ubuntu. WSL-Ubuntu does not include the NVIDIA driver, whereas Ubuntu does, and this user guide, section 2.1, says: Do not install any Linux display driver in WSL.
  • Version : 2.0
  • Installer Type : Select any one you like.

Follow the installation instructions it presents below to install CUDA on WSL2.

Once installed, CUDA can be found at /usr/local/cuda-12.3.

Verify CUDA

To verify that CUDA is installed, we can run nvcc -V; nvcc is located at /usr/local/cuda-12.3/bin. Running the command gives:

Not found? huh…?

That is because the current Bash shell’s PATH variable doesn’t include the directory containing nvcc. So let’s add the bin folder under cuda-12.3 to PATH.

To add it to PATH, we can run

export PATH=/usr/local/cuda-12.3/bin/:$PATH

Let’s run nvcc -V again.

The PATH variable is just a special variable that contains all of the directories that are automatically searched when we try to call a program.
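You can watch this mechanism at work with a throwaway directory (the /tmp/pathdemo path and hello-demo script are made up for the demonstration; the mechanism is exactly what the cuda-12.3/bin export relies on):

```shell
# A made-up program in a directory that is not yet on PATH
mkdir -p /tmp/pathdemo/bin
printf '#!/bin/sh\necho hello-from-pathdemo\n' > /tmp/pathdemo/bin/hello-demo
chmod +x /tmp/pathdemo/bin/hello-demo

command -v hello-demo || echo "not found"   # the shell cannot find it yet

# Prepend the directory to PATH, exactly as done for cuda-12.3/bin
export PATH=/tmp/pathdemo/bin:$PATH
hello-demo                                  # prints: hello-from-pathdemo
```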

One thing to notice here:

export PATH=/usr/local/cuda-12.3/bin/:$PATH

is temporary; the setting will disappear once you exit Ubuntu. To make it permanent, we can write it to the .bashrc file. Run

vim ~/.bashrc

and press i to enter insert mode. Add this line

export PATH=/usr/local/cuda-12.3/bin/:$PATH

at the end of the .bashrc file. Press ESC to leave insert mode, then type :wq and press Enter.

After writing it to .bashrc, you need to reload the file:

source ~/.bashrc

.bashrc is a configuration file for the Bash shell environment. Every time an interactive Bash shell session starts, the .bashrc script executes.
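If you prefer not to use vim, the same edit can be done non-interactively with echo >>. The sketch below appends to a temporary stand-in file (/tmp/demo_bashrc is made up) so it can be tried safely; substitute ~/.bashrc for the real thing:

```shell
# Append the export line (single quotes keep $PATH from expanding now,
# so the literal text lands in the file)
echo 'export PATH=/usr/local/cuda-12.3/bin/:$PATH' >> /tmp/demo_bashrc

# Confirm the line landed at the end of the file
tail -n 1 /tmp/demo_bashrc
```

For the real file you would then run source ~/.bashrc, as above.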

Install cuDNN

For cuDNN, we will install version 8.9.

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.

The first instinct is to go to the official cuDNN website, download it, and follow the install instructions. That did not work, at least in my case.

To properly install cuDNN, you need to download it from here, or on the cuDNN download page scroll down to the Resources section at the bottom and click Tarball and Zip Archive Deliverables, then cudnn/, then linux-x86_64/.

Find the one named cudnn-linux-x86_64-8.9.0.131_cuda12-archive.tar.xz and download the tar file with

wget https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.0.131_cuda12-archive.tar.xz

Next run

tar -xvf cudnn-linux-x86_64-8.9.0.131_cuda12-archive.tar.xz

to extract the file in the current working directory.

Copy every header file from the include folder of the extracted cuDNN folder to /usr/local/cuda-12.3/targets/x86_64-linux/include/ by running:

sudo cp cudnn-linux-x86_64-8.9.0.131_cuda12-archive/include/cudnn*.h /usr/local/cuda-12.3/targets/x86_64-linux/include/

Also copy every library file from the lib folder of the extracted cuDNN folder to /usr/local/cuda-12.3/targets/x86_64-linux/lib/ by running:

sudo cp cudnn-linux-x86_64-8.9.0.131_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.3/targets/x86_64-linux/lib/

Change the permissions of the header and library files:

sudo chmod a+r /usr/local/cuda-12.3/targets/x86_64-linux/include/cudnn* /usr/local/cuda-12.3/targets/x86_64-linux/lib/libcudnn*

If you want to know what chmod a+r means, see here.
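In short, a+r grants read permission to all classes of users (owner, group, and other). A quick demonstration on a scratch file (the /tmp/chmod-demo path is made up):

```shell
touch /tmp/chmod-demo
chmod 600 /tmp/chmod-demo    # owner read/write only
chmod a+r /tmp/chmod-demo    # add read for owner, group, and other
stat -c %a /tmp/chmod-demo   # prints: 644
```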

The missing piece of the puzzle

At this point, it seems ready to rock. Run

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If TensorFlow detects the GPU, it will show up in the list on the last line. Unfortunately, nothing shows up. In the last few lines I notice the messages Cannot dlopen some GPU libraries and Skipping registering GPU devices. What libraries?

The missing libraries live in the lib64 folder inside the CUDA installation at /usr/local/cuda-12.3/.

To fix this issue, we run

export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH

which adds lib64 to the LD_LIBRARY_PATH variable.

LD_LIBRARY_PATH tells the dynamic link loader (ld.so, the little program that starts all your applications) where to search for the dynamic shared libraries an application was linked against. Multiple directories can be listed, separated by colons (:), and this list is searched before the compiled-in search paths and the standard locations (typically /lib, /usr/lib, …).
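You can see the loader’s resolution at work with ldd, which prints the shared libraries a binary needs and where each one resolves; TensorFlow’s CUDA libraries are found (or not) by the same search:

```shell
# ldd resolves each needed .so using LD_LIBRARY_PATH and the standard paths;
# /bin/ls links against the C library, so a libc line always appears
ldd /bin/ls | grep libc
```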

One thing to notice here:

export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH

is temporary; the setting will disappear once you exit Ubuntu. To make it permanent, we can write it to the .bashrc file. Run

vim ~/.bashrc

and press i to enter insert mode. Add this line

export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH

at the end of the .bashrc file. Press ESC to leave insert mode, then type :wq and press Enter.

After writing it to .bashrc, reload the file:

source ~/.bashrc

Let’s run

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

again.

Finally, TensorFlow is able to detect my GPU.

WSL2 export, import & unregister

This is not strictly part of the walkthrough, but I used these commands while experimenting. You can tell WSL2 to export the current Ubuntu to a tar file somewhere on Windows and import it later. With these features you can snapshot your Ubuntu and restore it.

Export your current Ubuntu distribution (named Ubuntu):

wsl --export Ubuntu "export directory including filename with tar extension"

Unregister/uninstall the current Ubuntu:

wsl --shutdown
wsl --unregister Ubuntu

Import the distribution:

wsl --import Ubuntu "import to directory" "tar filename"

Finally, set your username as the default user for the imported distribution (the username you chose when first installing WSL2 Ubuntu):

ubuntu config --default-user [username]

PyTorch is easier

With PyTorch, all I have to do is

pip3 install torch torchvision torchaudio

Let’s verify that the GPU is visible:

python3 -c "import torch;print(torch.cuda.is_available())"

None of the tedious work required for TensorFlow is needed.

The PyTorch binaries ship with all CUDA runtime dependencies and you don’t need to locally install a CUDA toolkit or cuDNN

Conclusion

There is quite a bit of work to do to set up TensorFlow with GPU support on WSL2. At the time of writing, the official installation guide here does not explain it clearly. Even once the issue is fixed, all this tedious work is painful just to get TensorFlow to detect the GPU.
