Installing TensorFlow with GPU support on Windows WSL in 2022

Memoona Tahira
Nov 16, 2022


TensorFlow is phasing out GPU support for native Windows. Now, to use TensorFlow on GPU you’ll need to install it via WSL. This is the rather ominous notice on the TensorFlow website:

Caution: The current TensorFlow version, 2.10, is the last TensorFlow release that will support GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow_cpu and, optionally, try the TensorFlow-DirectML-Plugin

WSL can be a great way to jump into Python development without having to dual-boot Windows with a Linux distribution (most commonly Ubuntu), but RAM for WSL is capped at 50% of total system RAM by default. This can be changed in the WSL config file, but you would still need enough RAM to run both WSL and regular Windows smoothly. An alternative is to consider cloud vendors for running TensorFlow in a GPU-enabled environment (e.g., Colab, Amazon SageMaker Studio Lab, Kaggle, etc.).
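For reference, the RAM cap is adjusted in a .wslconfig file in your Windows user profile. A minimal sketch, with placeholder sizes (pick values that fit your machine):

```ini
# %UserProfile%\.wslconfig (lives on the Windows side, not inside WSL)
[wsl2]
memory=12GB   # cap on WSL 2 RAM; the default is 50% of system RAM
swap=4GB      # optional swap size
```

Run wsl --shutdown from a Windows terminal and restart WSL for the change to take effect.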

Besides WSL, the other requirement is to install TensorFlow via pip inside an Anaconda/Miniconda environment.

The whole installation process will look like this:

  1. Make sure you are on Windows 10/11. On Windows 10, make sure you have the latest updates.
  2. Install Nvidia CUDA drivers for Windows.
  3. Install WSL. No additional drivers need to be installed inside WSL; the Windows Nvidia drivers are the only ones needed.
  4. Run the following commands inside a WSL terminal:

Note: Main steps are taken from the TensorFlow installation guide here: https://www.tensorflow.org/install/pip#windows-wsl2. However, we will use pip installed with conda rather than pip installed globally in WSL Ubuntu with sudo apt install python3-pip.

# install miniconda:
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc

# create and activate a conda environment with Python 3.10, the latest
# version supported by TensorFlow at the time of writing:
conda create -n tf_env python=3.10
conda activate tf_env

# install cudatoolkit and cudnn
# (no need to specify their versions; they will be chosen based
# on your current Nvidia driver, but if you run into trouble,
# you can use the versions specified in the TensorFlow guide):
conda install -c conda-forge cudatoolkit cudnn

# In case TensorFlow keeps crashing, reinstall cudatoolkit
# and cudnn with pinned versions:
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1

# Add cudatoolkit and cudnn to path permanently:
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
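Because the activate hook above appends to LD_LIBRARY_PATH on every activation, the variable can accumulate duplicate entries over repeated activations. Optionally, a matching deactivate hook can clean it up. A sketch, assuming tf_env is active so that CONDA_PREFIX is set by conda (the fallback path below is only a placeholder for illustration):

```shell
# Assumes an activated conda environment; the fallback path is a
# placeholder used only when run outside an activated environment.
: "${CONDA_PREFIX:=$HOME/miniconda3/envs/tf_env}"
mkdir -p "$CONDA_PREFIX/etc/conda/deactivate.d"
# Unset the variable on deactivation so entries don't pile up:
echo 'unset LD_LIBRARY_PATH' > "$CONDA_PREFIX/etc/conda/deactivate.d/env_vars.sh"
```

Note this sketch simply unsets the variable; if you rely on a pre-existing LD_LIBRARY_PATH outside conda, you may want to restore the old value instead.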


# install pip inside conda:
conda install pip

# Confirm pip is coming from conda:
which pip
pip --version

# install tensorflow
pip install tensorflow

# Verify the install uses GPU (this command should return a list of GPU devices):
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

Some explanation of why we use pip installed with conda: this way, inside an activated conda environment, conda uses its own pip, not the Ubuntu system pip. Any package installed via pip while a conda environment is active stays isolated within that environment and is not visible globally, which is the main point of using a virtual environment like conda. You can check this like we did before:

# Confirm pip is coming from conda:
which pip
pip --version

Also, using pip installed via conda is important if you want conda to recognize packages and their dependencies installed via pip. Otherwise, conda is unaware of pip-installed packages, and in the future pip and conda can end up installing packages in competing ways, creating conflicts that can ultimately break the environment.

You can check that conda is aware of packages installed with pip by running conda list. You should see packages installed from conda channels as well as from PyPI, the default package source for pip.

Dissecting the results of the last command:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

2022-11-17 04:06:39.309022: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-17 04:06:39.522955: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-17 04:06:40.105630: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/
2022-11-17 04:06:40.105778: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/:/home/mona/miniconda3/envs/tf_env/lib/
2022-11-17 04:06:40.105836: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2022-11-17 04:06:41.150890: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-11-17 04:06:41.283621: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-11-17 04:06:41.283717: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

The first is an info message that the TensorFlow binary we are using is optimized, which is good. The rest are one error message about cuBLAS, three warning messages about TensorRT, and two info messages about NUMA. According to this GitHub issue, the cuBLAS registration error will be fixed in TensorFlow 2.11, whereas the TensorRT plugins can be installed separately. The info messages about NUMA nodes are likewise harmless and won't affect performance. None of these is critical for most deep learning tasks. Finally, we get our confirmation message that the GPU is visible to TensorFlow:
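If the extra log lines bother you, TensorFlow's C++ logging can be quieted with the TF_CPP_MIN_LOG_LEVEL environment variable, which must be set before tensorflow is imported. A minimal sketch (the import itself is left commented out here):

```python
import os

# "0" = all messages (default), "1" hides INFO,
# "2" also hides WARNING, "3" also hides ERROR.
# Must be set before TensorFlow is imported to take effect.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf  # import only after setting the variable
```

This only quiets the C++ log lines like those above; the messages themselves are harmless either way.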

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

So, all is good.

Final overview:

At the time of writing, this is how my installation ended up looking with Miniconda in WSL:

Windows version: Windows 11, version 22H2

Nvidia driver version: 526.47

WSL version: WSL 2 with Ubuntu 22.04

cudatoolkit and cuDNN versions installed by default in the conda environment:

cudatoolkit-11.7.0 and cudnn-8.4.1.50

pip version in conda environment: 22.3.1

Python version in conda environment: 3.10.6

TensorFlow version via pip: 2.10.1

In case of any trouble, don’t forget to double check the official TensorFlow instructions before heading out to Google. May your TensorFlow installations be forever smooth and happy learning!
