How to run Jupyter Notebook (TensorFlow code) on your GPU on native Windows

Krishang Virmani
4 min read · Mar 5, 2024
CUDA Support for TensorFlow in Windows 11

Are you tired of waiting endlessly while your machine learning models train? It’s frustrating, right? But fear not! There’s a solution that can significantly speed up the process: your GPU. By leveraging your GPU’s power, you can train your models much faster than with just your CPU.

However, for those of us primarily using TensorFlow over PyTorch, GPU support on native Windows has become harder to get: TensorFlow dropped it starting with version 2.11, so 2.10 is the last release that runs on the GPU natively on Windows.

Don’t worry, though; I’ve got you covered. Follow these steps, and you’ll be running your TensorFlow code on your GPU in no time.

NOTE:

  1. Pay close attention to the versions of the different products you install. Incompatible versions produce a variety of errors that can look unrelated, and they will not lead to a successful setup.
  2. Other version combinations may work, but the versions listed below are the ones I used and was able to configure successfully, which is why they are called out explicitly.

Environment Details:

Operating System : Windows 11 Home

Graphics Card: NVIDIA RTX 4060

References:

  1. https://www.tensorflow.org/install/pip
  2. https://stackoverflow.com/questions/74926403/how-to-set-up-tensorflow-gpu-on-windows-11

Steps:

  1. Install Anaconda: Go to the Anaconda download page, download the installer for your operating system, and run it. Then open Anaconda Navigator from your Start menu.
Installation Page for Anaconda
Anaconda Navigator in Start Menu

2. Create a New Environment: Open the Anaconda Prompt and create a new environment. Let’s call it “pygpu” for this example.

conda create -n pygpu python=3.10

3. Activate the Environment: After creating the environment, you’ll need to activate it to ensure that any installations or operations take effect within this new environment, rather than affecting your default environment. To activate the “pygpu” environment, use the following command:

conda activate pygpu

This command switches the current environment to “pygpu”, allowing you to install packages and execute commands within this environment.
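To confirm that the switch worked, you can list your environments; the active one is marked with an asterisk:

conda env list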

4. Install CUDA Toolkit and cuDNN: The CUDA Toolkit and cuDNN (CUDA Deep Neural Network library) are crucial for GPU support. Install them with:

conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0

Installing these specific versions is important: they must be compatible with the TensorFlow version we install in the next step (TensorFlow 2.10 is built against CUDA 11.2 and cuDNN 8.1).
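Note that the conda packages provide the CUDA libraries themselves, but they still rely on your NVIDIA display driver being installed. You can confirm that the driver is installed and sees your GPU with:

nvidia-smi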

5. Install TensorFlow: Now, install TensorFlow version 2.10 to ensure compatibility with CUDA Toolkit and cuDNN:

python -m pip install tensorflow==2.10
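To make sure the installation worked, you can quickly check the installed version and whether this build ships with CUDA support, still from the Anaconda Prompt:

python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_built_with_cuda())"

This should print the TensorFlow version (2.10.x) followed by True.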

Verification

Once you’ve completed all the aforementioned steps, you’ll need to verify whether your GPU is identified by TensorFlow. Follow these steps:

1. Open Anaconda Navigator and launch Jupyter Notebook.

2. Ensure that you’re operating within the “pygpu” environment.

3. Utilize the following code snippet in a Jupyter Notebook cell:

For checking Identification and Availability of the GPU
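A minimal check looks like the following (the exact snippet may vary; anything that lists the visible GPUs works):

import tensorflow as tf

# Lists the GPUs TensorFlow can see; a non-empty list means the GPU was identified
print(tf.config.list_physical_devices('GPU'))

# Deprecated in TF 2.x but still works in 2.10; returns True when a GPU is available
print(tf.test.is_gpu_available())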

If the output for the above code snippet returns `True`, it indicates that your GPU has been successfully identified and is available for computations.

Once your GPU is ready for computation, you might want to confirm that your code is indeed being executed on the GPU rather than the CPU. For this purpose, you can utilize the following code snippet:

To ensure the code written runs on the GPU
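A small sketch along these lines works (any TensorFlow operation will do; a matrix multiplication is used here just as an example):

import tensorflow as tf

# Log the device (CPU or GPU) each operation is placed on
tf.debugging.set_log_device_placement(True)

# A simple matrix multiplication; the placement log should show it executing on GPU:0
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
print(tf.matmul(a, b))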

The statement `tf.debugging.set_log_device_placement(True)` activates device placement logging, which means TensorFlow will print information about where each operation is assigned to execute. This information can help you confirm whether your operations are being executed on the GPU or CPU.

If the output shows “GPU:0”, as in the example above, your TensorFlow code within this environment will now execute on the GPU instead of the CPU, which translates into noticeably faster training for your models.
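If you want to see the difference in practice, a quick sanity run like the one below (a minimal sketch with random data, not a proper benchmark) trains a small model; you can compare its wall-clock time against the same run in a CPU-only environment:

import numpy as np
import tensorflow as tf

# Random dummy data: 10,000 samples, 100 features, 10 classes
x = np.random.rand(10000, 100).astype("float32")
y = np.random.randint(0, 10, size=(10000,))

# A small fully connected model; on a working setup it runs on the GPU automatically
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=3, batch_size=128)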

Additionally, I encountered an error caused by part of the requests library not working correctly, most likely due to an older dependency version; in my case the chardet package was the culprit. To fix it, I had to force-reinstall chardet with the following command:

pip install --upgrade --force-reinstall chardet

Conclusion

By following these steps, you can unlock the full potential of your GPU for faster model training and improved performance in TensorFlow. Overcoming obstacles such as accessing GPU support on a native Windows platform and resolving compatibility issues ensures a smoother setup process.

With your code now optimized to run on the GPU, you’re poised to tackle complex machine learning tasks with greater efficiency and speed. So, dive in, experiment, and unleash the power of machine learning on your local machine’s GPU.

Happy coding!
