Commands for the Cross-validation of PyTorch and CUDA/cuDNN Installation

Lynn
2 min read · Sep 5, 2024


A collection of simple Python snippets for verifying that the CUDA build of PyTorch is installed and configured correctly.


PyTorch installed via pip (or conda) typically bundles the CUDA runtime libraries and cuDNN, as long as you install a GPU-enabled (CUDA) build. To check whether PyTorch with CUDA is successfully installed and properly configured on your machine, you can follow these steps:

1. Check PyTorch Installation

First, make sure PyTorch is installed and accessible in your environment. You can check the PyTorch version with the following command:

import torch
print(torch.__version__)

If this runs without any errors and returns a version number, then PyTorch is successfully installed.
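Beyond the bare version number, pip wheels usually encode the build variant in the version string itself (e.g. a `+cu121` suffix for a CUDA 12.1 build, or `+cpu` for a CPU-only one). As a rough sketch — the suffix is a convention of pip wheels and may be absent in conda builds, so treat it as a hint rather than a guarantee:

```python
import torch

# Pip wheels typically look like "2.4.0+cu121" (CUDA 12.1) or "2.4.0+cpu".
# Conda builds may omit the "+" suffix entirely.
version = torch.__version__
build_tag = version.split("+")[1] if "+" in version else None
print(version, build_tag)
```

A `cuXXX` tag is an early sign you installed a GPU-enabled build; a `cpu` tag means the wheel cannot use CUDA regardless of your drivers.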

2. Check CUDA Availability

To verify if PyTorch can detect CUDA and utilize your GPU, use the following commands:

import torch
print(torch.cuda.is_available())
  • If this returns `True`, it means PyTorch can access CUDA, and it should be able to run on your GPU.
  • If it returns `False`, it could mean CUDA is either not installed properly, not supported by your system, or PyTorch wasn’t installed with CUDA support.
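A common way to use this check in real code is a small device-selection helper that falls back to the CPU when no GPU is visible. The helper name `pick_device` is my own; only the `torch.cuda.is_available()` call comes from the article:

```python
import torch

def pick_device() -> torch.device:
    """Return the GPU if PyTorch can see one, otherwise fall back to CPU."""
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
print(f"Using device: {device}")
```

Writing model and tensor code against `device` rather than a hard-coded `'cuda'` keeps scripts runnable on machines without a GPU.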

3. Check GPU Device Count

To ensure PyTorch can see your GPU(s), you can check how many GPU devices are detected:

print(torch.cuda.device_count())

This should return the number of available GPUs. If it returns `0`, then PyTorch cannot detect any GPUs.
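On multi-GPU machines it can help to enumerate every visible device with its name. A minimal sketch (the `list_gpus` helper is hypothetical; the underlying `torch.cuda` calls are standard):

```python
import torch

def list_gpus():
    """Return (index, name) pairs for every CUDA device PyTorch can see."""
    return [(i, torch.cuda.get_device_name(i))
            for i in range(torch.cuda.device_count())]

for idx, name in list_gpus():
    print(f"GPU {idx}: {name}")
```

An empty list here is the same signal as a device count of `0`: PyTorch cannot detect any GPUs.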

4. Check the Current GPU Device

You can also check which GPU PyTorch is using by default with:

print(torch.cuda.current_device())

And to get the name of the GPU:

print(torch.cuda.get_device_name(torch.cuda.current_device()))

This will display the name of the GPU being used by PyTorch.
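If you want more than the name, `torch.cuda.get_device_properties` exposes details such as total memory and multiprocessor count. A sketch, guarded so it degrades gracefully on CPU-only machines (the `describe_gpu` wrapper is my own naming):

```python
import torch

def describe_gpu(index: int = 0):
    """Return basic properties of a GPU, or None when no GPU is visible."""
    if not torch.cuda.is_available():
        return None
    props = torch.cuda.get_device_properties(index)
    return {
        "name": props.name,
        "total_memory_gib": props.total_memory / 1024**3,
        "multiprocessors": props.multi_processor_count,
    }

print(describe_gpu())
```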

5. Run a Test Tensor on the GPU

To further ensure that PyTorch can use the GPU for computations, try moving a tensor to the GPU and performing an operation:

# Check if CUDA is available, and if so, create a tensor on the GPU
if torch.cuda.is_available():
    x = torch.randn(3, 3)
    x = x.to('cuda')
    print(x)

If this runs without errors and prints a tensor whose device is shown as `cuda:0`, then PyTorch and CUDA are correctly configured.
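A slightly stronger test is to run the same computation on CPU and GPU and compare the results. A sketch with a hypothetical `matmul_check` helper; the seed makes both runs use identical inputs, and a small tolerance absorbs normal floating-point differences between devices:

```python
import torch

def matmul_check(device: str) -> torch.Tensor:
    """Multiply two fixed random matrices on a device, return the result on CPU."""
    torch.manual_seed(0)                 # same inputs on every call
    a = torch.randn(64, 64)
    b = torch.randn(64, 64)
    return (a.to(device) @ b.to(device)).cpu()

cpu_result = matmul_check("cpu")
if torch.cuda.is_available():
    gpu_result = matmul_check("cuda")
    print(torch.allclose(cpu_result, gpu_result, atol=1e-4))
```

If the comparison prints `True`, the GPU is not just visible but computing correctly.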

6. Verify CUDA Version

If you want to check which CUDA version PyTorch is using, run:

print(torch.version.cuda)

This will print the CUDA version that PyTorch was compiled with. Make sure this version matches the version of the CUDA toolkit installed on your system (if installed globally).

7. Check cuDNN Availability

Finally, to confirm that cuDNN is enabled and check which version PyTorch is using:

print(torch.backends.cudnn.enabled)   # True if cuDNN is enabled
print(torch.backends.cudnn.version()) # cuDNN version used by PyTorch (None if unavailable)

By following these steps, you should be able to confirm whether your PyTorch installation with CUDA is functioning correctly and using the GPU for computations.
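The checks above can be collected into a single diagnostic script that prints everything at once. The `cuda_report` helper is my own packaging of the calls from this article:

```python
import torch

def cuda_report() -> dict:
    """Gather the PyTorch/CUDA/cuDNN checks into one dictionary."""
    report = {
        "torch_version": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        "device_count": torch.cuda.device_count(),
        "cuda_version": torch.version.cuda,              # None on CPU-only builds
        "cudnn_enabled": torch.backends.cudnn.enabled,
        "cudnn_version": torch.backends.cudnn.version(), # None on CPU-only builds
    }
    if report["cuda_available"]:
        report["device_name"] = torch.cuda.get_device_name(
            torch.cuda.current_device())
    return report

for key, value in cuda_report().items():
    print(f"{key}: {value}")
```

Running this once after installation gives a compact snapshot you can paste into a bug report or compare across machines.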
