The ultimate guide to installing PyTorch with CUDA support in all possible ways

→ Using Pip, Conda, Poetry, Docker, or directly on the system

Paul Iusztin
Decoding ML


We all know that one of the most annoying things in Deep Learning is installing PyTorch with CUDA support.

Nowadays, installing PyTorch & CUDA using pip or conda is relatively easy. Unfortunately, many production projects require you to use Poetry or Docker. That is where things get more complicated.
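For the "relatively easy" pip and conda paths, the commands below reflect what the pytorch.org "Get Started" selector generated for torch 2.0.1 with CUDA 11.8 (the versions used throughout this article); double-check the selector for your own version combination, as the index URLs and package pins change between releases.

```shell
# Pip: install CUDA 11.8 wheels of torch 2.0.1 from PyTorch's own package index
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
    --index-url https://download.pytorch.org/whl/cu118

# Conda: the equivalent install, pulling the CUDA runtime from the nvidia channel
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
    pytorch-cuda=11.8 -c pytorch -c nvidia
```

Note that both methods bundle the CUDA runtime libraries with the package, so you do not need a system-wide CUDA toolkit for these two paths, only a recent enough NVIDIA driver.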

That is why I am writing this article as a practical living document showing how to install these 2 beasts in all possible ways.

This tutorial is a living document that I plan to use when installing PyTorch & CUDA myself. Thus, I will update it whenever I test something I like. Also, in the comments section, feel free to add any other methods you use to install torch & CUDA or to troubleshoot potential issues. Let’s create the go-to document that makes installing PyTorch & CUDA a piece of cake!

Important observation: I am mainly using Ubuntu, so the concrete examples are based on it. Still, the steps can easily be adapted to other operating systems.

Another important observation: I have used Python 3.10, torch 2.0.1 and CUDA 11.8 in most examples. Feel free to change them with…
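Whichever install method you pick, a quick sanity check confirms you got the versions above with working CUDA support. The checks below are a minimal sketch: the `+cu118` suffix appears on pip wheels (conda builds report the plain version), and `nvidia-smi` verifies the driver sees the GPU independently of Python.

```shell
# Should print something like "2.0.1+cu118" (pip) or "2.0.1" (conda), then "True"
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"

# Driver-level check: lists the GPU(s) and the maximum CUDA version the driver supports
nvidia-smi
```

If `torch.cuda.is_available()` prints `False`, the usual suspects are a CPU-only wheel (no `+cu118` suffix) or a driver too old for the CUDA version you installed.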
