The Simple Guide: Deep Learning with RTX 3090 (CUDA, cuDNN, TensorFlow, Keras, PyTorch)
This tutorial was tested with an RTX 3090. All the commands in this tutorial are run in the terminal.
Purpose: Identify the proper drivers and software versions for getting the RTX 3090 (or any RTX 30-series GPU) working.
Full Deep Learning Installation guide: https://medium.com/@dun.chwong/the-ultimate-guide-ubuntu-18-04-37bae511efb0
1. Nvidia Driver
The RTX 30-series uses the Ampere architecture, so it only works with driver versions 450 and above.
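You can read the installed driver version with `nvidia-smi --query-gpu=driver_version --format=csv,noheader`. As a quick sanity check, a minimal sketch of comparing that string against the 450 floor (the helper name and sample versions are illustrative):

```python
# Illustrative helper: check that an NVIDIA driver version string is new
# enough for Ampere (RTX 30-series needs the 450+ driver series).
def driver_supports_ampere(driver_version: str, minimum_major: int = 450) -> bool:
    """Return True if a version string like '455.23.04' is in the 450+ series."""
    major = int(driver_version.split(".")[0])
    return major >= minimum_major

print(driver_supports_ampere("455.23.04"))  # True
print(driver_supports_ampere("440.100"))    # False
```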
2. Nvidia CUDA
Since we are restricted by the driver version, we can only use CUDA 11.0+.
3. Nvidia cuDNN
With CUDA 11.0+, we can only use cuDNN 8.0+ (which also brings its own set of improvements).
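The constraints in steps 1–3 chain together: Ampere hardware needs driver 450+, which pins you to CUDA 11.0+, which in turn pins you to cuDNN 8.0+. A small illustrative checker for a proposed stack (function name and floors are a sketch of the rules above, not an official tool):

```python
# Illustrative check of the RTX 30-series software stack from steps 1-3:
# driver 450+ -> CUDA 11.0+ -> cuDNN 8.0+. Versions are passed as tuples
# so comparisons are numeric, e.g. CUDA 11.0 is (11, 0).
def stack_is_compatible(driver_major: int, cuda: tuple, cudnn: tuple) -> bool:
    """Return True if the (driver, CUDA, cuDNN) combination meets the floors."""
    return driver_major >= 450 and cuda >= (11, 0) and cudnn >= (8, 0)

print(stack_is_compatible(455, (11, 0), (8, 0)))  # True
print(stack_is_compatible(440, (10, 2), (7, 6)))  # False
```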
4. Python
Spoiler alert: you will need to use TensorFlow 2.5.
Given the spoiler, you need to use Python 3.8+.
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3.8
To work and code in Python 3.8, it’s recommended that you create a new virtual environment (see my full installation guide, up top).
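A minimal sketch of creating and activating such an environment (the path `~/venvs/rtx3090` is an arbitrary example; this assumes `python3.8` is now on your PATH from the step above):

```shell
# Create a fresh virtual environment for the newly installed interpreter
python3.8 -m venv ~/venvs/rtx3090
# Activate it; subsequent python/pip calls use the environment
source ~/venvs/rtx3090/bin/activate
python --version        # should report Python 3.8.x
pip install --upgrade pip
```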
5. TensorFlow
There are currently three options to get TensorFlow working with CUDA 11:
1. Use the nightly version
pip install tf-nightly-gpu==2.5.0.dev20201028
2. “pip install” with one of the wheels from this repo
3. Build from source (this is the safest implementation, but could get messy)
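If you go with option 2, it’s worth confirming that a downloaded wheel actually matches your interpreter before installing it. Wheel filenames follow PEP 427 (`{dist}-{version}-{python tag}-{abi tag}-{platform tag}.whl`); the helper and the sample filename below are illustrative, not real release artifacts:

```python
# Illustrative pre-install check: does a wheel's python tag (e.g. cp38)
# match the interpreter you are running? Assumes a simple PEP 427 name
# with no hyphens in the distribution name.
import sys

def wheel_matches_interpreter(wheel_name: str) -> bool:
    """Return True if the wheel's python tag matches this interpreter."""
    stem = wheel_name[: -len(".whl")]
    _dist, _version, py_tag, _abi, _plat = stem.split("-")
    expected = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return py_tag == expected

# On Python 3.8 this prints True; on other versions, False.
print(wheel_matches_interpreter("tensorflow-2.5.0-cp38-cp38-linux_x86_64.whl"))
```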
6. PyTorch
PyTorch actually released a new stable version, 1.7.0, one day before I started writing this article, and it now officially supports CUDA 11:
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
If that didn’t work, please check out these two links