Video Version: https://youtu.be/Tiq_vea5Eqg
If you’re running into this error when using ONNX Runtime in Python, it’s usually because a CUDA dependency is missing:

ImportError: cannot import name 'get_all_providers' from 'onnxruntime.capi._pybind_state'
The ONNX Runtime docs don’t make it super explicit, but to run ONNX Runtime on the GPU you need to have already installed the CUDA Toolkit and the cuDNN library.
First, check your machine and make sure you have a CUDA-enabled card. If you have a fairly recent NVIDIA card you are probably fine, but check the compatibility tables on the NVIDIA website to be sure.
The Windows Device Manager can be opened via the following steps:
Open a Run window from the Start Menu (or press Windows key + R)
control /name Microsoft.DeviceManager
Check under Display adapters to see your card
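If you'd rather check from code, a quick proxy is whether the NVIDIA driver's nvidia-smi tool is on your PATH and runs successfully. A minimal sketch (the helper name detect_nvidia_gpu is my own; this confirms a driver-visible NVIDIA card, not CUDA compute capability by itself):

```python
import shutil
import subprocess

def detect_nvidia_gpu():
    """Return True if nvidia-smi is on PATH and exits cleanly.

    A rough proxy for "this machine has an NVIDIA card with drivers
    installed"; still check NVIDIA's compatibility tables for CUDA support.
    """
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False
    try:
        # nvidia-smi prints the driver/GPU table and exits 0 when a card is present
        result = subprocess.run([smi], capture_output=True, timeout=10)
        return result.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False

if __name__ == "__main__":
    print("NVIDIA GPU detected:", detect_nvidia_gpu())
```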
Next, install the CUDA Toolkit from the NVIDIA downloads page. The installer opens a setup window; follow the prompts.
Verify your installation on the command line:
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_22:08:44_Pacific_Standard_Time_2021
Cuda compilation tools, release 11.2, V11.2.152
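The same verification can be scripted, which is handy in setup scripts. A sketch that shells out to nvcc and pulls the release number from its output (cuda_toolkit_version is a hypothetical helper; it returns None when nvcc isn't on PATH):

```python
import re
import shutil
import subprocess

def cuda_toolkit_version():
    """Return the CUDA release reported by nvcc (e.g. "11.2"), or None."""
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return None
    out = subprocess.run([nvcc, "-V"], capture_output=True, text=True).stdout
    # nvcc -V prints a line like: "Cuda compilation tools, release 11.2, V11.2.152"
    match = re.search(r"release\s+(\d+\.\d+)", out)
    return match.group(1) if match else None

if __name__ == "__main__":
    print("CUDA Toolkit version:", cuda_toolkit_version())
```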
Now install cuDNN (pick the build that matches your CUDA Toolkit version).
Download the zip and extract it
Copy the following files into the CUDA Toolkit directory.
Copy cuda\bin\cudnn*.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\bin.
Copy cuda\include\cudnn*.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\include.
Copy cuda\lib\x64\cudnn*.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\lib\x64.
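The three copy steps above can also be done with a short script, which is less error-prone if you redo this for a new CUDA version. A sketch (copy_cudnn_files is my own helper; pass the extracted cuDNN folder and your CUDA Toolkit directory, e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x):

```python
import shutil
from pathlib import Path

def copy_cudnn_files(cudnn_root, cuda_root):
    """Mirror the manual steps: cudnn*.dll -> bin, cudnn*.h -> include,
    cudnn*.lib -> lib/x64. Returns the names of the files copied."""
    mapping = {
        "bin/cudnn*.dll": "bin",
        "include/cudnn*.h": "include",
        "lib/x64/cudnn*.lib": "lib/x64",
    }
    cudnn_root, cuda_root = Path(cudnn_root), Path(cuda_root)
    copied = []
    for pattern, dest in mapping.items():
        dest_dir = cuda_root / dest
        dest_dir.mkdir(parents=True, exist_ok=True)
        for src in cudnn_root.glob(pattern):
            shutil.copy2(src, dest_dir / src.name)  # preserves file metadata
            copied.append(src.name)
    return copied
```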
OK, now with all the prerequisites out of the way you can do:
pip install onnxruntime-gpu
(The plain onnxruntime package is CPU-only. Install only one of the two in a given environment; having both installed can cause exactly this kind of import error.)
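To confirm everything worked, you can ask ONNX Runtime which execution providers it can see; CUDAExecutionProvider should be in the list. A small sketch (the helper name cuda_provider_available is my own; it returns False instead of raising when onnxruntime or its CUDA dependencies are missing):

```python
def cuda_provider_available():
    """Return True if onnxruntime imports cleanly and lists the CUDA provider."""
    try:
        import onnxruntime
    except ImportError:
        # onnxruntime not installed, or a native dependency failed to load
        return False
    return "CUDAExecutionProvider" in onnxruntime.get_available_providers()

if __name__ == "__main__":
    print("CUDA provider available:", cuda_provider_available())
```

If this prints False even after installing onnxruntime-gpu, double-check that the CUDA and cuDNN DLLs are on your PATH.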