Running TensorFlow with Docker and GPU
Georgy Georgiev

Great write-up. I'd just like to point out that this tooling is under such active development that your information is already somewhat out of date. You can now skip the first step and use


without running (which has been removed).

Unfortunately, since Nvidia forces users to sign up for their newsletter before downloading cuDNN, installing that is still required. I would also note that in some cases setting environment variables is easier than copying cuDNN into the /nvidia folders:

export LD_LIBRARY_PATH=<path to cudnn>/lib64:$LD_LIBRARY_PATH
export CPATH=<path to cudnn>/include:$CPATH

(LD_LIBRARY_PATH is only consulted when loading shared libraries at runtime, so only the lib64 directory belongs there; the include directory is a header search path, which compilers read from CPATH.)
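As a small sketch of the same idea (using a hypothetical install location, /opt/cudnn, in place of the real path), you can prepend the cuDNN library directory in a way that is safe even when LD_LIBRARY_PATH starts out unset:

```shell
#!/bin/sh
# Hypothetical cuDNN install location -- adjust to wherever you unpacked the archive.
CUDNN_DIR=/opt/cudnn

# Prepend lib64 to LD_LIBRARY_PATH. The ${VAR:+...} expansion appends the
# old value only if it was non-empty, avoiding a trailing colon (an empty
# path element means "current directory" to the dynamic loader).
export LD_LIBRARY_PATH="$CUDNN_DIR/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

echo "$LD_LIBRARY_PATH"
```

Putting these lines in your shell profile (or the Dockerfile's ENV directives) keeps the change persistent across sessions.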
