Bootstrapping TensorFlow on Ubuntu 16.04 (Xenial) on an Amazon GPU Instance

Vincent Chu
Feb 8, 2017


Update (3/17/2017): recipe updated for TensorFlow v1.0

I’ve been spending some time playing around with TensorFlow and the Inception family of deep learning architectures. Because of the size and complexity of these architectures, my personal laptop was woefully underpowered for any real training.

Luckily, Amazon offers three types of GPU-enabled compute instances. While they aren’t cheap, they offer significant improvements in speed for deep learning related tasks.

Unfortunately, building an AMI with TensorFlow (compiled against NVIDIA’s CUDA/cuDNN libraries) wasn’t straightforward, and required me to piece together a ton of different posts and Dockerfiles. For posterity, I’ve posted my recipe here:
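The gist with the original recipe may not render here, so below is a minimal sketch of the kind of setup it describes. This is not the original gist; the installer filenames, package versions, and paths are assumptions for illustration. TensorFlow 1.0’s prebuilt GPU wheels targeted CUDA 8.0 and cuDNN 5.1, so that pairing is assumed throughout:

```shell
# Sketch of a TensorFlow 1.0 GPU setup on Ubuntu 16.04 (not the original recipe;
# filenames and versions below are assumptions).

# Toolchain needed to build the NVIDIA kernel module
sudo apt-get update
sudo apt-get install -y build-essential

# Install the CUDA 8.0 toolkit and driver. First download the Ubuntu 16.04
# installer .deb from NVIDIA's CUDA downloads page; the filename is an assumption.
sudo dpkg -i cuda-repo-ubuntu1604-8-0-local_*.deb
sudo apt-get update
sudo apt-get install -y cuda

# Copy cuDNN 5.1 (a registered download from NVIDIA) into the CUDA tree.
tar xzf cudnn-8.0-linux-x64-v5.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64/

# Make the CUDA libraries visible at runtime
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

# Finally, install the GPU-enabled TensorFlow wheel
sudo apt-get install -y python-pip
pip install --upgrade tensorflow-gpu==1.0.0
```

A reboot after installing the driver is usually needed before `nvidia-smi` reports the GPU.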

If you’re successful, you should see the CUDA libraries load when you import tensorflow in a Python REPL:

>>> import tensorflow
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
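Beyond watching the libraries load, you can ask TensorFlow directly which devices it sees. This is a sketch using `device_lib` (present in TensorFlow 1.0; assumes tensorflow is installed):

```python
# List the compute devices TensorFlow can see; on a working GPU instance
# a GPU device should appear alongside the CPU.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
```

If only a CPU device shows up, the CUDA driver or cuDNN copy step above is the usual culprit.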

Happy training! My own informal benchmarks show a >40x speedup over my laptop.


Vincent Chu

Current: Partner at @initializedcap. Former: @ClaraLending, @twitter, @posterous. Studied Physics and Mathematics at Stanford and Harvard.