TensorFlow with GPU in Ubuntu

I will be outlining the steps I followed to set up TensorFlow on my Ubuntu 15.10 system with GPU support. TensorFlow is an open-source library developed by Google for building and training neural networks. It uses data flow graphs, where the graph edges carry the tensors flowing from node to node and the nodes represent the mathematical operations applied to that data.

There are two broad builds of TensorFlow: one that runs on your CPU and one that runs on your GPU. With general-purpose computing on GPUs now commonplace, TensorFlow can use the GPU to accelerate its computations. Right now TensorFlow only works with CUDA, a parallel computing platform and API developed by Nvidia. There is ongoing work on getting TensorFlow to run on OpenCL, an open standard alternative to CUDA; you can follow the updates on that here.

So, if you have an Nvidia GPU, installing the TensorFlow GPU version is the way to go. A GPU with more than 4 GB of memory would be ideal for heavy deep learning applications (mine has just 2 GB. *sigh*). I have tried to aggregate all the steps I followed to get this done here.

Default installations of Linux distributions like Ubuntu might not have the Nvidia GPU drivers installed, so you need to install and enable the required drivers yourself. This is a tricky process: if it isn't done right, you can break your system's display entirely (it happened to me the first time I attempted it!). I have listed the steps to follow next.

Enabling Nvidia GPU in Ubuntu 15.10

First, before installing the driver, purge any stray Nvidia drivers or associated software (like Bumblebee) that may already be installed:

sudo apt-get purge 'nvidia*' bumblebee

There is also an open-source graphics driver named Nouveau that Ubuntu installs by default. Nouveau often conflicts with the proprietary driver, so make sure it is blacklisted as well (a quick Google search will list the steps; a minimal sketch follows).
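For reference, here is the usual approach (the configuration file name below is just a convention, not something Ubuntu requires):

printf "blacklist nouveau\noptions nouveau modeset=0\n" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
sudo update-initramfs -u   # rebuild the initramfs so the blacklist takes effect
sudo reboot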

This is a very good guide on how to enable Nvidia GPUs on an Ubuntu system. However, in addition to the steps in the guide, install nvidia-prime using sudo apt-get install nvidia-prime. Nvidia Prime lets you switch between the integrated and the discrete GPU. Also remember that these are major system changes and require a system reboot to take effect (sudo reboot).
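Once the driver is installed and the machine has rebooted, a quick sanity check looks something like this (assuming the driver packages shipped with the nvidia-smi utility, which the standard ones do):

prime-select query   # should print "nvidia" when the discrete GPU is selected
nvidia-smi           # lists the GPU model, driver version and memory usage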

Installing CUDA

I decided to install CUDA 7.5. CUDA has tons of prerequisites; however, if you are a software developer, you probably already have most of them installed. The full list of prerequisites can be found here.
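On a typical Ubuntu box, the essentials boil down to a compiler toolchain and the kernel headers for your running kernel. Roughly (these package names are my shorthand; the official list is authoritative):

sudo apt-get install build-essential linux-headers-$(uname -r)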

CUDA 7.5 can be downloaded from here. Select the operating system, architecture, distribution, version and installer type, and you can then download the installation files along with detailed instructions on how to install them.
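As a rough sketch of the .deb route (the repository package filename below is a placeholder; use the exact name and commands from the download page):

sudo dpkg -i cuda-repo-<distro>-7-5-local_<version>_amd64.deb   # placeholder filename
sudo apt-get update
sudo apt-get install cuda
export PATH=/usr/local/cuda/bin:$PATH   # put nvcc on your PATH
nvcc --version                          # should report "release 7.5" if all went well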

Installing cuDNN

cuDNN is a library that sits on top of CUDA and provides primitives for deep neural networks. Sign up for the Nvidia Accelerated Computing Developer Program and download the cuDNN files; I downloaded the cuDNN 5.0 files. Extract the archive and copy its contents into the relevant folders of the CUDA installation using the following steps.

tar -zxf cudnn-7.5-linux-x64-v5.0-ga.tgz

cd cuda

sudo cp -P lib64/* /usr/local/cuda/lib64/

sudo cp include/* /usr/local/cuda/include/
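Depending on how the archive was extracted, the copied files may not be readable by non-root users; if TensorFlow later fails to load cuDNN, making them world-readable usually fixes it:

sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*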

Setting up TensorFlow

If your system has pip installed, this is an easy task. If not, install pip using:

sudo apt-get install python-pip python-dev
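You can quickly confirm that pip is available (and which Python it is tied to) with:

pip --version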

After that, run:

export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl

This sets the URL where the binary resides. The URL changes from release to release, so make sure you use the one that matches your Python version and setup. Next, run:

sudo pip install --upgrade $TF_BINARY_URL

Finally, check that all is well with the installation.

Set the environment variables so TensorFlow can find the CUDA libraries:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda
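These exports only last for the current shell session; to make them permanent, you can append them to your ~/.bashrc (a common convention, not a requirement):

echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
source ~/.bashrc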

Open a Python terminal and type:

import tensorflow

You should see TensorFlow loading the CUDA libraries (libcublas, libcudnn, libcufft and so on) without any errors.
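To go a step further and confirm that operations actually land on the GPU, here is a small check using the session API of this TensorFlow release (the constant values are arbitrary):

import tensorflow as tf

# log_device_placement=True makes TensorFlow print which device each op is assigned to;
# you should see your GPU (e.g. "/gpu:0") in the output.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
a = tf.constant([1.0, 2.0, 3.0], name='a')
b = tf.constant([4.0, 5.0, 6.0], name='b')
print(sess.run(a + b))   # should print [ 5.  7.  9.]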

Yep, that is it! TensorFlow is up and running with GPU support. In the next blog post, I will cover how to use this setup to actually do some deep learning.

PS: Feedback on any sections of this post is more than welcome!