Setting up your GPU TensorFlow platform

Manuel Sánchez Hernández
3 min read · Jun 11, 2017


If you want to use TensorFlow with a GPU, you currently have two choices: either do your own manual install (good luck with that) or use a Docker image. If you use the TensorFlow GPU Docker image, it is almost plug and play: you can start coding immediately in a Jupyter notebook with all the TensorFlow GPU libraries installed. But what if you want to use your own libraries on top of it? And how do you access your own files?

In this post I will explain exactly that. I assume the reader already has Docker installed and knows the very basics (for example, Part I and Part II of the Docker Get Started guide).

Docker has several advantages: it is portable, it allows for version control and reproducibility, and it is very efficient (more so than a virtual machine). However, it requires some setup that is not straightforward.

Once you have installed the GPU Docker image, following this intro, you will want to run it in bash mode, not straight into Jupyter, with access to your files and to the corresponding ports. Inside that container, you will want to install your own libraries and be able to recover them later, as Docker containers are ephemeral and isolated. I will explain how to do all that.
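If you like, you can pull the image beforehand (this is optional, since “run” downloads it automatically the first time):

docker pull tensorflow/tensorflow:latest-gpu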

Opening the TensorFlow GPU Docker image in a bash shell, opening the right ports, and letting it read and write in a local directory

Easy. You just need to write the following:

nvidia-docker run -it -p 8888:8888 -p 6006:6006 -v <your local folder>:<corresponding Docker Folder> tensorflow/tensorflow:latest-gpu /bin/bash

Where:

  • The options “-it” and “/bin/bash” are needed to execute the shell instead of the Jupyter notebook. This will allow you to install any library.
  • “-p 8888:8888” and “-p 6006:6006” allow you to open these two ports, one for a Jupyter notebook and one for TensorBoard.
  • “-v <your local folder>:<corresponding Docker Folder>” allows the Docker image to read and write in one of your local folders (for example: “/home/ubuntu/code”). This folder will appear inside the container as the <corresponding Docker folder> (for example, “/code”). A fully filled-in example follows this list.
  • “tensorflow/tensorflow:latest-gpu” is the name of the image that TensorFlow provides, which enables GPU support.
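For example, assuming your code lives in “/home/ubuntu/code” and you want it mounted as “/code” inside the container (both paths are just illustrative), the command becomes:

nvidia-docker run -it -p 8888:8888 -p 6006:6006 -v /home/ubuntu/code:/code tensorflow/tensorflow:latest-gpu /bin/bash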

Now that you are in the bash shell, you can install any library you need. If you would like to run several terminal sessions inside the container, I recommend tmux, which is the equivalent of screen, but on steroids.
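As a small sketch, assuming you want tmux plus an extra Python library such as Keras (both are just example choices, not requirements), you could run the following inside the container:

apt-get update && apt-get install -y tmux
pip install keras
tmux new -s work
jupyter notebook --ip=0.0.0.0 --allow-root

The last line is only needed if you also want a notebook server; thanks to “-p 8888:8888” it will be reachable from your browser on port 8888.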

How to save your modified Docker image

Once you exit the container (just type “exit” in the shell), you can commit the changes you made. Just run

docker ps -a

And you will get the list of recently run containers. Take the “CONTAINER ID” of the one you want to save, usually the latest one, and type:

docker commit <container_id> <image_name>

Where <container_id> is the container identifier from the previous command, and <image_name> is the new name that you want to give to your image. To run it again, you just need to type the previous run command, now with your new image name:

nvidia-docker run -it -p 8888:8888 -p 6006:6006 -v <your local folder>:<corresponding Docker Folder> <image_name> /bin/bash
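Putting it together with hypothetical values (a container ID of f3a1b2c3d4e5 and an image name of my-tf-gpu), the two steps would look like this:

docker commit f3a1b2c3d4e5 my-tf-gpu
nvidia-docker run -it -p 8888:8888 -p 6006:6006 -v /home/ubuntu/code:/code my-tf-gpu /bin/bash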

And you are done!
