GCP and Fast Ai v1: A full setup that’ll work

This post gives you a fully functional GPU environment for using fastai to teach yourself deep learning. Pinning down the dependencies here took some digging around.

Given this is not an exercise in eloquence, self-reflection or writing ability, I am going to be terse and cut right to the chase.

This post walks you through setting up a fully functioning Google Cloud Platform instance with a GPU to support your fastai exercises.

What you get:

A virtual machine backed by a Tesla K80 GPU to run fastai v1 (not course v3, which you can set up quickly on Colab but which doesn't let you persist files or environments). New GCP accounts get $300 in free credits to get you going.

Assumptions:

  • You know how to set up the gcloud SDK.
  • You know how to create a GCP billing account, to the point where you can start creating a Virtual Machine.
  • You know what terms like conda, virtual environment, GPUs, Firewall, SSH mean.

We start from square one:

1. Set up a new GPU Instance

Set up billing on GCP for a Google project so you can start a new Virtual Machine. This is a plain VM, not the ready-to-use fastai VM image.

Your new VM setup:

You can use any other zone as long as you get a GPU.

Important: tick the "Allow HTTP traffic" and "Allow HTTPS traffic" checkboxes before clicking "Create".

Your GPU quota might be 0, in which case the instance won't start (you'll see a red exclamation mark instead of a green tick next to your instance name).

Request extra GPU quota from the IAM & Admin > Quotas page in the GCP console.

You should now have a fresh new VM to exploit.
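If you prefer the command line to the console, a VM like this can also be created with the gcloud SDK. This is only a sketch: the instance name `fastai-vm`, the zone, the machine type and the disk size below are all assumptions you should adjust to your own project.

```shell
# Sketch: create an Ubuntu 16.04 VM with one K80 GPU attached.
# The name, zone, machine type and disk size are assumptions; adjust as needed.
gcloud compute instances create fastai-vm \
    --zone=us-west1-b \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-k80,count=1 \
    --maintenance-policy=TERMINATE \
    --boot-disk-size=100GB \
    --tags=http-server,https-server
```

GPU instances must use `--maintenance-policy=TERMINATE`, since GPU VMs can't be live-migrated during host maintenance.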

2. Log in to your VM from your local machine.

On your local console, SSH to your instance (I am hoping you have the gcloud SDK set up):

$ gcloud init    # set your default project and zone
$ gcloud compute ssh <name of instance>

Once you are in your VM from your local, the actual tasks begin.

3. Setting up your VM

Yours is an Ubuntu instance; run the following in its terminal:

sudo apt-get update
sudo apt-get install bzip2 git libxml2-dev

Set up Conda:

$ wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh
$ bash Anaconda3-5.0.1-Linux-x86_64.sh
$ rm Anaconda3-5.0.1-Linux-x86_64.sh
$ source ~/.bashrc

Download Fastai to your VM:

git clone https://github.com/fastai/fastai.git

Once it has downloaded, move into the fastai directory:

cd fastai

Install fastai dependencies into a new conda virtual environment:

conda env create -f environment.yml

This creates a virtual env called fastai. Activate it.

source activate fastai
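Before going further you can confirm the environment is active and fastai imports cleanly. A quick sketch (run inside the activated env):

```shell
# The active env is marked with a '*' in the list
conda env list
# Confirm fastai imports and report its version
python -c "import fastai; print(fastai.__version__)"
```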

Next, set up Jupyter on your VM so it runs well. You need to generate its config file and set a password:

jupyter notebook --generate-config
jupyter notebook password

The second command sets a password for your notebook environment. Next, edit the config:

vim .jupyter/jupyter_notebook_config.py

Vim opens the config file in your terminal. (To edit: press 'i', type what you want, then press Escape; to save and exit, type ':wq'.)

Add the following to the config:

c.NotebookApp.ip = '0.0.0.0'
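For reference, here is a slightly fuller config fragment. Only the `ip` line is required for this setup; the other two lines are optional assumptions that make a headless VM a bit more convenient.

```python
# Listen on all interfaces so the notebook is reachable from outside the VM
c.NotebookApp.ip = '0.0.0.0'
# Optional: don't try to open a browser on the headless VM
c.NotebookApp.open_browser = False
# Optional: pin the port to match the firewall rule you create below
c.NotebookApp.port = 8888
```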

Now set up a firewall rule on your GCP project to allow these settings:

On your GCP list of VMs, open up the network settings:

Next from the left panel select:

Create a new firewall rule with the following settings:
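The same rule can be sketched with the gcloud CLI instead of the console. The rule name `jupyter-8888` is an assumption, and note that `0.0.0.0/0` opens the port to the whole internet; consider restricting the source range to your own IP.

```shell
# Sketch: allow inbound TCP on port 8888 from anywhere (assumed rule name).
gcloud compute firewall-rules create jupyter-8888 \
    --direction=INGRESS \
    --allow=tcp:8888 \
    --source-ranges=0.0.0.0/0
```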

Now try running Jupyter on the VM. Use port 8888, or permit other TCP ports in your firewall rule.

jupyter notebook --port=8888

In your browser, open:

<Your VM’s external IP>:8888

This should open up the jupyter window on your browser and ask for the password you entered earlier.

Good. Now install the CUDA drivers on your VM.

Back in your SSH session on the VM (if Jupyter is still running, log in to the VM in another terminal tab and kill the Jupyter process on 8888), run:

curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt-get update
rm cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt-get install cuda-8-0
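Once the install finishes (a reboot may be needed for the driver to load), you can sanity-check it. `nvidia-smi` should list the K80; the `nvcc` path below assumes the default CUDA install location.

```shell
# Verify the NVIDIA driver can see the GPU (should list a Tesla K80)
nvidia-smi
# Verify the CUDA toolkit version (path is the default install location)
/usr/local/cuda/bin/nvcc --version
```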

Relaunch Jupyter on the VM. In your browser, go to:

<Your external IP>:8888

Should be done.

In your browser, navigate to any of the fastai courses > lectures and start working. To check that your GPU is working, after importing the fastai modules (so that torch has been imported), run

torch.cuda.is_available()

This should return True.
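A slightly fuller check, assuming you run it inside the fastai environment where torch is installed:

```python
import torch

# Confirm CUDA is available; on this VM it should print True
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Report which device PyTorch sees; on this setup it should mention "Tesla K80"
    print(torch.cuda.get_device_name(0))
```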