Setting Up a Google Cloud GPU Instance for fast.ai for Free

James Lee
10 min read · Dec 13, 2017


EDIT* This guide was written for fastai version 1, which as of this writing (Jan 2018) is in the midst of transitioning to a newer version, dubbed fastai v2. An updated guide will be coming soon.

As a deep learning enthusiast in Malaysia, one of the biggest issues I have is securing a cheap GPU option to run my models on. If you’re like me and come from a country where paying $80-$100 every month for AWS GPUs is too expensive, I’ll show you how I set up my GPU on GCP without incurring any cost at all.

In this article I’ll walk you through setting up a Google Cloud compute instance with a 500 GB SSD, a Broadwell CPU with 3.75 GB of RAM and an Nvidia Tesla K80 GPU. All of this can be done for free at the start.

I’ll be setting up my instance in the Asia (Taiwan) servers since I’m pretty sick of the high latency I get when using servers in America or Europe, but you can change this on your own later.

Google Cloud Platform

Google Cloud Platform is a cloud computing infrastructure that provides secure, powerful, high-performance and cost-effective services. It can do a lot more than data analytics and machine learning, but that’s for another time. Check it out over here.

Google is promoting usage of their platform right now and is giving away $300 of credit and 12 months of free tier usage.

Psst…you get $300 for each Google account you have 😉 😉

So what are you waiting for?

Seriously go get them though, you’ll need these credits to continue.

Starting off

To start off you’ll need to upgrade your account to a paid account. Free trial accounts aren’t allowed any quota for GPUs so this is a mandatory step.

Don’t worry, you won’t be charged anything yet. You’ll only start getting charged once you run out of credits, and $300 can last you up to a month if you’re wise about it.

Go to your GCP console here and click on the menu button on the top left.

Look for the Billing option and click it. If you haven’t upgraded to a paid account yet, you should see a small button near the top that says Upgrade to a paid account. Go ahead and do that.

Creating a Project

Smash that menu button again. This time you’ll be looking for the Compute Engine option instead, smash that too. 😃

Next, look for the three hexagon-ish dots in the toolbar on top.

Smash it and it should bring up a window that lists all your projects. Hit the + button to add a new one. Name your project and move on.
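
If you’d rather skip the clicking, you can also create the project from your terminal once you’ve installed the Cloud SDK (covered further down). A minimal sketch, with a made-up project ID you should swap for your own:

gcloud projects create my-fastai-project --name "my-fastai-project"
gcloud config set project my-fastai-project

You may still need to link a billing account to the new project through the console afterwards.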

Regions and Zones

First off you’re gonna have to do a little reading and deciding. You’ll need to figure out which region and zone you want your cloud computer to be at.

Basically each region has a bunch of zones and not all zones offer the same services. Some have SSDs and GPUs, some don’t.

Pay attention to which zones have Broadwell CPUs, as GPUs can only be attached to Broadwell generation or higher CPUs.

This link shows you what each zone offers and this link shows you which zones GPUs are available in.

Since I’m based in Southeast Asia, the nearest zone with GPUs is asia-east1-a so I’ll be using that for my set up.

The names of regions and zones are concatenated in GCP. For example in asia-east1-a, the region is asia-east1 and the zone is a. Pretty confusing huh. 😓
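
If you already have the Cloud SDK installed (we’ll get to that below), you can also list the GPU types a zone offers straight from the command line. A rough sketch, assuming the accelerator-types listing works the same for your zone:

gcloud compute accelerator-types list --filter="zone:asia-east1-a"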

GPU Quota

Next you’ll have to request an increase to your GPU quota. Most projects start with a quota of 0 GPUs available, so you’ll have to get on your knees and ask for some love from big daddy G.

Smash that menu button once more and look for IAM & admin. Smash.

Once you’re at the IAM & admin page, look for the option Quotas on the side menu. Smash.

Okay, it’s gonna get a little tricky here. You’ll be brought to a page filled with things that are all called Google Compute Engine API. It looks like this:

Remember that region-zone you decided on earlier? You’ll need to use it now. Smash the regions drop down and select your region only. You’ll now see the Google Compute Engine APIs for that region only.

Ctrl+F and search for “k80”, check it and smash Edit Quotas on top. Fill in your details, the number of GPUs you want to request and a justification. I only requested 1, but you can ask for more if you want. Just be aware that more GPUs will cost more $$$ per hour, so be prepared for that. We’re done here, moving on.

Requested one? Good. It should take under 5 minutes for an email to pop up in your inbox saying your request has been approved. Give thanks to big daddy G.

Creating a VM Instance

Next you’ll have to create the VM instance. I did it through my terminal on an Ubuntu computer, so that’s what I’ll be using in this article. You can do it from the console just as easily too. To use the console instead, go to Compute Engine from the menu button and click on VM Instances. Be sure to set up your instance with a Broadwell CPU. Read the docs on setting it up here.

Anyway, you’ll have to install google-cloud-sdk on your local machine. The docs have a pretty good guide on setting it up for different OSes, so check it out.

Once you’re done, run this snippet on your terminal:

gcloud compute instances create jamsa-dl-fastai \
--min-cpu-platform "Intel Broadwell" \
--machine-type n1-standard-1 --zone asia-east1-a \
--boot-disk-size 500GB --boot-disk-type=pd-ssd \
--accelerator type=nvidia-tesla-k80,count=1 \
--image-family ubuntu-1604-lts --image-project ubuntu-os-cloud \
--maintenance-policy TERMINATE --restart-on-failure \
--metadata startup-script='#!/bin/bash
echo "Checking for CUDA and installing."
# Check for CUDA and try to install.
if ! dpkg-query -W cuda-8-0; then
  curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
  dpkg -i ./cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
  apt-get update
  apt-get install cuda-8-0 -y
fi'

You can get the snippet here. “jamsa-dl-fastai” in the first line is the name of the instance, change it to whatever you fancy.
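
Once the command finishes, it doesn’t hurt to double-check that the instance actually exists and landed in the zone you expected. A quick way, assuming you’re still authenticated against the same project:

gcloud compute instances list

You should see your instance listed along with its zone, machine type, IPs and status.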

Getting into your Instance

Once your instance is set up, head to your GCP console >> Compute Engine >> VM Instances.

Your new instance should be listed there, along with its IP.

Before you can go into the instance you’ll need to edit some settings. Click on the instance name and you’ll be brought to the instance settings.

Click edit at the top and scroll down to the Firewalls. Check both “Allow HTTP traffic” and “Allow HTTPS traffic”. This is so that you can connect to Jupyter Notebook which you’ll be using in the course.

Next add the tag “jupyter” without the double quotes into the Network tags. Save and go back.
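
If you prefer the terminal, the same settings can be applied with gcloud; the HTTP and HTTPS checkboxes correspond to GCP’s default http-server and https-server network tags. A sketch using the example instance name and zone from earlier:

gcloud compute instances add-tags jamsa-dl-fastai \
--tags jupyter,http-server,https-server \
--zone asia-east1-a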

Reserve a Static IP Address

Every time you spin up your instance, a new IP address will be assigned to it. This gets pretty annoying if you’re a frequent user. A static external IP address is one that is reserved for your project until you decide to release it. Let’s go ahead and request one. Go back to the menu button and look for VPC network >> External IP addresses.

Click reserve a static address at the top. Name it, check IPv4 and Regional, select your region and attach it to your instance. It should look something like this:

Go ahead and reserve it.
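
For reference, the reservation itself can also be made from the terminal (the address name below is just an example); attaching it to the instance is easiest through the console as described above:

gcloud compute addresses create fastai-static-ip --region asia-east1
gcloud compute addresses list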

Back at your VM instances console, check your instance and click Start at the top. Once it’s done starting up you can SSH into your instance from your local machine. Fire up your terminal and enter:

gcloud compute ssh username@instance_name

Make sure to change username to your GCP account username and instance_name to the instance name you chose for yourself.

So if your username is “i-is-smarts” and the instance is called “smarts-pants-land”, you’ll enter this:

gcloud compute ssh i-is-smarts@smarts-pants-land

Nvidia Check

Now that you’re in your instance, you’ve got a little bit more housekeeping to do.

First run:

nvidia-smi

If your GPU was set up properly during instance creation it should look like this:

If not, you can reinstall it with:

curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.44-1_amd64.deb      
sudo dpkg -i ./cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda-8-0 -y

Fast.ai Course Material

Next you’ll be getting all the course files from fast.ai. Enter:

wget https://raw.githubusercontent.com/fastai/courses/master/setup/install-gpu.sh

Make sure it’s all in one line; the command is too long for Medium’s code snippet block.

The file you just got is a nifty little script that installs all the dependencies like Anaconda2 and Keras, configures your Jupyter Notebook and git clones the fast.ai lesson materials. Run it with:

sudo sh install-gpu.sh

The sudo password on your instance is empty, unless you’ve already set one up. Reboot your instance either through the GCP console or with:

sudo reboot

Configuring Jupyter Notebook

Jupyter Notebook is the IDE you’ll be using throughout the course so we’ll need to configure it beforehand. It was installed earlier alongside Anaconda when you ran the install-gpu.sh script. If it wasn’t, go ahead and install Anaconda again.

On your local machine (not the VM instance), add a firewall rule to allow access to port 8888, which is what you’ll be using for Jupyter:

export PROJECT="project_name"
export YOUR_IP="external_ip_of_your_local_machine"
gcloud compute --project "${PROJECT}" firewall-rules create "jupyter" --allow tcp:8888 --direction "INGRESS" --priority "1000" --network "default" --source-ranges "${YOUR_IP}" --target-tags "jupyter"

Be sure to edit “project_name” and “external_ip_of_your_local_machine” to the corresponding values. Make sure that “project_name” is the project and not instance name (I admit I mixed that up at first 😅).
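
If you’re not sure what your local machine’s external IP is, one quick way is to ask an external service such as ifconfig.me (just one example of such a service). Since the firewall rule takes CIDR ranges, appending /32 limits it to exactly your address:

export YOUR_IP="$(curl -s -4 ifconfig.me)/32"
echo "${YOUR_IP}"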

Finishing Up

Once your instance has restarted, ssh into it again. Then start up Jupyter Notebook:

jupyter notebook --ip=0.0.0.0 --port=8888

EDIT: Some people have been getting ERR_CONNECTION_TIMEOUT while trying to connect to their Jupyter Notebook. Make sure that you give yourself ownership of your .jupyter and anaconda directories with:

sudo chown -R username:username .jupyter
sudo chown -R username:username anaconda2/

Where username is your GCP username that appears in your alias when you ssh into the instance (username@instance_name; so the part before the ‘@’).

END EDIT

You may be prompted to set up a password for Jupyter if it’s your first time. Do so if you wish; you can always set it later. For now you should see a token when you fire it up. Something like:

Remember it for now.

Go back to your GCP console >> VM Instances. Get the external IP of your instance.

To connect to Jupyter Notebook on your local machine’s browser, go to external_ip:8888 and you should be in. You’ll need the token you got earlier to log in if you didn’t set up a password.

IMPORTANT — ALWAYS, ALWAYS TURN OFF YOUR VM INSTANCE WHEN YOU ARE NOT USING IT. IF YOU DON’T, YOU’LL CONTINUE TO BE CHARGED FOR EVERY HOUR IT IS UP.
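
You can stop and start the instance straight from your terminal too, which makes it easier to build the habit. A minimal sketch, using the example instance name and zone from earlier:

# Stop the instance when you're done for the day
gcloud compute instances stop jamsa-dl-fastai --zone asia-east1-a

# Start it up again when you're ready to work
gcloud compute instances start jamsa-dl-fastai --zone asia-east1-a

A stopped instance still incurs (small) persistent disk charges, but not compute or GPU charges.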

Congrats! You’re all done setting up your GPU cloud instance on GCP. Kudos goes to Nok for beating me to the punch and writing an article about this first (I burned a whole day reading the GCP docs before finding that article), the folks at the fast.ai forums for all the resources and eshvk for coming up with a similar method too.

Alright folks, that’s it for now. If you’ve got any questions, you can ping me on Twitter (James Lee) or drop me a response down here and I’ll get back to you ASAP.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Find this useful? Feel free to smash that clap and check out my other works. 😄

James Lee is an AI Research Fellow at Nurture.AI. A recent graduate from Monash University in Computer Science, he writes about interesting papers on Artificial Intelligence and Deep Learning. Find him on Twitter at @jamsawamsa.
