Nikhil Koduri
May 22 · 5 min read

We recently concluded an interesting and challenging consulting engagement for one of our customers in North America. While we have automated several GKE deployments with Terraform in the past, this one was especially interesting because it involved deploying self-managed Kubernetes clusters on VMs, on GCE. Why self-managed? For greater control, obviously. You can find more details below.

Limitations that we overcame by deploying Kubernetes on GCE

  • Flexibility to use custom images for deployments
  • Full control over the master node
  • Access to low-level components of the cluster
  • Additional customization of the Kubernetes cluster
  • Choice of Container Network Interface (CNI)
  • Easy upgrade or downgrade of the Kubernetes version
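For instance, with a self-managed cluster the control plane version is entirely in your hands. A sketch of pinning and later upgrading the version with kubeadm (the version numbers here are illustrative, not the ones from our deployment):

```shell
# Bootstrap the control plane at an exact, pinned version
sudo kubeadm init --kubernetes-version v1.13.4

# Later, preview the available upgrade and apply it in place
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.13.5
```

On GKE you are limited to the versions Google exposes; here, any released version is fair game.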

Now, a question inevitably pops up when you read the title of this blog:

“Isn’t it hard to configure Kubernetes manually?”

But that’s where the might of Terraform swoops in to save the day. Terraform is an Infrastructure-as-Code powerhouse: declarative in nature, and able to deploy resources to nearly every cloud provider that exists. Hence we utilized Terraform to automate our tasks. From creating the infrastructure to deploying the Kubernetes components, everything was orchestrated with Terraform.

Before diving into deploying this architecture, let us have a look into the features that it provides:

  • Horizontal Pod Autoscaling
  • Cluster Autoscaling
  • Calico CNI
  • GPU provisioning
  • YAML deployments from Terraform
  • Prometheus
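As a quick illustration of Horizontal Pod Autoscaling: once the cluster is up, any deployment can be autoscaled from the command line (the deployment name and thresholds below are illustrative):

```shell
# Scale the nginx deployment between 1 and 5 replicas,
# targeting 50% average CPU utilization across its pods
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=5

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa
```

Note that HPA relies on the metrics pipeline being available in the cluster to read CPU utilization.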

Let us begin.


  • Terraform — you can download and install Terraform using this guide.
  • Go — follow these steps to install and set up Go in your environment:
1) Download and extract Go (version 1.11.2 shown here, from the official download site):

curl -O https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
tar -xvf go1.11.2.linux-amd64.tar.gz
sudo mv go /usr/local

2) Open your profile file to set Go’s environment variables, which tell Go where to look for its files:

sudo nano ~/.profile

3) Add the environment variables below at the bottom of the profile file:

export GOPATH=$HOME/work
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

4) Refresh your profile:

source ~/.profile

5) Verify your installation:

go version

  • A GCP service account JSON key for Terraform to authenticate with your GCP project. Export its path as GOOGLE_APPLICATION_CREDENTIALS:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/admin-sa-key.json"

  • The k8s Terraform provider:
1) Fetch the provider onto your local system with go get (the provider’s import path was lost from the original post; substitute the repository path of the k8s provider you are using):

go get -u <k8s-provider-repository>

This will take a significant amount of time, as Go fetches all the dependencies the provider requires.
2) Once done, create or append to the ~/.terraformrc file with the lines below:

providers {
  k8s = "$GOPATH/bin/terraform-provider-k8s"
}

Now we are all set! Let us deploy the cluster:

  • Clone the repository (linked above) to get your hands on the Terraform scripts.
  • Modify or add the required attributes in the terraform.tfvars file to customize things such as cluster name, project id, region of the cluster, etc.
  • Once you are all set with the attributes of the cluster, we can proceed with deploying the cluster.
1) Initialise Terraform and download all the plugins:

terraform init

2) Run a Terraform plan and check that all the resources will be created with the right attributes:

terraform plan

3) Once everything is in place, deploy and let Terraform do its magic:

terraform apply

After the apply completes successfully, you can check the GCP console for the instances that have been created.

You can use the following command to check whether all the nodes have joined and are Ready:

kubectl get nodes

Now, run the following command to view all the pods in action.

kubectl get pods --all-namespaces

Just for my amusement, I deleted the pods of the Prometheus deployment.
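The deletion itself is a one-liner. A sketch, assuming Prometheus runs in a monitoring namespace with an app=prometheus label (both the namespace and the label are assumptions, not taken from this deployment):

```shell
# Delete all Prometheus pods; the Deployment's ReplicaSet
# immediately recreates them to restore the desired replica count
kubectl delete pods -n monitoring -l app=prometheus
```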

pod deleted

And, just as Kubernetes should, it raises a new pod from the ashes.

The new pod is deployed

Now let us take a deep dive into the Cluster Autoscaler feature of this Kubernetes deployment.

Cluster Autoscaler is a nifty tool that adjusts the size of the cluster according to the needs of the Kubernetes workloads. It scales up or scales down based on two scenarios:

  1. It scales up when pods cannot be scheduled on the existing nodes due to insufficient resources, and adding a similar node to the cluster would help.
  2. It scales down when nodes have been underutilized for an extended period of time and their pods can be rescheduled onto other nodes.
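Cluster Autoscaler’s behaviour is driven by a min:max node range per node group. A sketch of the relevant flags when running it against GCE managed instance groups (the project, zone, and MIG name placeholders are illustrative; check the Cluster Autoscaler documentation for the exact URL format your version expects):

```shell
# Illustrative Cluster Autoscaler invocation on GCE:
# keep the worker MIG between 1 and 5 nodes
./cluster-autoscaler \
  --cloud-provider=gce \
  --nodes=1:5:https://content.googleapis.com/compute/v1/projects/<project>/zones/<zone>/instanceGroups/<mig-name>
```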

Enough talk; let us see the Cluster Autoscaler in action:

Initially, there is only one worker node

As we can see, there are currently two nodes in the cluster (the master and one worker). Let us add some load to the cluster to trigger autoscaling. I will create an nginx deployment with 8 replicas, each requesting 500m of CPU.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 8
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "500m"
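Assuming the manifest above is saved as nginx-deployment.yaml (the filename is illustrative), apply it and watch the pods schedule:

```shell
kubectl apply -f nginx-deployment.yaml

# Watch the pods; with only one worker node,
# some replicas remain Pending for lack of CPU
kubectl get pods -l app=nginx -o wide --watch
```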
One pod is pending

And, just as expected, the pods cannot be scheduled due to insufficient CPU on the existing node. The Cluster Autoscaler scales up the node group and joins new nodes to the cluster, as you can see in the screenshot below.

Instance template is scaling from 0 to 1

Before you know it, the new node is up and ready in the cluster.

The new node has been added

And with that, all the pending pods get scheduled as well.

Voila! We have successfully deployed as well as scaled Kubernetes on GCE using Terraform. Happy automation!

Searce Engineering

We identify better ways of doing things!
