Suvodeep Pyne
Feb 13 · 7 min read

Do you want to move your app to Google Kubernetes Engine? Then read on.

Treat this like a getting-started guide. It will not go into details or concepts, but it will help you get started on GKE quickly.

The general outline of the steps is as follows:

  • Prepare your Docker image. Guess what? You should be able to run the same App Engine Docker image on GKE without any problems (if you use the same service account). However, you might want to consider a cleaner option.
  • Prepare and set up your Kubernetes cluster. Google sets up Kubernetes and all of its services on the default node pool.
  • Create a Kubernetes Deployment. In short, a deployment is a construct for a stateless application. For something that requires state, for example a MySQL instance, you would use a StatefulSet.
  • Expose your deployment with a service. A “LoadBalancer” service gives you plain HTTP access. HTTPS requires a little more work, which I will discuss later.
  • Set up metrics and monitoring using Stackdriver. This is key! One of the things I was worried about was that GAE shows a ton of good charts which are not available in GKE directly. But you can do all of that, probably better, with Stackdriver.

Google already has documentation on this exact topic: https://cloud.google.com/appengine/docs/flexible/python/run-flex-app-on-kubernetes. I recommend taking a look at that first. There are some nuances I would like to highlight here.

Anyway, let’s get started!

Preparing your Docker image

Assuming you are an App Engine flex user, you should already have a Docker container. The catch is that GKE will not read or use your app.yaml file. Thus, any configuration, settings, or environment variables that you have set up need to be moved into your Dockerfile.

One important thing to note is that if you are using other Google services, Google Cloud SQL for example, you may need to export a credentials file for them. You can create one by following this doc.

# Credentials for GCloud Access
ENV GOOGLE_APPLICATION_CREDENTIALS=/path/to/json/key
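A minimal sketch of the relevant Dockerfile lines, assuming your key file is called gcp-key.json and sits next to the Dockerfile (both the filename and the /etc/gcp path are examples, not requirements):

```dockerfile
# Bake the service account key into the image and point the
# Google client libraries at it via the standard env variable.
COPY gcp-key.json /etc/gcp/gcp-key.json
ENV GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/gcp-key.json
```

Baking the key into the image is the quickest route; a cleaner alternative is to mount it into the pod from a Kubernetes Secret so the image itself stays credential-free.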

You can run the commands below to build your Docker image and push it to Google Container Registry (GCR):

cd ~/path/to/my-awesome-project/publish-dir-which-has-Dockerfile
gcloud builds submit --tag "gcr.io/my-awesome-project/0.1.0"

Creating your Kubernetes Cluster

I am a fan of the gcloud command line and will be using it for the most part. You will need to install the Google Cloud SDK and the Kubernetes CLI (kubectl) to follow along. If you have the gcloud SDK, you can install kubectl with:

gcloud components install kubectl

A Kubernetes cluster can contain multiple node pools and can host multiple services. It is initialized with a ‘default’ node pool.

NOTE! You can’t change the machine type once the cluster is created. Neither can you change the boot disk size, image, service account, etc.

Also, if you are creating a single-node cluster, note that a bunch of Kubernetes system services (kube-proxy, the logging agent, etc.) will live on that node and consume up to a GB of RAM, so please allocate accordingly.

gcloud beta container \
--project "my-awesome-project" clusters create "kc1" \
--zone "asia-south1-c" \
--username "admin" \
--cluster-version "1.10.11-gke.1" \
--machine-type "n1-standard-1" \
--image-type "COS" \
--disk-type "pd-standard" \
--disk-size "30" \
--num-nodes "1" \
--enable-cloud-logging \
--enable-cloud-monitoring \
--no-enable-ip-alias \
--network "projects/my-awesome-project/global/networks/default" \
--subnetwork "projects/my-awesome-project/regions/asia-south1/subnetworks/default" \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--no-enable-autoupgrade \
--enable-autorepair \
--maintenance-window "21:30"

This will set up your new Kubernetes cluster ‘kc1’ with a single node in the standard configuration. If you want to change that later, go through this doc: https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool

Before we move on, run this command so that kubectl can control your new cluster from the terminal:

gcloud container clusters get-credentials kc1

Creating your Kubernetes Deployment

Once you have uploaded your Docker image to GCR, you can create a Kubernetes Deployment in a single command.

kubectl run my-deployment \
--image=gcr.io/my-awesome-project/0.1.0 \
--port=8080

This will run your application. However, there are a couple of configuration entries that are crucial to getting this right. You can edit your deployment using this command:

kubectl edit deployment my-deployment

This opens the deployment yaml in vi. Look out for the entries rollingUpdate and minReadySeconds. The settings are applied as soon as you save.

spec:
  ...
  # A new pod must stay Ready for this many seconds before it
  # counts as available during a rollout
  minReadySeconds: 10
  ...
  strategy:
    rollingUpdate:
      maxSurge: 1
      # If you have just 1 instance to start, you need to set
      # this to zero to get a zero-downtime rolling upgrade
      maxUnavailable: 0
    type: RollingUpdate

As you build newer versions of your app image, you can update your deployment with the command below. This will be a zero-downtime rolling update if the deployment config is right.

kubectl set image deployment/my-deployment my-deployment=gcr.io/my-awesome-project/0.2.0
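If you prefer keeping this configuration in version control rather than editing it live, the same deployment can be written declaratively and applied with kubectl apply -f my-deployment.yaml. This is a sketch, not the exact output of the commands above; the label app: my-app and the file name are examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0    # zero downtime even with a single replica
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-deployment
        image: gcr.io/my-awesome-project/0.1.0
        ports:
        - containerPort: 8080
```

With this in place, shipping a new version is just a matter of bumping the image tag in the file and re-running kubectl apply.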

Create a service

In order for your app to be visible to the outside world, you need to expose your app as a service.

You can use type ‘LoadBalancer’ to expose your service over HTTP, as mentioned in the Google doc. Here I am going to outline the steps for HTTPS.

One way to achieve this is by creating a NodePort service. This blog is a good discussion on the available options. I am following the route described below.

Deployment > Service > Ingress > GCloud HTTP(S) Load Balancer
  • Expose the deployment via a service
  • Create an ingress to the service
  • Add an HTTPS frontend using GCloud HTTP(S) Load Balancer

To do this, first create a NodePort service:

kubectl expose deployment my-deployment --target-port=8080 --type=NodePort
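The equivalent service can also be declared in yaml. Note that kubectl run labels the pods it creates with run: my-deployment, which is what the selector below assumes; adjust it if your pods carry different labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-deployment
spec:
  type: NodePort
  selector:
    run: my-deployment
  ports:
  - port: 8080
    targetPort: 8080
```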

Now create an ingress.

kubectl create -f gke/my-ingress.yaml

Here is a sample my-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-deployment
    servicePort: 8080

And that’s it! This will automatically create an HTTP(S) load balancer with a default HTTP frontend. You can view this on the web:

Google Cloud Console > Network Services > Load Balancing > Ingress

But wait, what about HTTPS? You can add multiple frontends (IP and port) using the web interface (I haven’t done this via the CLI).

Open your ingress and choose edit. You should see a screen like this

Load Balancer edit screen on Google Cloud Console

You should be able to add a new frontend and choose HTTPS as shown.

Upload your SSL cert. If you do not have one you can create one from the UI.

Don’t forget to change ephemeral to static IP and update DNS settings!
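If you would rather keep this in configuration, both pieces can also be wired up through the ingress itself: reserve a global static IP with gcloud compute addresses create my-static-ip --global, put your cert into a TLS secret with kubectl create secret tls my-tls-secret --cert=cert.pem --key=key.pem, and reference both from the ingress. The names my-static-ip and my-tls-secret (and the cert file paths) are examples:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Must match the name of a reserved global static IP
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  tls:
  # Secret holding the SSL cert and private key
  - secretName: my-tls-secret
  backend:
    serviceName: my-deployment
    servicePort: 8080
```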

Note:

  • The new cert takes close to 30 minutes, if not more, to get deployed.
  • Even when the UI shows a bright green tick, the service may still show as unavailable.
  • Your browser caches SSL sessions, so try a different browser, or close and reopen yours, before checking.

Monitoring and Metrics

Voila! Your service is finally running and available over HTTPS! But wait. How do you know how it’s doing?

Well, for starters, you can always monitor traffic from the load balancer’s monitoring console.

Load Balancer Monitoring tab in Google Cloud Console > Network Services

However, traffic stats alone aren’t enough for any application. That’s where Stackdriver comes in.

In the Stackdriver console:

  • Start by creating a dashboard.
  • Add charts by selecting a metric. For example, start with the resource type Google Cloud HTTP(S) Load Balancer and the metric Backend request count.
  • This shows you all the requests and their response codes. You can group across dimensions like country to get exactly the information you need.
Stackdriver interface to edit metrics

The following metrics have helped me quite a lot in understanding the health of the app:

  • External Request Count from GKE LB
  • Backend Request Count from GKE LB (there are cases where these two may emit different response codes)
  • Total end-to-end latency
  • Backend latency
  • Container metrics: CPU, Memory, Disk, Network
  • Pod metrics: Memory, CPU, network per pod

At the end you may get something like this. :)

Stackdriver Dashboard Sample

And now you can finally sit, watch and relax with a cup of coffee!

To conclude: setting up Kubernetes can be a daunting task initially. But once you get the hang of it, it is a fantastic system to work with. I know I haven’t explained why I moved to GKE, but that is probably a story for another day.

If you need a quick response, send a tweet to @suvodeeppyne, else feel free to leave comments below.

Google Cloud Platform - Community

A collection of technical articles published or curated by Google Cloud Platform Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.
