From Monolith to Kubernetes Architecture — Part IV — GKE / GCP

Ori Berkovitch
6 min read · May 23, 2022

--

Photo by Kevin Folk on Unsplash

This is part of a multipart series, Going From a Monolith App to Kubernetes.

Part I — Containerize

Part II — Dockerfile

Part III — Minikube

Part IV — GKE / GCP

In this part, we’ll elaborate on the effort of rolling an app into a Kubernetes service; specifically, we’ll deploy our app to Google’s GKE environment.

Google GKE

Google offers a pre-enabled Kubernetes environment where you can deploy and run your applications online. I always recommend adopting a technology in its “native” or “natural” environment, and Google, being the origin of Kubernetes, was a natural place to go for a Kubernetes deployment. If you are following this series, you know that we started our journey from a monolith app. We actually migrated from AWS to GCP in the process, and we made a conscious decision to convert to Kubernetes while doing so, knowing that GCP was the best (and lowest-risk) place to perform such a migration.

GKE — Two Main Options

When creating a Kubernetes cluster, you are offered two options: a “GKE Standard” cluster, where you manage the node pools yourself, or a fully managed mode called “GKE Autopilot”, where Google manages the nodes for you. In our case, we decided to go with Autopilot, since we trust Google to know better than us how to manage the future pods, and it reduces DevOps expenses and maintenance.

GKE Create Cluster Options
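
The screenshot above shows the console flow. For reference, an Autopilot cluster can also be created from the command line, roughly like this (the cluster name and region match the script later in this article; extra flags, e.g. for a private cluster, are omitted):

# create a fully managed Autopilot cluster
gcloud container clusters create-auto autopilot-cluster-name --region europe-west3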

The GKE Control Plane

The way you communicate with Kubernetes is through the control plane. The control plane is created for you automatically, and all you have to do is tell it things like “create a service”, “create more pods of that thing”, “create a storage volume”, and so on. Once you trust your control plane to do what you expect it to do, you will find that GKE is a great companion for the journey, and that the engineering process is eased by the service you’re getting.

To command the GKE cluster (or any other Kubernetes cluster, for that matter), you simply use the command line tool “kubectl”, which in our case is enabled by the “gcloud” tool.

Google’s connection from your dev machine to the cloud is VERY robust and organized. You can feel immediately that this was thought through ahead of time when Google started GCP. Using gcloud, your dev machine becomes “cloud enabled”, communication with the cloud services is almost seamless, and, among other things, “kubectl” starts to work.
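
Once gcloud is in place (see the link below on installing it), kubectl can be added as a gcloud component and verified from the same shell. A minimal sketch:

# install kubectl as a gcloud component
gcloud components install kubectl
# verify the kubectl client is available
kubectl version --client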

More on installing the gcloud tool (do this later, keep on reading for now)

GCP Projects

Google’s cloud services are divided into projects. When you start the process with GCP, I would recommend going with a single project, just to keep things simple. All the resources you create, and the expenses they incur, will be attached to this virtual placeholder called a “project”.

When you want to work with your GCP cloud account from your dev machine, you will need to “wire” your dev machine to that account. This is done with the gcloud tool, and the most basic command essentially says: “please connect my computer to the GCP account on project xyz”.
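
In gcloud terms, that “wiring” boils down to two commands, sketched here with the placeholder project name used throughout this article:

# authenticate this machine against your Google account (opens a browser)
gcloud auth login
# set the default project for subsequent gcloud and kubectl commands
gcloud config set project gcp-project-name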

When the above gcloud commands are executed correctly, any command you give to the cloud (or to your GKE cluster) will be in the context of the designated project.

Since connecting from the dev machine to the GCP account is such a common task, we’ve created a short shell script that we execute in every shell from which we want to communicate with the cloud.

Seamless connection to the cloud

In order to connect to the cloud, we’ve used a method called a “bastion host”, which basically means creating a small “gateway” instance and having it act as the doorway from the dev machine to the cloud. For more details on setting up a bastion, see: https://cloud.google.com/solutions/connecting-securely
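
As a rough idea of what that gateway instance looks like (a sketch only; the linked guide is the authoritative reference, and the machine type and proxy details below are assumptions that merely match our script further down):

# create a small private VM (no external IP) to act as the bastion
gcloud compute instances create production-bastion \
  --zone=europe-west3-a \
  --machine-type=e2-micro \
  --no-address
# on the bastion itself, a forwarding proxy (e.g. Tinyproxy listening on port 8888)
# is installed, so kubectl traffic from the dev machine can be tunneled through it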

The Bastion Host Architecture

Remotely access a private cluster using a bastion host

With a GKE cluster and a bastion machine in place, we are ready to connect to the cluster.

There is actually an article with that exact title, “Remotely access a private cluster using a bastion host”, where you can see the details and requirements for creating the connection. In our case, we’ve created a shell script to ease the burden of typing the connection commands every time.

Here’s our script:

#!/bin/sh
# kill the last session (if exists)
lsof -PiTCP -sTCP:LISTEN | grep 8888 | awk '{ print $2 }' | xargs kill
# clear the previous proxy setting, if set
unset HTTPS_PROXY
# login with gcloud (this opens a browser login)
gcloud auth login
# connect this dev machine with the cloud
gcloud beta compute ssh production-bastion --tunnel-through-iap --project=gcp-project-name --zone=europe-west3-a -- -4 -L8888:localhost:8888 -N -q -f
# hack: brief pause to let the backgrounded SSH tunnel come up
sleep 1
# connect to the autopilot cluster
gcloud container clusters get-credentials autopilot-cluster-name --region europe-west3 --internal-ip
# hack: brief pause before setting up the proxy
sleep 1
# create a proxy
export HTTPS_PROXY=localhost:8888
# hack: brief pause before the first kubectl call
sleep 1
# example usage
kubectl get ns

Sometimes we find that, AFTER the script runs, we still can’t connect to the cluster. We’re not sure of the reason, but running the “unset” and “export” commands again makes the shell work exactly as expected.

Manually run those, one by one:

unset HTTPS_PROXY
export HTTPS_PROXY=localhost:8888

Interacting with GKE

From the point where the dev machine is connected to the project and the GKE cluster, control is performed with simple kubectl commands. You create “Deployment”, “ConfigMap”, “Secret”, and “Service” YAML files, just as you would for any other Kubernetes cluster, and the workloads are created on GCP GKE for you.
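
As an illustration, here is a minimal (and deliberately simplified) Deployment applied straight from the shell; the name, image, and port are placeholders rather than our actual workload:

# apply a minimal Deployment directly from the shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: gcr.io/gcp-project-name/example-app:latest
        ports:
        - containerPort: 8080
EOF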

GKE Console

Notice how, in the interface, you see both “Clusters” and “Workloads”. While maintaining a SINGLE cluster, you can have multiple services. Each Deployment (and similar objects) is referred to as a “Workload”, and that’s where you’ll actually see the parts of your system running.

The Workloads and Clusters Menu Items

Container Repository

When creating the Kubernetes services, you’ll be required to specify the Docker image for the part of the system you’re trying to deploy. Luckily, GCP offers a Docker image registry on the gcr.io domain (Container Registry), and the build service that pushes images there is available as “Cloud Build” on GCP.
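
For example, a one-off image can be built and pushed to gcr.io straight from the project directory containing the Dockerfile (the image name here is a placeholder):

# build with Cloud Build and push the resulting image to gcr.io
gcloud builds submit --tag gcr.io/gcp-project-name/example-app:latest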

In our case, we maintain the code on GitHub, so we’ve created an integration between the git repository and Cloud Build. When the dev team commits to a designated branch, Google starts the build process for us, and the image is ready to roll out on GKE. This is a basic CI/CD process.

Github integration with Google GCP Cloud Build

For more details see: Building repositories from GitHub
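
For those who prefer the command line, the trigger itself can be sketched roughly like this (repo owner, repo name, branch, and build config are placeholders, and exact flags may vary between gcloud versions):

# create a Cloud Build trigger that fires on pushes to the main branch
gcloud builds triggers create github \
  --repo-owner=my-github-org \
  --repo-name=my-repo \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml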

The Fun Begins 🥳

At the end of the process, we were delighted to have a running Kubernetes cluster where, in fact, the boundaries on deployments and on spawning microservices are removed. When we needed more compute power to process large ETLs, we simply updated the “replicas” parameter in the Kubernetes deployment file, and we were good to go. Need a new service based on an existing Docker image? No problem: just create another “Service/Deployment” YAML and send it over to GKE.
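
Scaling, for instance, becomes a one-liner once the Deployment exists (the deployment name and replica count are illustrative):

# scale an existing Deployment up for a heavy ETL run
kubectl scale deployment example-app --replicas=10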

It was a true relief and a real enabler of possibilities, not to mention the consolidation of services into a single, secured cloud platform, namely GCP.

By the way, not only have we accomplished the microservices architecture, we’ve also gained first-class access to Google’s entire cloud infrastructure and services, which opened up a whole new world of possibilities for implementing and augmenting our core business services.

One migration journey ended, and now another journey begins.
