Host your application on Google Kubernetes Engine

As a developer, it can feel daunting to host your creation on anything other than a Platform as a Service (PaaS) such as Heroku.

In this article I will take you through the process of deploying an application to an Infrastructure as a Service offering, namely Google Kubernetes Engine (GKE), and I hope it will remedy your fear of “lower-level” hosting services.


We will take the following steps to host our application:

  1. Kubernetes cluster creation
  2. Dockerizing the application
  3. Set up Kubernetes configuration
  4. Nginx installation and configuration
  5. DNS configuration
  6. Provisioning Secure Socket Layer (SSL) certificates

The Application

The application has a frontend built with React, a backend API built with Python Flask, and PostgreSQL as the database.

React, Python Flask and PostgreSQL
This is basically an implementation of the 3-tier architecture: presentation layer (frontend), logic layer (API) and data layer (database), all separate and independent.
The application is a “bucketlist” application where a user can record their bucketlist items along with action items on how they will achieve them.

The first step to hosting this application on GKE is to dockerize it. That is, dockerizing each of the 3 tiers of the application. Clone this repository to follow along.


The tools we shall use in this article include the following:

  1. kubectl
  2. gcloud
  3. docker
  4. google cloud project

Follow the on-screen prompts to get set up with the above tools. This may involve logging into your Google Cloud account.

If you don’t already have them set up, go on and set them up through those links. If you already have them, let’s march on.

Kubernetes cluster creation

Let’s start off by creating a Kubernetes cluster and, while at it, configure all the required credentials to interact with the newly created cluster.

There are several ways to create a Kubernetes cluster; these include using the Google Cloud console, the command line (gcloud) or the REST API.

For this we shall use the command line (gcloud). This allows us to work from our terminal, which, as a developer, you are already used to working with. Go ahead and set these environment variables first.

I have put together all these environment variables in a file; on your terminal, you can run “source” on it to populate all of them at once.
  1. $PROJECT: this is the Google Cloud project ID, the same project you created or want to use for this article.
  2. $CLUSTER: this is the name of the cluster you want to create.
  3. $ZONE: this is the zone in which the cluster will be created.
  4. $GCR_REGISTRY: this is the GCR registry where your container images will reside.
  5. $FRONTEND_DOCKER_IMAGE_NAME: this is the image name you wish your frontend container image to have.
  6. $DATABASE_DOCKER_IMAGE_NAME: this is the image name you wish your database container image to have.
  7. $BACKEND_API_DOCKER_IMAGE_NAME: this is the image name you wish the api container image to have.
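The env file itself is only referenced, not shown; a minimal sketch of what it could contain looks like the following, where every value is a hypothetical placeholder you would swap for your own:

```shell
# Hypothetical values; replace each one with your own project's details.
export PROJECT="my-gcp-project"            # Google Cloud project ID
export CLUSTER="bucketlist-cluster"        # name for the new GKE cluster
export ZONE="europe-west1-b"               # zone the cluster is created in
export GCR_REGISTRY="gcr.io"               # GCR registry host
export FRONTEND_DOCKER_IMAGE_NAME="bucketlist-frontend"
export BACKEND_API_DOCKER_IMAGE_NAME="bucketlist-api"
export DATABASE_DOCKER_IMAGE_NAME="bucketlist-database"
```

Saving this as, say, gke.env and running “source gke.env” populates all seven variables in the current shell session.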

Before we create the cluster, there’s one more piece of the puzzle we don’t have yet: a service account. Head over here, create a service account and make sure to assign it these roles:

  1. roles/storage.admin as stated here
  2. roles/project.owner

After all those environment variables are set and the service account created with its roles assigned, you can run this command:

gcloud beta container --project "$PROJECT" clusters create "$CLUSTER" --zone "$ZONE"

This command will take a few minutes to complete. On completion, it will have also populated the credentials file as needed for you to interact with the newly created kubernetes cluster.

Now that we have a cluster, let the kubernetes games begin.

Dockerizing the application


I have set up the application in a way that each tier has its own folder. This makes it easier to organise each tier’s Dockerfile.

On your terminal, change directory into the frontend folder. Since this application is under version control, make sure you checkout the branch you want to build the docker image for.

Just to stay organized across the 3 tiers, I put all Docker-related files in the docker/ folder. So, while in the root of the frontend tier’s folder, run this command:

docker build -t $GCR_REGISTRY/$PROJECT/$FRONTEND_DOCKER_IMAGE_NAME:v1 -f docker/Dockerfile .

This will take a few minutes to run while it builds and configures the docker image for the frontend tier. When the build is done, go ahead and build the API and database docker images as well.

Still in your terminal, change directory into the api folder and run this command to build the backend API docker image.

docker build -t $GCR_REGISTRY/$PROJECT/$BACKEND_API_DOCKER_IMAGE_NAME:v1 -f docker/Dockerfile .

And finally let’s build for the database tier. On your terminal change directory into the database folder and run:

docker build -t $GCR_REGISTRY/$PROJECT/$DATABASE_DOCKER_IMAGE_NAME:v1 -f docker/Dockerfile .

With the above commands completing successfully, we shall have 3 Docker images (frontend, API and database) locally on your desktop/laptop. Go ahead and edit the deployments.yaml file in the k8s-configuration folder with the names of the newly created Docker images.

For Google Kubernetes Engine to use these images, we have to make them accessible to it. One way of doing that is by pushing them to the Google Container Registry.

Google Container Registry
This is Google Cloud Platform’s version of Docker Hub: Google’s own container image registry.

In your terminal, run:

gcloud auth configure-docker

This will configure Docker to authenticate with Google Cloud’s official container image registries, so that pushing an image to them works the same way as pushing to Docker Hub.

Let's now push all 3 images to Google Container Registry. In keeping with the build commands above, run:

docker push $GCR_REGISTRY/$PROJECT/$FRONTEND_DOCKER_IMAGE_NAME:v1 && docker push $GCR_REGISTRY/$PROJECT/$BACKEND_API_DOCKER_IMAGE_NAME:v1 && docker push $GCR_REGISTRY/$PROJECT/$DATABASE_DOCKER_IMAGE_NAME:v1

With that complete, let’s now address the elephant in the room: deploying the application to Google Kubernetes Engine.


Kubernetes configuration folder structure

I put all the Kubernetes configuration files in the k8s-configuration folder. In here we have the deployments, ingress, secrets and services configurations bundled in similarly named yaml files.

  1. deployments.yaml. A deployment is simply a “workload” that holds your application’s pod (which contains a container) configuration. This configuration file contains the container image, ports, labels, replica count, etc.
  2. services.yaml. In Kubernetes, services are the “glue” between the different “workloads”. They provide connectivity between containers in the Kubernetes ecosystem. By design, a container cannot “talk” to another container unless they are in the same pod. Essentially, the frontend and API will never “talk” to each other directly because, in this setup, they are not in the same pod. That is why we have to use services.
  3. ingress.yaml. In Kubernetes, an ingress is a set of rules that direct external internet traffic to services in a Kubernetes cluster. This is helpful if you have many services that need to expose deployments to the internet. With ingress, you use only one HTTP load balancer and it handles traffic to all those deployments that need the internet, unlike using a separate load balancer for each, which would add to your cloud computing costs.
  4. secrets.yaml. For our application, the secret we need is an SSL certificate to facilitate SSL termination. This will allow our application to be accessed securely through HTTPS.
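To make this concrete, here is a rough sketch of what one tier’s deployment and its matching service could look like; the names, labels, image path and ports below are illustrative placeholders rather than the repository’s actual values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2                      # how many pods to run for this tier
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend              # the label the service below selects on
    spec:
      containers:
        - name: frontend
          image: gcr.io/my-gcp-project/bucketlist-frontend:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend                  # routes traffic to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
```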

Let’s now go ahead and turn the contents of the kubernetes configuration files into actual cloud computing resources.

Google’s best practice is to create services before deployments, so we shall apply the services.yaml configuration file first. Head over to the k8s-configuration folder and start turning code into actual computing resources.

On your terminal, while in the k8s-configuration folder, run:

kubectl create -f services.yaml

Output should look like this:

iterm output
This command will create every resource declared in the given services.yaml file.
This is the declarative way of creating Kubernetes resources: declare all of them in a yaml file, pass that file using the -f flag to the kubectl create command, and all the magic happens thereafter.

Do the same for the ingress.yaml and deployments.yaml files. (We shall hold off on secrets.yaml until we have populated it with an SSL certificate later on.) Run:

kubectl create -f ingress.yaml && kubectl create -f deployments.yaml

With that command completing successfully, head over to the GCP console under Kubernetes Engine > Workloads, and there you will find the resources you just created.

Google Kubernetes Engine dashboard

If you don’t see anything, make sure you have switched to the correct account you set up for this project (top right corner) and that you have selected the correct project (top left corner) as shown below.

google cloud platform console dashboard

Take a look at the workloads section; you should have the frontend, API and database up and running with green tick indicators.

GKE workloads dashboard

Head over to the services section too and make sure all services are up and healthy with green tick indicators.

GKE services dashboard

At this point everything is good to go, except for one thing: an ingress controller, without which we cannot access any part of our application on Google Cloud through the browser.

An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer. Creating an ingress resource alone will not work without an ingress controller. Thankfully for us, thanks to helm, we do not have to get into the intricate details of setting up an ingress controller from scratch.

Ingress controller configuration

To get an Ingress controller up and running in our cluster, we are going to use helm.

Helm is to Kubernetes what Homebrew is to macOS, apt-get is to Ubuntu and pip is to Python. It is a package manager for Kubernetes.

Helm allows us to install dependencies and whole applications that have been bundled together into a package with just one command. Let’s start off by installing and configuring helm.

In your terminal, run:

curl -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get && chmod +x get_helm.sh && ./get_helm.sh && helm init

The command above fetches the latest version of the helm client and installs it on your local machine (laptop/desktop).

For helm to work with our cluster, it needs a server-side component installed in the cluster that helm will “talk” to in order to manage and create all the resources we shall require; in this case, an Ingress controller.

This component is called Tiller. To install tiller in a Google Kubernetes cluster, we need to have a service account and cluster role binding configured for tiller. This will allow tiller to be able to install kubernetes resources inside our cluster.

In your terminal, run the following commands to install and configure tiller as required:

kubectl create serviceaccount --namespace kube-system tiller && kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller && kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' && helm init --service-account tiller --upgrade

To confirm the tiller installation, run:

kubectl get deployments -n kube-system | grep tiller-deploy

This will output your newly created tiller-deploy deployment.

We now have helm and tiller installed and configured on both our local workstation and the remote cluster. We can now go ahead and install an Ingress controller into our cluster.

nginx ingress for GKE

There is a variety of Ingress controllers out there; for this, we shall use the nginx Ingress controller, as declared in our ingress.yaml configuration file. In your terminal run:

helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true

This will install the nginx Ingress controller. To confirm our installation, run:

kubectl get service nginx-ingress-controller

This will give you an output of the nginx controller service of type LoadBalancer with an external IP.

This external IP takes a little while to get created, so if you see "pending", give it a few seconds/minutes for google to provision it for you.

At this point, you can access the frontend through the external IP of the nginx ingress controller service. But just like salt-bae, why be ordinary 😎. Let’s hook that IP up with a domain name so that we can access the application through a memorable address instead of a raw IP.

If you don’t have one, you can buy one from the likes of GoDaddy, Hover, Google Domains, etc. Let’s now configure our domain.

DNS configuration


I have used my own domain name, which I will take down a week after this demo article goes live. Head over to your domain dashboard and set up subdomains for the frontend, api and database.

The subdomains I created look like so:
hover domain dashboard

In your terminal, run:

kubectl get services | grep LoadBalancer | awk '{print $4}'

Copy the output of that command; this is the external IP address through which our application is exposed to the internet.

In your domain dashboard, fill in this external IP address as the value for the 3 record sets we have created.

If you check our ingress.yaml file, you will notice we instructed our ingress controller, through the ingress resource, to force-redirect any request to HTTPS using the annotation nginx.ingress.kubernetes.io/ssl-redirect: "true"

This will allow only secure connections through to our application.
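Putting that together, an ingress resource wired to the nginx controller with a force-HTTPS annotation could be sketched as below; the hostname, service name and secret name are placeholders, and the repository’s actual ingress.yaml may differ:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bucketlist-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"              # use the nginx ingress controller
    nginx.ingress.kubernetes.io/ssl-redirect: "true"  # force-redirect HTTP to HTTPS
spec:
  tls:
    - hosts:
        - app.your-domain.com
      secretName: tls-secret       # the SSL secret created from secrets.yaml
  rules:
    - host: app.your-domain.com
      http:
        paths:
          - backend:
              serviceName: frontend-service
              servicePort: 80
```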

Allowing only secure connections to your application is a best practice especially if it works with user data. With that said, we need an SSL certificate to allow end-to-end encryption between our application and the user.


We shall use Let’s Encrypt since it offers free SSL certificates valid for 3 months. You can use any certificate you already have, whether paid or free.

Provisioning the SSL cert

To acquire the SSL certificates from Let’s Encrypt, we shall use a tool called certbot. Let’s install letsencrypt and certbot.

Change directory into the k8s-configuration folder. If using Ubuntu Linux, run:

apt-get install letsencrypt

If using macOS, run:

brew install certbot && pip install letsencrypt --user
Go here to find your OS installation steps.

Since we have 3 subdomains associated with the one domain we have, we shall create a wildcard SSL certificate. This will cater for all subdomains of the domain.

In your terminal, run:

certbot --manual --logs-dir certbot --config-dir certbot --work-dir certbot --preferred-challenges dns certonly --server https://acme-v02.api.letsencrypt.org/directory --email your-email-address -d "*.your-domain.com"
Replace “your-email-address” and “your-domain.com” in the above command with your own email address and domain name.

Follow the on-screen prompts to provision your domain’s wildcard SSL certificate. When you reach the prompt that says:

Please deploy a DNS TXT record under the name _acme-challenge.your-domain.com with the following value: <value here>

Head over to your domain dashboard, create a “TXT” record named _acme-challenge under your domain, and fill in the value given on your terminal by certbot.

NOTE: “your-domain.com” should be your own domain name
SSL TXT acme challenge
Give it a minute or two after creating that record

Go back to the terminal and press enter on your keyboard to continue with the Certificate provisioning process.

The reason you may have to wait is that DNS records sometimes take longer to propagate, in which case certbot will fail to provision the certificate and you will have to repeat the whole process afresh.

When all is done, certbot will have created a “certbot” folder in your current working directory, and the on-screen output will direct you to where your SSL certificate is, along with the 2 important files we need: the cert.pem and privkey.pem files. While in the directory with the *.pem files, run:

cat cert.pem | base64 | pbcopy

This encodes cert.pem to base64 and copies the cert.pem encoded result output to the clipboard.
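The base64 step is a reversible encoding, which is handy for sanity-checking what you pasted. A quick demonstration with a sample string, independent of the actual certificate files:

```shell
# Encode a sample string the same way the cert files are encoded,
# then decode it back to confirm the round trip is lossless.
encoded=$(printf 'hello' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$encoded"   # aGVsbG8=
echo "$decoded"   # hello
```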

Head over to the secrets.yaml file and paste the copied result in as the value of tls.crt.

Then run:

cat privkey.pem | base64 | pbcopy

to encode privkey.pem, and paste the result as the value of tls.key in the secrets.yaml file.

The commands “cat cert.pem | base64 | pbcopy” and “cat privkey.pem | base64 | pbcopy” base64-encode the cert.pem and privkey.pem files and copy the result to the clipboard at the same time. So you will see no output on your terminal, but if you press ctrl + v or cmd + v in a text editor, you will see the encoded result.
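For reference, the populated secrets.yaml should end up shaped roughly like this TLS secret; the secret name here is a placeholder and the data fields are where the base64 output gets pasted:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret                 # referenced by the ingress for SSL termination
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded contents of cert.pem>
  tls.key: <base64-encoded contents of privkey.pem>
```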

With the secrets.yaml populated as above, run:

kubectl create -f secrets.yaml

This will create a Kubernetes resource of type Secret, which our ingress resource will use for SSL termination, a.k.a. secure browsing over HTTPS.


We can now spin up our web browsers and head over to our domain to check out the application.

Notice how it auto redirects to HTTPS? That’s the magic that comes with using the nginx ingress controller. 💪

And there you have it. We finally hosted a 3-tier application on Google Kubernetes Engine.

Go ahead and delete the resources you just created if you don’t need them anymore, as Google Cloud will keep billing you for them even when they’re not in use.

To delete the cluster and all resources created earlier, run:

gcloud container clusters delete $CLUSTER

Special shoutout 👏 to John, who put together the whole application.

Catch up on docker commands that will simplify your docker workflow real quick from this article I put together.

Feel free to suggest edits or additions to the knowledge shared here. You can also reach out for a chat through my Twitter or LinkedIn.