Microservices on GKE with Ingress and CI/CD
Disclaimer: all my writings, opinions and thoughts are my own.
Introduction
This article belongs to a series of posts I am writing while learning GCP, both to prepare for Google Cloud certifications and to help people who want to learn.
As mentioned in my previous post, practicing is the best way to learn Google Cloud.
Each new article adds more complexity, eventually ending with a global, highly available and secure web application.
In my previous post, we went through the steps to deploy the CI/CD infrastructure on Google Cloud for our GKE workload, which was a very simple Apache httpd web server with a simple webpage.
Now, it’s time to add some more complexity to our code by adding microservices to our webapp, and we will use Node.js code samples!
Prerequisites
- Read my previous post
- Basic Kubernetes knowledge; if you need a refresher, go through the official k8s documentation here
- Basic Node.js knowledge; if needed, go through this article. In short, Node.js lets you run server-side JavaScript
Architecture Overview
The CI/CD infrastructure architecture remains the same as in my previous article, but the webapp architecture we now want to deploy is the following:
- Google Kubernetes Engine (GKE) lets us manage and orchestrate our containers (pods).
I explained it more in depth in my previous article.
We want to expose two microservices to our users; to do so, we will deploy one Kubernetes Service per microservice.
As in my previous article, we will use GKE in Autopilot mode to let GKE fully manage the cluster (scaling of nodes and pods, patches, etc.), so each service will always have the right number of nodes according to the load on our microservices.
- Regarding networking, we will use a NodePort configuration to expose each of our Services. If you want to know more about NodePort and k8s Service exposure, you can go through this article.
- The GKE Ingress will route traffic to each microservice according to the HTTP path. Ingress is a k8s object that manages external access to your k8s Services. There are multiple types of GKE Ingress you can create, as explained in the official documentation. In our case we will create an Ingress for External HTTP(S) Load Balancing: when you create this type of Ingress, GCP automatically creates an external HTTP load balancer with path rules directing traffic to each of your microservices.
- Cloud Load Balancing (CLB) is the Google Cloud managed HTTP load balancing service. It is a globally distributed managed service reachable through a single virtual IP address (VIP), not a single instance or appliance of a software load balancer. In GCP, when you want to expose your workload to the public Internet, you use external load balancing. In our case, users will send HTTP requests to our GKE workload by hitting the public VIP address.
Monolithic vs microservices
Monolithic = one UI + one backend = the old way
You code your webapp backend in a single project, basically one folder where all of your code lives, then you build it and deploy it on a single server. Of course, you can horizontally scale your deployment by adding more servers as the load increases, but each added server contains a whole copy of your webapp.
Microservices = one UI and several backends = the modern way
You code your webapp backend as several separate projects. Each project groups a set of business-related functionalities: for example, one microservice handling payments for an e-commerce webapp.
You could be part of a company with several dev teams, where each team is responsible for coding one or more microservices.
Finally, you build and deploy each microservice into production, where it runs on its own servers or environments.
The diagram below comes from this article, which is worth reading if you are new to microservices:
Note that in both cases there is a single UI (frontend): if you need several frontends for your webapp, for example React + Angular + AngularJS, then microfrontends are a good fit. Microfrontends are not at all in the scope of this article, but in short:
- Microfrontends let your frontend teams work on different fronts without impacting other components (scalability, availability…),
- Microfrontends let you use different frameworks (if necessary),
- Microfrontends help with migrations: for example, migrating only a subset of microfrontends from AngularJS to Angular for testing, business, UX or technical reasons.
If you need more info on microfrontends I recommend this article.
Why microservices?
- In old, legacy application architectures, the code resides in a single repository or project containing all the application code, including every functionality of the webapp: this is a monolith. This can work well if you develop a webapp locally and work alone on a small project. However, if you work with multiple developers and teams, where each team or dev is responsible for only a part of the code, you do not need all the code in your project: you only push/commit the code of the microservice you are in charge of.
- Your monolithic webapp runs in production on servers, so you need to size your server type to handle the load of the entire codebase and all functionalities, with a single environment. With microservices, each microservice can have its own separate configuration and environment. This is also where containers come into the game.
- You can horizontally scale your servers to face the load by adding more copies of your webapp server and load balancing the traffic. However, you will pay for extra compute capacity even when the scaling is driven by a single functionality of your webapp. With microservices, you scale efficiently by scaling only the greedy microservice, optimising scaling costs.
- A security breach in a monolithic webapp or server affects the whole app, whereas with microservices the impact may be limited to a portion of your webapp.
- If your monolithic webapp server goes down, the whole app is down. With microservices, an incident does not necessarily affect all the microservices.
- Updating a monolithic webapp already running in production risks a global outage, whereas updating a microservices webapp risks an outage only on the updated microservice.
The code
It lives in three GitHub repositories:
For microservice 1 and 2, the code structure is basically the same as in my previous article:
GKE Ingress will create a Google Cloud load balancer for you, along with the associated health checks, which by default probe the URL path “/” for each microservice.
You need to customise each of these health checks because in our case the URL paths are “/microservice1” and “/microservice2”.
So make sure you have the following configuration in each deployment.yaml file, replacing “microservice1” with your own path:
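As a sketch, the relevant container section could look like the following; the image path, port and probe settings are illustrative assumptions to adapt to your own project:

```yaml
spec:
  containers:
  - name: microservice1
    # Illustrative Artifact Registry image path; replace PROJECT_ID with yours.
    image: us-central1-docker.pkg.dev/PROJECT_ID/microservices/microservice1:latest
    ports:
    - containerPort: 3000
    # GKE derives the load balancer health check from the readiness probe,
    # so point it at the path this microservice actually serves.
    readinessProbe:
      httpGet:
        path: /microservice1
        port: 3000
    livenessProbe:
      httpGet:
        path: /microservice1
        port: 3000
```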
For more info on liveness probes, you can read this article and the official documentation.
I also added a couple of files for our Node.js sample webapp:
- index.js and package.json: the code for our very simple Node.js webapp serving a simple webpage.
- The Dockerfile is different because we need to build our image and install our Node.js dependencies with npm.
Regarding the ingressgke repository we only have two files :
In the cloudbuild.yaml file, I instruct Google Cloud Build to execute the “kubectl apply” command:
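A sketch of such a cloudbuild.yaml is shown below; the region and cluster name are assumptions, so adjust them to your own cluster:

```yaml
steps:
# Official Cloud Build kubectl builder; it authenticates against the cluster
# identified by the two environment variables below before running kubectl.
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'ingress.yaml']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'   # assumed region
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-autopilot-cluster'  # assumed cluster name
```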
The previous command takes the ingress.yaml file as an argument, which declares a Kubernetes object of kind “Ingress” describing the GKE Ingress and the path rules we want in Google Kubernetes Engine:
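As a sketch, the ingress.yaml could look like the following; the Ingress name, Service names and Service port are assumptions to align with your own manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress  # illustrative name
spec:
  rules:
  - http:
      paths:
      - path: /microservice1
        pathType: Exact        # match the request path exactly
        backend:
          service:
            name: microservice1  # assumed NodePort Service name
            port:
              number: 80         # assumed Service port
      - path: /microservice2
        pathType: Exact
        backend:
          service:
            name: microservice2
            port:
              number: 80
```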
It is very straightforward, as you can read: in my case I chose to match the HTTP request path exactly.
However, your webapp may require a different pathType depending on the backend logic of your routes/endpoints: make sure to choose the right pathType for your webapp. You can find more info on the other possibilities in the official documentation here.
Hands-on!
- Create an Artifact Registry repository named “microservices” from the Cloud Shell terminal:
gcloud artifacts repositories \
create microservices \
--repository-format=docker \
--location=us-central1 \
--description="Docker repository"
- Create 3 GitHub repos named “microservice1”, “microservice2” and “ingressgke”
- Create a Cloud Build trigger for your GitHub repo named “microservice1” by going to the Cloud Build console and hitting “Create trigger” in the Triggers tab:
- Fill the trigger name and go to source and hit “connect new repository”
- Select your GitHub repo named “microservice1”
- Hit continue then you will be asked to install the Cloud Build App in your own GitHub repo and you just have to follow the steps displayed on your screen. It’s pretty straightforward.
- Then select your repo named “microservice1” from the dropdown list and select “main” branch and hit “create”
- Repeat the exact same steps to create Cloud Build triggers for the “microservice2” and “ingressgke” repositories
- We need to explicitly allow Cloud Build to execute kubectl commands: open Cloud Shell from the GCP console and add the “container.developer” role to your Cloud Build service account with the following command, making sure to replace the placeholders with your own project ID and project number:
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID_HERE --member=serviceAccount:YOUR_PROJECT_NUMBER_HERE@cloudbuild.gserviceaccount.com --role=roles/container.developer
- Clone the following code to your local laptop and push it to the 3 remote GitHub repos you previously created and connected to Cloud Build triggers:
git clone https://github.com/samy-fadel/microservice1.git
git clone https://github.com/samy-fadel/microservice2.git
git clone https://github.com/samy-fadel/ingressgke.git
- Go to the Cloud Build dashboard and wait for the three running builds to turn green
- If everything is green, go to the Cloud Deploy dashboard and wait for the deployments of “microservice1” and “microservice2” to finish. Note that “ingressgke” does not appear in Cloud Deploy, since it is just a kubectl command executed directly by Cloud Build.
- Now you can browse to the Google Kubernetes Engine console, open the “Services & Ingress” section, go to the “Ingress” tab and click each endpoint URL in the “Frontends” column:
Conclusion and next steps
That’s it! You now have a single global virtual IP serving your regional webapp, composed of two microservices reachable through two different URL paths.
Our webapp is now running in a single region across multiple Google Cloud zones with 2 microservices. You can add as many microservices as your webapp needs.
What if our users are located all around the world? In my next article, I will describe how to scale our webapp worldwide.
Join me on Linkedin here