Microservices & DevOps Experience in the Google Cloud Platform

Şeref Acet
Codable

--

Our company, OBSS, organized an internal coding competition, and our team, “shouse”, took advantage of cloud services for the solution. In this article, I’m going to share our microservices and DevOps experience on the Google Cloud Platform, walking through each requirement and the corresponding GCP service we used.

GCP stands for “Google Cloud Platform”

Source Code Repository

Managing and sharing a codebase is vital for team collaboration, so the first thing to decide on was our code repository. Our team consists of 5 senior developers, so the repository provider should give access to 5 users without any additional charge.

From a technical perspective, Git has become the standard for version control, and GitLab, GitHub, and Bitbucket Cloud are three well-known cloud hosting providers for Git.

  • GitLab doesn’t have a free plan for small teams, so we didn’t choose it.
  • GitHub provides unlimited private repositories for up to 3 users, but this is not adequate for our team.
  • Bitbucket Cloud offers unlimited private repositories free for up to 5 users. Besides that, OBSS is a platinum partner of Atlassian as well :) Joking aside, Bitbucket Cloud supplies whatever we need.

We created a code repository for each microservice. Every service is maintained by a single developer, so it doesn’t make sense to use an elaborate branching model such as GitFlow for the time being.

Bitbucket Cloud

We could have used Cloud Source Repositories without the Bitbucket Cloud integration; however, we decided on Google Cloud as our cloud platform only after we had started developing the services. Luckily, Google Cloud Source Repositories has native integration with GitHub and Bitbucket Cloud, so we didn’t need to migrate any code from Bitbucket Cloud to Google Cloud Source Repositories.

As seen in the screenshot below, we simply reference the Bitbucket repositories from Google Cloud Source Repositories.

Attached Bitbucket Repositories in the Google Cloud Source Repositories

User Authentication & Authorization

We created a Google Cloud Platform account and needed to give access to every team member. Basically, every team member needs sufficient access to build, test, package, deploy, and monitor their services on the Google Cloud Platform. For this reason, we created a Hackathon Developer role which contains the following roles:

  • Cloud Build Service Account
  • Kubernetes Engine Admin
  • Logging Admin
  • Storage Admin
Hackathon Developer Role

Subsequently, we created a separate IAM user for each team member and assigned the “Hackathon Developer” role to them.

IAM stands for Identity & Access Management

IAM Users with Hackathon Developer Role
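For reference, granting this kind of access from the CLI looks roughly like the sketch below. The project ID and the member e-mail are hypothetical placeholders, and the binding shown is just one of the predefined roles we bundled; it is not our exact setup.

```shell
# Sketch: granting one of the predefined roles behind our
# "Hackathon Developer" access to a team member via the gcloud CLI.
# "my-project" and the e-mail address are hypothetical placeholders.
gcloud projects add-iam-policy-binding my-project \
  --member="user:developer@example.com" \
  --role="roles/container.admin"   # Kubernetes Engine Admin
```

In our case we bundled the access into a single custom role from the IAM console, so each new member needs only one assignment.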

Continuous Integration

Continuous Integration is one of the fundamental practices from a DevOps perspective. As I said earlier, our team is composed of 5 senior developers, so we need to work in harmony. Every developer has to ensure that their changes don’t break any other service. We should run unit tests, integration tests, and security tests for every commit, and the tests should satisfy the test coverage threshold. In addition, we need to dockerize the services at the end of the flow. To meet all of these requirements, we use Google Cloud Build.

In Google Cloud Build, it’s pretty straightforward to attach build triggers to any Bitbucket/GitHub repository. We just need to add a trigger in the Build triggers section and attach it to the existing Git repositories.

All services share the same configuration, which triggers the continuous integration flow whenever a push happens on any branch.

Google Cloud Build Trigger

Cloud Build has built-in integration with Docker. You can specify your Docker build configuration while attaching a build trigger. When the build completes without any error, a Docker image is constructed from the Dockerfile. After that, you can see the built Docker images in the Container Registry.

To illustrate, suppose there is a commit on the master branch: the build trigger starts a Cloud Build task, which runs the corresponding CI flow. If the build finishes successfully, a Docker image is registered in the Google Container Registry automatically.

GCP Cloud Build

If you are executing a non-Docker build in Google Cloud Build, you have to provide a build configuration file (cloudbuild.yaml). This build config file contains instructions for Cloud Build to perform tasks based on your specifications. However, we didn’t need this option.
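Although we didn’t use one ourselves, a minimal cloudbuild.yaml for a Dockerized service might look like the sketch below; the builder images, service name, and test step are illustrative, not our actual configuration.

```shell
# Write a minimal, hypothetical cloudbuild.yaml; $PROJECT_ID and
# $SHORT_SHA are substituted by Cloud Build itself at run time.
cat > cloudbuild.yaml <<'EOF'
steps:
  # Run the tests before building the image (builder image is illustrative)
  - name: 'maven:3-jdk-8'
    entrypoint: 'mvn'
    args: ['test']
  # Build the Docker image from the repository's Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA', '.']
# Push the resulting image to Container Registry
images:
  - 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA'
EOF
```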

Container Registry & Continuous Delivery

Container Registry is a private repository for managing Docker images on the Google Cloud Platform. It lets us store the Docker images of our services. Moreover, Container Registry is capable of performing vulnerability analysis on Docker images.

You can deploy a particular Docker image to Google Kubernetes Engine with the Deploy To GKE action, as indicated by the screenshot below. In other words, Container Registry provides Continuous Delivery as well.

In our scenario, the Docker images of our services are generated by Cloud Build and pushed to the Container Registry. We just need to click the Deploy To GKE link to deploy our containers to Google Kubernetes Engine.

Container Registry
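The same one-click flow can also be reproduced from the command line; the cluster, zone, image, and deployment names below are placeholders, not our actual values.

```shell
# Sketch: deploying an image from Container Registry to GKE by hand,
# roughly what the Deploy To GKE link automates for us.
gcloud container clusters get-credentials shouse --zone europe-west1-b
kubectl create deployment my-service \
  --image=gcr.io/my-project/my-service:latest
```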

Microservices Platform

Microservices architecture requires container scheduling, storage orchestration, load balancing, service discovery, networking, centralized logging, configuration/secret/environment management, auto-scaling, and an API gateway. Most of these capabilities are provided by container technology; however, we need container orchestration software to manage the containers across multiple servers around the world. Kubernetes is the most popular platform for these requirements. We will use Google Kubernetes Engine, a managed Kubernetes service hosted on the Google Cloud Platform. As is well known, Google is the inventor of Kubernetes, so using Google Kubernetes Engine should be a good experience for us.

Kubernetes Cluster
First of all, you must define the following settings in order to create a Kubernetes cluster in Google Kubernetes Engine:

  • Cluster Name
  • Kubernetes Version
  • Number of Kubernetes Nodes
  • Instance type of the Kubernetes Node

There are some additional settings that enable further features; however, we only enable HTTP Load Balancing and Stackdriver Logging for our infrastructure.
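Creating a comparable cluster from the CLI might look like the sketch below. The zone, machine type (n1-standard-2: 2 vCPUs, 7.5 GB RAM per node), and flags are assumptions chosen to match the node sizes shown in the screenshot, not our recorded command.

```shell
# Sketch: a GKE cluster roughly equivalent to ours; zone, machine type,
# and the Stackdriver flag are assumptions mirroring the console settings.
gcloud container clusters create shouse \
  --zone europe-west1-b \
  --num-nodes 3 \
  --machine-type n1-standard-2 \
  --addons HttpLoadBalancing
```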

After you create the Kubernetes cluster, the corresponding virtual machines are started in the background.

Kubernetes Nodes

As seen above, we have a “shouse” Kubernetes cluster with 3 nodes, for a total of 6 vCPUs. Each node has 7.5 GB RAM, so the total RAM of the cluster is 22.5 GB.

From a deployment perspective, there are two types of Kubernetes clusters: zonal and regional. A multi-zone cluster provides high availability across different zones within a single region. A regional cluster additionally replicates the control plane across zones, so the cluster stays available even if an entire zone has an outage. For the sake of cost, we just went forward with multi-zone clustering.

Google Cloud Shell
You can access your Kubernetes cluster via Google Cloud Shell. If you click the Connect button in the row belonging to the cluster, it opens a terminal with the kubectl configuration already set up. Then you can run any kubectl command. In other words, you don’t need to set up the kubectl configuration on your own computer.

Kubernetes Cluster & Cloud Shell
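Once the Cloud Shell terminal opens with kubectl pre-configured, any kubectl command works immediately; a couple of typical first commands:

```shell
# kubectl in Cloud Shell is already pointed at the selected cluster
kubectl get nodes                  # list the cluster's 3 nodes
kubectl get pods --all-namespaces  # see everything running in the cluster
```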

Kubernetes Workloads
All containers are encapsulated in pods, and the pods are managed by Deployment resources in our Kubernetes architecture. We created a Deployment for every microservice. We disabled the pod autoscaler and set the pod count to 1 for now. All of this can be monitored in the Workloads section of GKE.

Workloads in GKE
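A Deployment for one of the services might be sketched as below; the service name, image, and port are hypothetical placeholders, while the single replica and absence of an autoscaler reflect our setup.

```shell
# Hypothetical Deployment manifest for one microservice; replicas is
# pinned to 1 and no HorizontalPodAutoscaler is attached.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1            # autoscaling disabled, a single pod per service
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: gcr.io/my-project/my-service:latest
          ports:
            - containerPort: 8080
EOF
```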

Kubernetes Service
The Kubernetes Service resource is used to expose microservices to the outside world. We need to expose three of our services:

  • PostgreSQL
  • MSSQL
  • UI

If you select the LoadBalancer type for a Kubernetes Service resource in Google Kubernetes Engine, it allocates a new IP from the Google Load Balancer. We bind the services to the Google Load Balancer to get external IPs. By the way, we are aware that exposing a database is not a good practice, but this environment is only for a demo :)

Kubernetes Services
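The Service for the UI might be sketched as below; the name and ports are hypothetical, while type LoadBalancer is what makes GKE allocate the external IP.

```shell
# Hypothetical Service manifest; type LoadBalancer tells GKE to
# provision an external IP via Google Cloud load balancing.
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: LoadBalancer   # allocates an external IP
  selector:
    app: ui
  ports:
    - port: 80         # exposed port
      targetPort: 8080 # container port behind it
EOF
```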

Persistent Volumes
When you use Persistent Volume Claims in the Google Kubernetes Engine, the corresponding persistent volumes are automatically provisioned. In other words, you don’t have to create these persistent volumes manually.

You can inspect your persistent volume claims in the Storage section of Google Kubernetes Engine.

Persistent Volume Claims
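A claim for one of the databases might be sketched as below; the name and requested size are hypothetical. GKE dynamically provisions a persistent disk behind such a claim.

```shell
# Hypothetical PersistentVolumeClaim; GKE provisions the matching
# persistent volume automatically when the claim is created.
cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce    # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi    # illustrative size
EOF
```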

Logging

Centralized logging is one of the crucial practices in microservices. If the entire application is formed of 5+ services, it gets harder to trace the logs of each one individually.

If you are working with on-premise instances, the ElasticSearch-Fluentd-Kibana stack may be a good choice for collecting, indexing, and visualizing logs. But if you are working on the Google Cloud Platform and are unwilling to manage the logging stack on your own, Stackdriver Logging is a good option.

Stackdriver Logging is the logging service of the Google Cloud Platform, and you can enable it while creating the Kubernetes cluster; it takes a single checkbox on the cluster configuration page. When you enable it, any service deployed to Kubernetes Engine streams its stdout/stderr to Stackdriver Logging automatically. You can also filter the logs by severity, time, cluster name, and container name.

Stackdriver Logging
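The same filters are available from the CLI; the cluster name and severity threshold below are illustrative.

```shell
# Sketch: reading container logs with gcloud, filtered by cluster and
# severity, similar to the filters in the Stackdriver Logging console.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="shouse" AND severity>=ERROR' \
  --limit 20
```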

Summary

To sum up, I would like to recap the Google Cloud Platform services we use to manage the entire application lifecycle from development to deployment.

  • Google Cloud Source Repositories with Bitbucket Cloud integration helps us store and share our Git repositories.
  • Google Cloud Build, in essence, provides Continuous Integration. It compiles the codebase, runs unit, integration, and security tests, maintains code quality, builds the Docker image, and pushes it to the Google Container Registry.
  • Google Container Registry is responsible for storing our Docker artifacts. Besides that, it provides one-click Continuous Delivery. We are able to deploy our microservices to Kubernetes Engine from the Container Registry without any hassle.
  • Google Kubernetes Engine is our microservices platform for running the services in the cloud. It supplies service discovery, load balancing, auto-scaling, high availability, and logging integration out of the box.
  • Google Cloud Shell is used for connecting to our Kubernetes cluster and VMs.
  • Google Stackdriver Logging enables us to trace the logs of several services from a single location. It supports filtering the logs by severity level and other fields.
