Panoptikum

How do you make GitLab deployments to Kubernetes on GKE flexible and 100% GitOps?

Well, gomplate and some ideas are all you need!

Helm and Kustomize are great tools for creating Kubernetes application packages and application deployments. But there are situations where existing tools cannot help you. Every environment is different, and so is every company or enterprise. Default solutions or standard tools therefore hardly ever fit the specific needs you might have, and the same is true for us. In this story, you will get information and ideas about a flexible, 100% GitOps-compliant approach that you can adopt too.


What you need to achieve this goal is a tool called gomplate and some small scripting skills. gomplate was created by Dave Henderson, and it is useful in many different ways, because almost every tool needs some kind of configuration. These configurations are often static, which means there is no option to dynamically change the final configuration based on environment variables or data-driven information in general. That is obviously true for Kubernetes yaml-files as well. I know there is the kubectl edit command, but this is not what you want to use, because it changes the running configuration without leaving any trace. If you are targeting a GitOps workflow, where every change must be committed to Git, this is and should be a no-go!
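To illustrate what gomplate brings to the table, here is a minimal, hypothetical template (not from our setup) that turns a static Kubernetes yaml-file into a dynamic one, driven by environment variables:

```yaml
# config-tpl.yaml -- illustrative sketch only
# render with: gomplate -f config-tpl.yaml -o config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: {{ .Env.TARGET_NAMESPACE }}
data:
  # getenv with a second argument falls back to a default value
  log_level: {{ getenv "LOG_LEVEL" "info" }}
```

The rendered file is a plain yaml-file that can be committed and applied, so the GitOps trail stays intact.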

Therefore we need some small glue between the GitLab pipeline and the Kubernetes infrastructure. In this case, we are using Google GKE, but this will work with every Kubernetes infrastructure, because in the end, we are not using more than kubectl apply -f commands. But what were the reasons for doing it, and why do we want it this way?

Overengineering

Overengineering is a common problem today. Often a tool-chain is designed to fulfill a workflow. Something my team and I learned over the last 20 years is that if someone is talking about a chain of tools, or a tool-chain, something might already be over-engineered. You should step back and ask yourself if there is a more KISS-like approach to the situation. That's what we did. Why? Our first idea for solving GitOps-driven deployment with Kubernetes was to use Helm. But after some time we figured out that Helm alone would not be enough for us and we would need Kustomize too. And after some more time, we recognized that we might need additional tools! A tool-chain was born.

But seriously, for what benefit? All these tools do in the end is create yaml-files that are finally applied with kubectl apply -f! That's all, no magic here!

Our self-defined goals and guardrails for this approach were the following:

  • No over-engineering, avoid too many abstraction layers
  • Easy to understand for everyone who is using it
  • Easy to maintain centrally
  • Flexible enough to fulfill different deployment types now and in the future
  • 100% GitOps — no manual side-cheating
  • 100% reproducible and replicable

Therefore, we created a workflow where gomplate, a small Golang-powered web service, and some lines of Bash scripting are enough.

Step 1: The pipeline part

As written above, in the end you need a set of yaml-files which are applied with kubectl apply -f, and therefore the first step is to generate them.

We are using a central template for our GitLab pipelines. This makes it easy to change the functionality in a central place without touching every dependent GitLab project that uses the .gitlab-ci.yml. The following picture shows everything that is needed to run GitLab deploy pipelines against a Kubernetes cluster from a GitLab project.

.gitlab-ci.yml file for Kubernetes deployment from a GitLab project

The KUBERNETES_DEPLOY variable inside the GitLab .gitlab-ci.yml controls what stages and jobs are running from the central GitLab CI template. In this case, the following “code” is running:

Central GitLab CI template excerpt covering KUBERNETES_DEPLOY variable

As you can see, the KUBERNETES_DEPLOY variable from the .gitlab-ci.yml is reflected in the rules section. When the stage: deploy runs, the kploy repository is pulled, which will do the rest. The kploy.sh script, which comes from the pulled repository, is started with the tag of the built Docker image as a parameter, and that's it. As you can see, both the central template repository and the kploy repository are public! There is no sensitive information inside the templates, so they can be public, and they should be public because it makes using them easy. There is no need to authenticate against the GitLab repository, and everyone can “see” what is going on there!
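As a rough sketch of this wiring (the project path, repository URL, and job name below are assumptions, not our actual template), the project-side file and the central rule could look like this:

```yaml
# project-side .gitlab-ci.yml (sketch)
include:
  - project: 'example-group/central-ci-templates'   # hypothetical path
    file: '/kubernetes.yml'

variables:
  KUBERNETES_DEPLOY: "true"

# excerpt of the central template (sketch)
# deploy:
#   stage: deploy
#   rules:
#     - if: '$KUBERNETES_DEPLOY == "true"'
#   script:
#     - git clone https://gitlab.com/example/kploy.git   # hypothetical URL
#     - ./kploy/kploy.sh "$CI_COMMIT_SHORT_SHA"
```

The project only flips one variable; everything else lives in the central template.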

In addition, the kploy.sh script will take information from a folder that is called kploy-customize which includes some important files and is located inside the deploy repository, as seen in the next screenshot.

kploy-customize folder inside a deploy repository

Inside this folder, there are two important pieces of information: first, a mandatory file called data.yaml which includes information about the deployment, and second, a folder called yaml.

kploy.sh related data.yaml and yaml folder

The data.yaml contains information that is later used to fill the kploy templates with appropriate values, and it also defines which template should be used for the deployment.

Content of data.yaml

We’ve defined some mandatory variables. The most important ones here are kploy_deploy_template, k8s_deployer_endpoint, and k8s_deployer_apikey. The labels, the namespace, and the other variables depend on the kploy_deploy_template that is used for this deployment. In this case, we are using our own ingress controller based on the Kubernetes operator pattern. This system is very flexible because there can be many different deployment types with different settings, all configured centrally.
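A data.yaml along these lines could look as follows; apart from the three mandatory keys named above, all key names and values here are placeholders, not our real schema:

```yaml
# kploy-customize/data.yaml (sketch; values are placeholders)
kploy_deploy_template: hci-bosnd-3_medium-v1
k8s_deployer_endpoint: https://deployer.example.com/deploy   # hypothetical endpoint
k8s_deployer_apikey: REPLACE_ME                              # key scoped to one namespace
namespace: my-app
labels:
  app: my-app
  team: platform
```
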

The yaml folder can contain any additional yaml-file which should be applied during the kubectl apply -f run. This allows the user of the system to inject yaml-files which might be needed by the software itself, like Kubernetes configmaps, secrets, or anything else which is already supported by the Kubernetes API or will be supported in the future. There is no dependency on which Kubernetes version you are using or which additional components are installed in your Kubernetes cluster: maximum flexibility is guaranteed!

Step 2: The kploy repository

In the kploy repository, which is pulled in the script part of the .gitlab-ci.yml, the centrally managed templates are located. The templates reside inside the kploy/deploy_type folder.

Content of the kploy repository

Inside the kploy/deploy_type folder, the selectable templates are located. As seen in the example above, we have used the hci-bosnd-3_medium-v1 template. This template contains environments like test, dev, staging, prod, or whatever you like. The prod.yaml defines the number of replicas, the image, and the resources that are allowed for this type of template. The components for the prod environment contain the Golang-template-based Kubernetes yaml-files. These files are then processed by gomplate with the data given by the data.yaml and the data stored in the environment file, in this case the prod.yaml.
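As an illustration, such an environment file could carry the per-environment knobs just mentioned; the structure and key names below are assumptions:

```yaml
# kploy/deploy_type/hci-bosnd-3_medium-v1/prod.yaml (sketch)
replicas: 3
image: registry.example.com/my-app   # hypothetical registry
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
```
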

A template folder

As an example of a Kubernetes deployment.yaml, we can have a look at the file 40-deployment.yaml:
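A gomplate-templated deployment file of this kind could look roughly like the sketch below; the datasource names and keys are assumptions for illustration, not the actual kploy template:

```yaml
# 40-deployment.yaml (sketch)
# render with: gomplate -d data=data.yaml -d env=prod.yaml -f 40-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ (datasource "data").namespace }}
  namespace: {{ (datasource "data").namespace }}
spec:
  replicas: {{ (datasource "env").replicas }}
  selector:
    matchLabels:
      app: {{ (datasource "data").namespace }}
  template:
    metadata:
      labels:
        app: {{ (datasource "data").namespace }}
    spec:
      containers:
        - name: app
          image: "{{ (datasource "env").image }}:{{ .Env.IMAGE_TAG }}"
```

Everything variable comes either from data.yaml, from the environment file, or from the pipeline's environment variables.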

Ok, now that we have everything we need, we will take a closer look at the kploy.sh script and the run-tpl.sh.

The kploy.sh script picks up the data.yaml and the run-tpl.sh and renders out the final run.sh. This is needed because the run.sh is environment-specific and therefore needs to know which data environment is used!

The run-tpl.sh does the rendering of the template files in the specified template, in our case inside the hci-bosnd-3_medium-v1 folder (see above). First, it renders out the environment.yaml, and based on this file, everything else is processed by gomplate, including custom yaml-files if any exist.
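A minimal sketch of this flow, with hypothetical file names and layout (the real kploy.sh and run-tpl.sh are not reproduced here); the gomplate step is guarded so the sketch also runs where gomplate is not installed:

```shell
#!/usr/bin/env bash
# Sketch of the kploy.sh / run-tpl.sh rendering flow (names are assumptions).
set -euo pipefail

WORK="$(mktemp -d)"
cd "$WORK"

# Stand-in for the deploy repository's kploy-customize/data.yaml
mkdir -p kploy-customize
cat > kploy-customize/data.yaml <<'EOF'
kploy_deploy_template: hci-bosnd-3_medium-v1
namespace: my-app
EOF

# Step 1: read which template to use (a real script might use yq;
# plain sed keeps the sketch dependency-free)
DEPLOY_TEMPLATE="$(sed -n 's/^kploy_deploy_template: *//p' kploy-customize/data.yaml)"
echo "rendering with template: $DEPLOY_TEMPLATE"

# Step 2: render every template file with gomplate, when available
TPL="kploy/deploy_type/$DEPLOY_TEMPLATE/prod/40-deployment.yaml"
if command -v gomplate >/dev/null 2>&1 && [ -f "$TPL" ]; then
  mkdir -p rendered
  gomplate -d data=kploy-customize/data.yaml -f "$TPL" -o rendered/40-deployment.yaml
fi
```
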

In the end, the result is stored inside a tar-ball which is then delivered to the deployer service of the specific cluster. Everything, both the source and the result, is stored as a GitLab artifact for 100 years. At any time, someone could take the yaml-files from the GitLab artifact and make a deployment with kubectl apply -f! It is fully complete, nothing is missing, and the GitOps idea is fulfilled!
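The packaging step can be sketched in a few lines of Bash; the file names and the commented-out upload call are assumptions, not the real kploy code:

```shell
#!/usr/bin/env bash
# Sketch: bundle the rendered yaml-files and ship them to the deployer.
set -euo pipefail

OUT="$(mktemp -d)"
mkdir -p "$OUT/rendered"
printf 'kind: Namespace\n'  > "$OUT/rendered/10-namespace.yaml"
printf 'kind: Deployment\n' > "$OUT/rendered/40-deployment.yaml"

# Everything that would be applied ends up in one tar-ball ...
tar -czf "$OUT/deploy.tar.gz" -C "$OUT" rendered

# ... which is both kept as a GitLab artifact and sent to the deployer, e.g.:
# curl -H "X-Api-Key: $K8S_DEPLOYER_APIKEY" \
#      --data-binary @"$OUT/deploy.tar.gz" "$K8S_DEPLOYER_ENDPOINT"
tar -tzf "$OUT/deploy.tar.gz"
```
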

Step 3: The deployer

The deployer is a simple service which runs inside the Kubernetes cluster. It retrieves the upload (tar-ball) from the GitLab pipeline run, extracts the content, does some security checks, and applies (if dry-run is false) the yaml-files with kubectl apply -f. We’ve established a system that uses a combination of Kubernetes namespaces and API keys to verify whether the deployment is permitted into the given namespace or not. But you can do whatever you like here. Our service is implemented as a simple Golang-based web service, but you could do the same with a few lines of Python code or whatever you like.
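The namespace/API-key check itself is trivial; here is a stand-in sketch in Bash (the real service is Golang, and the key mapping below is made up):

```shell
#!/usr/bin/env bash
# Sketch of the deployer's permission check (hypothetical keys).
set -euo pipefail

# Map namespace:apikey pairs to a verdict
check_key() {
  case "$1:$2" in
    my-app:secret-123|other-app:secret-456) echo permitted ;;
    *) echo denied ;;
  esac
}

RESULT="$(check_key "my-app" "secret-123")"
echo "deployment into my-app: $RESULT"
# Only a permitted upload would then reach:
# kubectl apply -f extracted/   # (or --dry-run=server when dry-run is requested)
```
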

The result is returned to the calling pipeline via the HTTP response and is therefore visible in the output of the GitLab pipeline, as you can see in the following screenshot.

What is happening inside the GitLab pipeline

The order in which the yaml-files are applied with the kubectl apply -f command is determined by the file-name numbering. Lower-numbered files are applied first: simple and comprehensible.
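This works because shell glob expansion sorts file names lexically, so a numeric prefix fixes the apply order; a small demonstration (the file names are examples):

```shell
#!/usr/bin/env bash
# Demonstrate that number-prefixed yaml-files are visited lowest-first.
set -euo pipefail

DIR="$(mktemp -d)"
touch "$DIR"/40-deployment.yaml "$DIR"/10-namespace.yaml "$DIR"/20-configmap.yaml

APPLIED=()
for f in "$DIR"/*.yaml; do   # glob results are sorted lexically
  # kubectl apply -f "$f"    # the real run applies each file in this order
  APPLIED+=("$(basename "$f")")
done
echo "${APPLIED[@]}"
```
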

Bird’s-eye view of the deployment process

Deployment process

In this picture, you see the whole deployment process steps including the information from above. It should give you an idea of how you can do something similar.

Takeaways

With the idea shown above, it is possible to do highly flexible deployments with a lot of cool features: multi-cluster, multi-repository, multi-branch, multi-template, and multi-environment support.

  • Full GitOps driven, everything is stored as artifacts
  • Just 80 lines of script code
  • Highly transparent
  • Everything your Kubernetes cluster supports can be used; no limitation from a tool-chain, and future-proof
  • Only depends on Bash and gomplate
  • Changeable and expandable at any time

Hopefully, you can benefit from this story and if you like, leave a comment!

Last edited on 22nd September 2020

Written by

Speaker, GitLab Hero, Docker Community Leader, Cloud Native Citizen, OSS Evangelist, Author — The past is behind us, the future is ahead! n0r1sk.com/mario
