GumGum Tech Blog

Streamlining your Kubernetes adoption with Helmfile / ArgoCD and GitOps

Photo Credits — Ferran Feixas https://unsplash.com/photos/jwkOaqUZtuM

In a previous article titled Stop being selfish! — Open up Terraform to your team with Atlantis, we showcased how powerful and convenient it is to have a well-defined Git workflow for managing infrastructure as code using Terraform with Atlantis.

Using Git as a source of truth has served us well over the past year and has inspired the GumGum Verity team to build a similar workflow, with Git at the center of Kubernetes application deployments.

In this blog post, you will learn how we leveraged popular cloud-native technologies to build a reliable GitOps workflow that streamlines and eases the adoption of Kubernetes within the team.

GitOps Principles

Pioneered by Weaveworks in 2017, GitOps was first introduced in this article. Here is a short quote extracted from it:

GitOps is a way to do Kubernetes cluster management and application delivery. GitOps works by using Git as a single source of truth for declarative infrastructure and applications. With GitOps, the use of software agents can alert on any divergence between Git with what’s running in a cluster, and if there’s a difference, Kubernetes reconcilers automatically update or rollback the cluster depending on the case. With Git at the center of your delivery pipelines, developers use familiar tools to make pull requests to accelerate and simplify both application deployments and operations tasks to Kubernetes.

This technique offers increased visibility into what is running on a given cluster, as it syncs the canonical desired system state (Kubernetes manifests) from a Git repository.

Before managing a cluster using GitOps workflows, the following must be in place:

[1] Describe the entire system declaratively

[2] Version the canonical desired system state in Git

[3] Automatically apply approved changes to the desired state

[4] Ensure correctness and alert on divergence with software agents

Let’s see in practice how you can achieve this with commonly used Kubernetes tools.

Kubernetes plumbing

Before we start, it is important to review the different actors involved with a Kubernetes cluster during its lifetime, and especially how each of them contributes to a solution that aligns with the GitOps principles.

✅ Cluster infrastructure administration

Usually owned by a DevOps team, the cluster administrator's responsibility is to handle deployments and upgrades of Kubernetes clusters. Backed by a set of automations, the administrator ensures clusters can be provisioned, autoscaled, accessed by users, upgraded, and destroyed in a timely manner.

Example of Kubernetes cluster setup with EKS for the control plane and Spot.io Ocean for worker nodes

This first layer of automation, being purely infrastructure-related, is captured by our Terraform/Atlantis workflow which, as you may have noticed, follows principles similar to the GitOps ones:

  1. Infrastructure as code using Terraform/Terragrunt describes the system declaratively
  2. The canonical desired state (the Terraform state) is kept in S3 thanks to Terraform and allows tracking resources over time
  3. Atlantis takes actions on your behalf based on Git pull request activity

✅ Cluster shared services administration

Now that the Kubernetes cluster has been deployed, it's time to set up a couple of key components that will help your application developers enjoy a seamless, truly GitOps-driven experience!

These components will be installed using Helm charts, a common packaging format for Kubernetes. A chart is simply a collection of files that describe a related set of Kubernetes resources. Here is a list of Helm charts we automatically deploy to our cluster to prepare the stage for our GitOps workflow:

  • 📁 Logging with kube-fluentd-operator: Log forwarder based on Fluentd that can plug into your central logging platform.
  • 🚒 Monitoring with kube-prometheus-stack: The Prometheus ecosystem should be available so that developers can register their applications and track metrics in Grafana.
  • 🔗 DNS with external-dns: Developers should have the ability to create DNS records from Kubernetes itself to set up services seamlessly.
  • 🔐 Secrets with sealed-secrets: Given that your application state will be stored in Git, you must ensure your secrets can be safely sealed there.
  • 🤖 GitOps with argo-cd & argo-cd-applicationset: The GitOps agent must be deployed and configured to watch for its state from a Git source of truth.
Kubernetes Cluster enabling GitOps workflows with key shared services deployed
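As a sketch of how these shared services can be declared with Helmfile, here is an illustrative release file (repository URLs are the public chart repos; chart versions and the file path are assumptions, not the exact ones we use):

```yaml
# ops/helmfile.yaml — illustrative Helmfile spec for two of the shared services
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts
  - name: argo
    url: https://argoproj.github.io/argo-helm

releases:
  - name: kube-prometheus-stack
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
    version: 45.0.0          # pin chart versions for reproducible deployments
  - name: argo-cd
    namespace: argocd
    chart: argo/argo-cd
    version: 5.0.0           # illustrative version
```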

📚 It’s time to layout the code repository

Now that the required Helm charts have been identified, it’s time to stitch everything together in a code repository that will help achieve the first GitOps principle:

[1] Describe the entire system declaratively.

To ease the orchestration of all those charts, we use a thin wrapper on top of Helm called Helmfile. It is a declarative spec for deploying Helm charts.

Here is an example repository layout that can hold the infrastructure presented earlier and our future applications (located in gitops/) that will be deployed in a GitOps fashion. The root helmfile.yaml file includes all the releases listed in the ops/ folder, so you can deploy all the components with a single command such as `helmfile apply`:

Helmfile code repository structure
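For instance, a minimal root helmfile.yaml can pull in every sub-helmfile with the `helmfiles:` directive (the paths below are illustrative):

```yaml
# helmfile.yaml (repository root) — aggregates all releases from ops/ and gitops/
helmfiles:
  - path: ops/*/helmfile.yaml      # shared cluster services
  - path: gitops/*/helmfile.yaml   # team applications deployed via GitOps
```

With this layout, a single Helmfile invocation from the repository root reconciles every declared release.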

🐔 🥚 Solving the chicken and egg situation

You may notice that we need to install all those ops/ Helm charts outside of the GitOps workflow in order to get the GitOps workflow set up 😅.

There are a couple of ways to solve this:

  • You can move all those shared Helm charts into your Terraform module so that they get automatically deployed upon cluster creation (and no longer sit in the Helmfile repo presented above).
  • Install them manually once and make sure they are added to the GitOps workflow themselves.

[2] Version the canonical desired system state in Git

Now that the Helmfile code repository is set up, we need to create a second Git repository that will host the canonical state of our applications, synced by the GitOps agent.

👍 As a rule of thumb, I would suggest you dedicate the canonical state Git repository to a single cluster (think of it as the brain of your cluster: what sits in that repo also sits in your Kubernetes cluster).

To align with this second principle, Helmfile will simply hydrate all the templates of the Helm charts located in gitops/ to generate the Kubernetes manifest files.

The diagram below illustrates Helmfile template rendering for an application named demo with two different sets of values for the production and staging environments. It results in different releases rendered as different canonical state generations.

Helmfile template makes it easy to generate plain Kubernetes manifests for GitOps
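As a sketch, the demo application's Helmfile could declare both environments and be rendered per environment with `helmfile --environment staging template`. The file names and chart reference below are hypothetical:

```yaml
# gitops/demo/helmfile.yaml — one release, two sets of environment values (illustrative)
environments:
  staging:
    values:
      - envs/staging.yaml      # e.g. replicas: 1, smaller resource requests
  production:
    values:
      - envs/production.yaml   # e.g. replicas: 3, production-grade resources
---
releases:
  - name: demo
    namespace: demo
    chart: ./chart             # hypothetical local chart for the demo app
    values:
      - values.yaml.gotmpl     # templates can read {{ .Environment.Name }}
```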

Once the canonical states have been rendered on the file system, it’s only a matter of pushing them to Git so that your GitOps agent can access them.

[3] Automatically apply approved changes to the desired state

What’s great about relying on Git and source control is that you land on a well-known interface that you already use on a daily basis! Whether you want to control who can push to the repository, put restrictions on certain branches, or even require reviewer approval before merging, you have the freedom to build the process that fits your day-to-day workflows.

This also provides room for automation using CI/CD pipelines that take actions on your behalf. In the example below, we followed our current software practices, where staging deployments happen automatically when a PR is merged onto the main branch, and production is deployed when a Git tag is added.

Helmfile templating step that can be integrated in a CI/CD pipeline
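The rendering step can be scripted in CI. Here is a hedged sketch in GitHub Actions syntax — our actual CI system, repository URLs, and directory layout are not shown in this post, so everything below is illustrative:

```yaml
# .github/workflows/render.yaml — hypothetical pipeline publishing the canonical state
name: render-canonical-state
on:
  push:
    branches: [main]   # merge to main -> staging deployment
    tags: ['v*']       # tag creation -> production deployment
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Render and publish canonical state
        run: |
          # Pick the target environment from the triggering ref
          case "$GITHUB_REF" in
            refs/tags/*) ENV=production ;;
            *)           ENV=staging ;;
          esac
          helmfile --environment "$ENV" template --output-dir rendered
          # The canonical state repo URL is a placeholder
          git clone https://github.com/example/cluster-state.git state
          rm -rf "state/$ENV" && cp -r rendered "state/$ENV"
          cd state
          git add . && git commit -m "Render $ENV from $GITHUB_SHA" && git push
```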

[4] Ensure correctness and alert on divergence with software agents

This is where Argo-CD enters the game! Once your Git canonical state repo is updated, Argo automatically reconciles your resources according to the new state:
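Under the hood, this reconciliation is configured through an Argo CD Application resource pointing at the canonical state repository. Here is a minimal sketch, assuming a hypothetical state repo URL and path layout:

```yaml
# Hypothetical Argo CD Application watching the canonical state repo
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-state.git   # placeholder URL
    targetRevision: main
    path: staging/demo          # rendered manifests for the staging release
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true               # delete resources removed from Git
      selfHeal: true            # revert manual drift back to the Git state
```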

Thanks to its beautiful UI, it really helps engineers get a better sense of what their application is made of and whether it is in sync with the state stored in Git. It also provides great features like showing a diff of the resources that changed, push-button or automated syncs, and notifications.

ArgoCD UI showing an application that has gone out of sync with its canonical state

Stitching everything together

Now that the GitOps workflow is in place, it’s time to make things easy for developers so that they can work with their day to day business applications while seamlessly integrating with the GitOps pipeline previously configured.

In a Docker world, a typical application code repository integrated with a CI/CD pipeline would run the following steps:

Typical CI/CD pipeline for an application repository (Build / Tests / Publish / Notify)

Given that Kubernetes application manifests are tracked in a dedicated Git repository (including the Docker image version), we need to bridge the gap between those two pipelines if we don't want developers to manually change the version each time their pipeline produces a new Docker image.

This problem is actually pretty easy to solve. The application code repository's CI/CD pipeline is extended to automatically bump the image version tag to match the Docker image tag that was just published, and to commit this change to the Helmfile configuration repo. By connecting the two pipelines through Git, you end up connecting the dots! The picture below summarizes a cascading pipeline execution from application development to Kubernetes deployment:

Cascading execution of pipelines from application development to deployment
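The bump itself can be a short step appended to the application pipeline. Here is an illustrative version in GitHub Actions syntax — the repository URL, values file path, and the use of yq are all assumptions:

```yaml
# Final step of the application CI pipeline (illustrative)
- name: Bump image tag in the Helmfile config repo
  env:
    IMAGE_TAG: ${{ github.sha }}   # or the tag just pushed to the registry
  run: |
    # The config repo URL and file path below are placeholders
    git clone https://github.com/example/helmfile-config.git
    cd helmfile-config
    # yq v4 is assumed to be available on the runner
    yq -i '.image.tag = strenv(IMAGE_TAG)' gitops/demo/envs/staging.yaml
    git commit -am "chore(demo): bump image tag to ${IMAGE_TAG}"
    git push
```

Committing this change triggers the state-repo pipeline in turn, which is what makes the execution cascade from application build to Kubernetes deployment.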

Wrapping up

Adopting GitOps principles through a well-defined set of pipelines and workflows based on Git repositories has greatly helped our team get a better sense of how Kubernetes applications are built and deployed. This technique reduced friction and increased developers' ability to quickly gain knowledge of, and confidence in, Kubernetes.

To further reduce the complexity that comes with understanding Helm charts, we made the call to rely as much as possible on the well-known Monochart from Cloudposse. It offers a simple interface to deploy pretty much any kind of application in Kubernetes and has templates for CRDs. If you don't know it yet, go play with it; it can do a lot!

I hope you now have a better sense of what GitOps means in practice and how easily it can integrate into your day-to-day workflows!

git commit -am 'Thank you for reading and happy GitOps!'; git push

We’re always looking for new talent! View jobs at https://gumgum.com/engineering.

Follow us: Facebook | Twitter | Linkedin | Instagram


Florian Dambrine

Principal Engineer & DevOps enthusiast, building modern distributed Machine-Learning eco-systems @scale
