GitOps and k8s bootstrapping

Spiros Economakis · Published in lenses.io · Dec 16, 2020 · 5 min read

At Lenses.io, not only have our engineering teams adopted GitOps in our software delivery, but we also provide rich features at the heart of our product so that developers building real-time applications on Kafka can adopt the best GitOps practices. You can read and watch more details in our talk at Kafka Summit here.

This post will explain how we adopted GitOps for bootstrapping Kubernetes clusters, the challenges we faced, and how you can do the same using ArgoCD. Before jumping into the topic, let's understand what GitOps is.

What is GitOps?

At its core, GitOps is code-based infrastructure and operational procedures that rely on Git as a source control system. It's an evolution of Infrastructure as Code (IaC) and a DevOps best practice that leverages Git as the single source of truth and control mechanism for creating, updating, and deleting system architecture. More simply, it is the practice of using Git pull requests to verify and automatically deploy system infrastructure modifications.

So, in practice, GitOps ensures that a cloud system's infrastructure or an application deployment is immediately reproducible from the state of a Git repository, expressed in a declarative manner (e.g. k8s manifests).

Push vs Pull

In the push approach, an external system (usually a CD pipeline) triggers deployments to the cluster after a commit to the Git repository or after a preceding CI pipeline has executed successfully. In this approach, the pipeline system has access to the cluster.

The pull approach is based on the fact that all changes are applied from inside the cluster. A controller running inside the cluster regularly checks the associated Git repositories and, if a change occurs, updates the cluster state from within. This is where the name GitOps comes from.

So, comparing the two at a glance: in the push approach an external system has access to the k8s cluster, while in the pull approach it does not.

GitOps Challenges

Since with GitOps we keep declarative configuration (k8s manifests, Helm charts, etc.) in a Git repository, the first problem we need to solve is how to store secrets safely. Remember, with an external CI system there were mechanisms for this, such as GitHub Actions secrets or the Jenkins credentials plugin.

The next challenge is how to orchestrate the dependencies among the applications we want to install (or persistent volumes, etc.). For example, if I want to install Grafana, I will probably need an ingress controller (e.g. Nginx) so I can have a URL to access it later on. In this case, the ingress controller should be installed first.

Based on the previous example, the same applies when I need a domain name registered in AWS Route53, for example with ExternalDNS.
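
To make the dependency concrete, here is a minimal sketch of what such a Grafana Ingress could look like. The hostname, service name and ingress class are purely illustrative: it only works once an ingress controller like Nginx is running, and ExternalDNS would then pick up the host and create the Route53 record.

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: grafana
    annotations:
      kubernetes.io/ingress.class: nginx   # served by the Nginx Ingress controller
  spec:
    rules:
      - host: grafana.example.com          # ExternalDNS registers this record in Route53
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: grafana            # hypothetical Grafana Service
                  port:
                    number: 80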

Store Secrets Safely

The most secure way is to keep them in a secrets management system such as Vault, AWS Secrets Manager, Azure Key Vault, or Google Secret Manager. But how can we integrate such a system with Kubernetes Secrets and a declarative manifest?

There is the Kubernetes External Secrets project by GoDaddy, which gives us the ability to use external secret management systems and then securely add secrets in Kubernetes. In practice, it provides a custom resource definition (CRD) called ExternalSecret. This declares how to fetch the secret data, while the controller converts all ExternalSecrets to Secrets. The conversion is completely transparent to Pods, which can access the Secrets normally.
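
As an illustration, an ExternalSecret fetching a value from AWS Secrets Manager could look roughly like this (the secret path and key names are hypothetical):

  apiVersion: kubernetes-client.io/v1
  kind: ExternalSecret
  metadata:
    name: grafana-admin
  spec:
    backendType: secretsManager          # use the AWS Secrets Manager backend
    data:
      - key: prod/grafana/admin          # name of the secret in AWS Secrets Manager
        name: admin-password             # key in the generated Kubernetes Secret
        property: password               # field inside the stored secret value

The controller then creates a regular Secret named grafana-admin that Pods can mount or reference as usual.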

The diagram below shows exactly the overall system architecture:

Keep in mind that Kubernetes Secrets are not encrypted at rest by default, so to be more secure you need to use a KMS plugin.
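
For reference, on a self-managed API server a KMS plugin is wired in through an EncryptionConfiguration file, roughly like the sketch below (the plugin name and socket path are placeholders); on managed offerings such as EKS, envelope encryption is instead enabled by attaching a KMS key to the cluster.

  apiVersion: apiserver.config.k8s.io/v1
  kind: EncryptionConfiguration
  resources:
    - resources:
        - secrets
      providers:
        - kms:
            name: aws-encryption-provider                 # KMS plugin name (placeholder)
            endpoint: unix:///var/run/kmsplugin/socket.sock
            cachesize: 1000
            timeout: 3s
        - identity: {}                                    # fallback for reading not-yet-encrypted data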

Orchestrate the different dependencies

The first dependency is ArgoCD itself: how do we install it in our k8s cluster right after the cluster has been provisioned?

For example, our team uses AWS extensively and provisions our infrastructure with Terraform. After provisioning the infrastructure, we use a reusable Terraform module we created to install or upgrade ArgoCD; once a pull request is approved, GitHub Actions applies the new state to our EKS cluster.

Bootstrap with ArgoCD app-of-apps technique

The ArgoCD docs define it as follows:

You can create an app that creates other apps, which in turn can create other apps. This allows you to declaratively manage a group of apps that can be deployed and configured in concert.
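
In practice, the "parent" is just another ArgoCD Application whose source directory contains one Application manifest per child app. A minimal sketch (the repository URL and paths are hypothetical) could look like this:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: bootstrap
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://github.com/example-org/cluster-bootstrap.git
      targetRevision: HEAD
      path: apps                        # directory with one Application manifest per child app
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated:
        prune: true
        selfHeal: true

Syncing this single Application makes ArgoCD create and sync all the child Applications found under apps/.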

So now let's see what we need to bootstrap in our cluster and how we can handle the inter-dependencies:

  • Nginx Ingress
  • External DNS
  • External Secrets
  • Prometheus Stack
  • Amazon EFS CSI Driver

From the list above we can see that the manifests should be synchronized in the following order, as the diagram shows:

Amazingly, ArgoCD offers two different approaches to ordering manifest synchronization: phases and waves.

The idea with waves is that the k8s manifests can be synchronized/applied in a specific order by adding an annotation to each ArgoCD Application.

Let's now see how we can leverage waves to install ExternalDNS and ExternalSecrets first, Nginx Ingress after them, and then everything else.
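
A sketch of the ExternalSecrets child Application with its sync-wave annotation could look like this (ExternalDNS would be analogous; the repository URL and path are hypothetical):

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: external-secrets
    namespace: argocd
    annotations:
      argocd.argoproj.io/sync-wave: "-2"   # synced in the earliest wave
  spec:
    project: default
    source:
      repoURL: https://github.com/example-org/cluster-bootstrap.git
      targetRevision: HEAD
      path: apps/external-secrets
    destination:
      server: https://kubernetes.default.svc
      namespace: external-secrets
    syncPolicy:
      automated:
        prune: true
        selfHeal: true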

So now it’s time to see how the Nginx Ingress will be installed after these two:
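
The Nginx Ingress Application is essentially the same manifest, only with a later sync-wave (again, the repository details are illustrative):

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: nginx-ingress
    namespace: argocd
    annotations:
      argocd.argoproj.io/sync-wave: "-1"   # synced after wave -2 is healthy
  spec:
    project: default
    source:
      repoURL: https://github.com/example-org/cluster-bootstrap.git
      targetRevision: HEAD
      path: apps/nginx-ingress
    destination:
      server: https://kubernetes.default.svc
      namespace: ingress-nginx
    syncPolicy:
      automated:
        prune: true
        selfHeal: true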

As you can notice, the only difference between these manifests is the annotation in which we define the ArgoCD sync-wave:

  • ExternalDNS & ExternalSecrets: argocd.argoproj.io/sync-wave: "-2"
  • Nginx Ingress: argocd.argoproj.io/sync-wave: "-1"

ExternalDNS & ExternalSecrets will be installed first, as they have the lowest wave value. ArgoCD will wait for them to be deployed and healthy before proceeding to the next wave, Nginx Ingress, which has the value -1.

Let's see the final result after applying the state from ArgoCD with the app-of-apps pattern:

Conclusion

  • k8s credentials (kubeconfig) are not exposed to external systems such as CI
  • external-secrets gives us the ability to use a secrets manager instead of storing our credentials in Git
  • ArgoCD is a powerful GitOps platform which can orchestrate the dependencies between different applications
  • the app-of-apps pattern lets a group of apps be deployed and configured in concert

Now every member of our team can use just the Git repository as the source of truth: make a change there and see it automatically applied in the desired cluster. This also gives us the ability to rebuild the whole cluster if something goes wrong.

Stay tuned: more GitOps content about feature branch deployments is coming.

Spiros Economakis
Director of Product Ops @mattermost, author of the “Argo CD in practice” book