ArgoCD: The GitOps Way

Jon McLean
6 min read · Apr 3, 2023


For those who aren’t familiar with ArgoCD, it’s essentially a reconciliation engine for Kubernetes application state management. What does that mean?! Well, it generates Kubernetes manifests from application metadata stored within Git. If configured for automation, it will automatically apply those manifests and revert any changes made to them outside of Git (such as via the kubectl CLI or the ArgoCD UI).

We are getting a bit ahead of ourselves, so let’s take a step back and define the problem with managing application state!

The Problem

Application release management is a problem that most companies solve by building their own tooling, effectively creating a new Domain Specific Language (DSL) of sorts. If engineers are going to contribute to a release process, they must first learn the tool that manages the release, then learn the scripts used to integrate with that tool. Already, engineers are going to be turned off: learning a new thing and someone else’s consumption model for that thing…forget it.

The Solution

GitOps has emerged as an approach to managing releases without the requirement to learn a DSL. A declarative approach to managing an application’s state lets engineers make changes explicitly, using a common tool and language: Git! Not only do we break down the barrier of learning a new set of tool-specific APIs, but we can also leverage the pull request process to guarantee multiple eyes have seen and approved changes. In essence, we have created a common place for meaningful conversations between engineers in differing verticals (software and systems). No more “throw it over the wall” approaches to releasing software!

The last missing piece of the solution is the “reconciliation engine”: ArgoCD! ArgoCD is not the only tool for reconciling an application’s state with Kubernetes; another common player is FluxCD. This article will focus on ArgoCD, but it’s important for engineers to shop around before settling on a single tool.

Opinion Time!

After spending years on ArgoCD, I’ve learned that lots of folks don’t use it in a purely GitOps way. Lots of companies decide to use the ArgoCD CLI to control when ArgoCD should perform a specific action, such as applying new Kubernetes manifests or rolling back to a specific point in time. If we “wait” to apply these manifests, how do we know what the state actually is without going to the Kubernetes cluster? Short answer: we don’t, which means we force users to learn either Kubernetes or ArgoCD to determine application state. This diverges from the beauty mentioned before, which is leveraging Git as a common place for meaningful conversations between the software and systems verticals! I think engineers are more accustomed to managing releases this way, and it’s hard to break patterns that we’ve used for years/decades.

The Time for Change is NOW…

Following on from a previous article on how to create declarative repositories, let’s layer in ArgoCD. Consider the below repository structure for our domain (notifications), which has two applications using Kustomize to generate Kubernetes manifests (a sample kustomization follows the listing):

app-of-apps/dev/application.yml
app-of-apps/dev/kustomization.yml
stream-pub/dev/application.yml
stream-pub/dev/config-map.yml
stream-pub/dev/deployment.yml
stream-pub/dev/kustomization.yml
stream-pub/dev/service.yml
stream-sub/dev/application.yml
stream-sub/dev/config-map.yml
stream-sub/dev/deployment.yml
stream-sub/dev/kustomization.yml
stream-sub/dev/service.yml
CODEOWNERS
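
Each service’s kustomization.yml simply enumerates the raw manifests in its directory. A minimal sketch of what stream-pub/dev/kustomization.yml might look like, inferred from the file names above (the contents are an assumption, not taken from the original repo):

# sketch: contents of /stream-pub/dev/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - config-map.yml
  - deployment.yml
  - service.yml

Note that application.yml is deliberately not listed here; it gets picked up by the app-of-apps kustomization described later, so the Application never tries to apply itself.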

My application.yml for the stream-pub service could reflect the below:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: stream-pub
  namespace: argo-cd
spec:
  project: notifications
  source:
    repoURL: https://github.com/my-company/notifications.git
    targetRevision: main
    path: stream-pub/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: notifications
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

There is nothing special about this application manifest, but the big callout is the syncPolicy, with automated.prune and automated.selfHeal both set to true. The prune policy means that any time we remove a Kubernetes manifest from Kustomize, ArgoCD will clean up the corresponding resource automatically. The selfHeal policy means that ArgoCD will automatically enforce the application state as derived from Git. Basically, if an engineer tries to tamper with a Kubernetes manifest, ArgoCD will almost immediately revert those changes back to the state within Git. With the above application manifest, ArgoCD will constantly poll Git to fetch the desired Kubernetes state and apply changes if detected. Webhooks can be configured to make this even snappier!
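
As a rough sketch of the webhook piece: point a GitHub webhook at the ArgoCD server’s /api/webhook endpoint and, optionally, store a shared secret in the argocd-secret Secret so ArgoCD can verify the payload. The hostname and secret value below are assumptions; the namespace follows the examples in this article.

# sketch: adding a GitHub webhook shared secret to argocd-secret
# (the GitHub webhook itself should POST to https://argocd.my-company.com/api/webhook)
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argo-cd
stringData:
  webhook.github.secret: my-shared-webhook-secret   # placeholder value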

“How does the application manifest get applied?” you may be asking; we leverage a pattern called Application of Applications (app of apps). The root application manages ALL application manifests, so any changes to child applications will be automatically applied. The root application basically never changes (unless there is an ongoing incident that requires some sort of stop-the-bleeding effort). The source for this manifest in the above example is within the app-of-apps directory. Its kustomization.yml is essentially a pointer to all application manifests that should be managed.

# contents of /app-of-apps/dev/kustomization.yml
resources:
  - ../../stream-pub/dev/application.yml
  - ../../stream-sub/dev/application.yml

To accomplish the above with Kustomize, you will need to leverage the --load-restrictor LoadRestrictionsNone flag to traverse files that do not share a common root.
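
Since ArgoCD is the one running kustomize build, that flag has to be passed through ArgoCD’s configuration. A minimal sketch, assuming the default argocd-cm ConfigMap (the namespace follows the examples above):

# sketch: passing kustomize build flags through the argocd-cm ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argo-cd
data:
  kustomize.buildOptions: --load-restrictor LoadRestrictionsNone

For completeness, the root application itself is just another Application pointing at the app-of-apps directory. A sketch mirroring the stream-pub example (the name is an assumption; the destination namespace is argo-cd because the child Application resources must live in ArgoCD’s namespace):

# sketch: the root "app of apps" Application
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: notifications-app-of-apps
  namespace: argo-cd
spec:
  project: notifications
  source:
    repoURL: https://github.com/my-company/notifications.git
    targetRevision: main
    path: app-of-apps/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: argo-cd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true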

Helm-isms

The app of apps pattern is absolutely critical for managing Helm applications. In order to effectively manage Helm applications with the patterns above, we need a way to supply values! These values can be set in a few ways (sorted from least to greatest priority; see the sketch after this list):

  1. the default values.yaml, which is deployed next to the chart
  2. some custom <env>.yaml, which is deployed next to the chart
  3. a values file that is outside the chart’s locality (ie an external git repo)
  4. an inline values reference, configured within the application manifest (ie helm.values)
  5. a specific value declaration, configured within the application manifest (ie helm.parameters[])
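
As a small illustration of that precedence (the keys and values here are assumptions, not taken from a real chart): a dev.yaml shipped next to the chart might pin a default image, which an inline helm.parameters entry in the Application manifest then overrides at release time.

# sketch: a hypothetical dev.yaml shipped next to the chart
deployment:
  image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/stream-pub:latest   # overridden by helm.parameters below
replicaCount: 2
logLevel: debug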

With the above knowledge, consider rolling out a new change. We could have a Helm value for deployment.image. Our Application manifest could look like the below:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: stream-pub-helm
  namespace: argo-cd
spec:
  project: notifications
  source:
    repoURL: https://chartmuseum.my-company.com
    targetRevision: 1.0.0-ga+gitSha
    chart: stream-pub
    helm:
      parameters:
        - name: deployment.image
          value: 01234567890.dkr.ecr.us-east-1.amazonaws.com/stream-pub:0c5c8a37
      valueFiles:
        - dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: notifications
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

In order for a change to the deployment.image value to be applied, the application manifest needs to be updated; this is why the app of apps pattern is so critical for Helm applications!
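
Concretely, assuming the Helm-based manifest above lives at stream-pub-helm/dev/application.yml (a path of my choosing, not from the original repo): the root kustomization just gains one more pointer, and ArgoCD takes care of the rest whenever that application.yml is bumped with a new image tag or chart version.

# sketch: /app-of-apps/dev/kustomization.yml, now including the Helm application
resources:
  - ../../stream-pub/dev/application.yml
  - ../../stream-sub/dev/application.yml
  - ../../stream-pub-helm/dev/application.yml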

Managing Sensitive Configurations (ie Secrets)

We all know that secrets should NEVER be committed to git in clear text. This causes a bit of a kerfuffle with state management via git, right???

…wrong

We can employ a third-party service, such as External Secrets, to fetch sensitive configs from a centralized location and create local Kubernetes Secrets. We can then store the kind: ExternalSecret Kubernetes Custom Resource (CR) within an application’s folder. Following the example from above, we can do the below (an example ExternalSecret manifest follows the listing):

app-of-apps/dev/application.yml
app-of-apps/dev/kustomization.yml
stream-pub/dev/application.yml
stream-pub/dev/config-map.yml
stream-pub/dev/deployment.yml
stream-pub/dev/external-secret-db-password.yml # external secret manifest
stream-pub/dev/kustomization.yml
stream-pub/dev/service.yml
stream-sub/dev/application.yml
stream-sub/dev/config-map.yml
stream-sub/dev/deployment.yml
stream-sub/dev/external-secret-db-password.yml # external secret manifest
stream-sub/dev/kustomization.yml
stream-sub/dev/service.yml
CODEOWNERS
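
A minimal sketch of what external-secret-db-password.yml might contain, assuming the External Secrets Operator (external-secrets.io) with a pre-existing ClusterSecretStore; the store name and remote key path are assumptions for illustration:

# sketch: stream-pub/dev/external-secret-db-password.yml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: stream-pub-db-password
  namespace: notifications
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager   # assumed ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: stream-pub-db-password   # the local Kubernetes Secret that gets created
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: notifications/stream-pub/db-password   # assumed remote key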

The ExternalSecret CR is just a pointer to some value(s) that need to be created locally for usage by an application. Now our deployment manifest can EASILY bring those values in as environment variables, leveraging envFrom.secretRef within the deployment manifest! More info here: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#configure-all-key-value-pairs-in-a-secret-as-container-environment-variables
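
In the deployment manifest, that wiring looks roughly like the below (most Deployment fields are omitted for brevity):

# sketch: the relevant slice of stream-pub/dev/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stream-pub
  namespace: notifications
spec:
  # ...replicas/selector/labels omitted
  template:
    spec:
      containers:
        - name: stream-pub
          image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/stream-pub:0c5c8a37
          envFrom:
            - secretRef:
                name: stream-pub-db-password   # Secret created by the ExternalSecret above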

Wrapping Up

As engineers, we should constantly evolve not only our technology, but also our processes for managing it! There is some really cool tooling within the Kubernetes ecosystem for effectively managing a platform tailored to your specific business (sometimes too much tooling). With release management being a pillar of software delivery, we should always be willing to embrace change, and GitOps is a massive disruption to our traditional way of working! Rather than trying to shape GitOps tooling to our traditional way of managing releases, we should lean into the principles of GitOps. Do yourself a favor if you’re looking to adopt GitOps and embrace it! Go Argo!!!


Jon McLean

YAML Janitor/DevOps Engineer with a love for systems that make sense, simplistically