Tips for Deploying ArgoCD Declaratively with Terraform

Craig Bowers
Zencore Engineering
Aug 10, 2023

ArgoCD is a popular GitOps tool that allows easy deployment of applications to one or many Kubernetes clusters. I’ve deployed ArgoCD on a few projects now, and we’ve settled on the same pattern each time. While Argo is a flexible tool and provides more features than I’ll cover here, I’m going to focus on the foundational configurations that let you get Argo deployed rapidly.

This design is best suited for small teams that will be managing Argo. If you have a larger team of Argo operators/users, there are additional configurations to consider, such as Argo RBAC and SSO integration. Additionally, if you have a large fleet of ephemeral clusters, other designs should be considered.

Deployment Method

99% of the time I advise against using Terraform to deploy an application, specifically via the HELM provider. There are better ways to deploy Kubernetes applications than Terraform. If you’re reading this, you probably know Argo is one of them :)

However, when it comes to deploying Argo itself, it’s actually a good use case for the HELM provider. One reason is that Argo needs Kubernetes cluster information to establish secure connections to your clusters. Assuming you’re provisioning your Kubernetes clusters with Terraform, that information is readily available to you.
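For context, here’s a minimal sketch of how the providers might be wired up when the host cluster is a GKE cluster managed in the same Terraform configuration (the data source names and alias here are hypothetical, not from our actual code):

# Hypothetical data sources for the cluster that will host ArgoCD.
data "google_client_config" "default" {}

data "google_container_cluster" "mgmt_cluster" {
  name     = "mgmt-cluster"
  location = "us-central1"
}

provider "helm" {
  alias = "alias"

  kubernetes {
    host                   = "https://${data.google_container_cluster.mgmt_cluster.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(data.google_container_cluster.mgmt_cluster.master_auth[0].cluster_ca_certificate)
  }
}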

NOTE: Our implementations are on Google Cloud and we use Workload Identity for Argo to authenticate against clusters in different projects. Therefore, we do not need to provide any auth details in the cluster configuration. This may be required on other cloud providers.
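The Workload Identity setup itself is outside the scope of this post, but roughly it looks like this (project IDs, names, and the chosen role below are illustrative assumptions): the application controller’s Kubernetes service account impersonates a Google service account, which is granted GKE access in each target project.

# Hypothetical Google service account used by the Argo application controller.
resource "google_service_account" "argocd" {
  project    = "mgmt-project"
  account_id = "argocd-controller"
}

# Allow the argocd-application-controller KSA to impersonate it via Workload Identity.
# The KSA also needs the iam.gke.io/gcp-service-account annotation pointing at this GSA.
resource "google_service_account_iam_member" "argocd_wi" {
  service_account_id = google_service_account.argocd.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:mgmt-project.svc.id.goog[argocd/argocd-application-controller]"
}

# Grant the GSA access to clusters in each target project.
resource "google_project_iam_member" "argocd_gke" {
  for_each = toset(["dev-project", "stage-project", "prod-project"])
  project  = each.value
  role     = "roles/container.admin"
  member   = "serviceAccount:${google_service_account.argocd.email}"
}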

Apart from access control, the foundational configurations that need to be performed are clusters, projects, and root applications.

Kubernetes Clusters

Configuring the target Kubernetes clusters is one of the first things you should do. For each cluster you need to supply a name, a URL, and a cluster CA certificate for TLS connections. Each cluster added declaratively, or from the Argo CLI for that matter, creates a secret in the Argo namespace. The cluster that hosts the ArgoCD server is referred to as the “in-cluster” cluster. We explicitly add this cluster declaratively for reasons I’ll explain in a bit.

We’ve created a Terraform module that uses a template file, cluster-config.tpl, to generate all cluster configurations. The cluster information is a list variable that looks like this and is passed into the module:

module "argocd" {
providers = {
kubernetes = kubernetes.alias
helm = helm.alias
}

source = "../../../modules/argocd"
namespace = var.argocd_namespace

clusters = [
{
name = "in-cluster"
url = "https://kubernetes.default.svc"
insecure = "false"
caData = ""
},
{
name = data.google_container_cluster.dev_cluster.name
url = "https://${data.google_container_cluster.dev_cluster.endpoint}"
insecure = "false"
caData = data.google_container_cluster.dev_cluster.master_auth[0].cluster_ca_certificate
},
{
name = data.google_container_cluster.stage_cluster.name
url = "https://${data.google_container_cluster.stage_cluster.endpoint}"
insecure = "false"
caData = data.google_container_cluster.stage_cluster.master_auth[0].cluster_ca_certificate
},
{
name = data.google_container_cluster.prod_cluster.name
url = "https://${data.google_container_cluster.prod_cluster.endpoint}"
insecure = "false"
caData = data.google_container_cluster.prod_cluster.master_auth[0].cluster_ca_certificate
},
]
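Inside the module, the matching variable declaration might look something like this (a sketch; the exact type is up to you):

# Declares the clusters input the module above receives.
variable "clusters" {
  description = "Kubernetes clusters that ArgoCD will manage."
  type = list(object({
    name     = string
    url      = string
    insecure = string
    caData   = string
  }))
}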

In the module, the cluster-config.tpl template gets rendered and passed to the HELM chart as a values file:

resource "helm_release" "argo-cd" {
provider = helm

name = "argocd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
version = "5.28.2"
namespace = var.namespace

values = [
templatefile("${path.module}/cluster-config.tpl", { clusters = var.clusters })
]
}

The cluster-config.tpl template generates a config for each cluster by iterating over the list:

configs:
  clusterCredentials:
  %{~ for cluster in clusters ~}
  - name: ${cluster.name}
    server: ${cluster.url}
    config:
      tlsClientConfig:
        insecure: ${cluster.insecure}
        %{~ if cluster.name != "in-cluster" ~}
        caData: ${cluster.caData}
        %{~ endif ~}
  %{~ endfor ~}

You can view the Argo HELM chart values file directly for cluster configuration options.

Cluster Labels

Most of our Argo deployments are ApplicationSets, and we use the Cluster Generator to identify target clusters for the deployment. As mentioned previously, each cluster configuration gets stored as a Kubernetes secret in the Argo namespace. Like any Kubernetes resource, you can add labels and annotations to it.

Taking the cluster variable configuration one step further, we add cluster-specific information in the form of labels on each cluster: for example, an environment label, or an indicator that the cluster is an external cluster.

We update the Terraform variable to include a new field called “env”.

clusters = [
...
{
name = data.google_container_cluster.dev_cluster.name
url = "https://${data.google_container_cluster.dev_cluster.endpoint}"
insecure = "false"
caData = data.google_container_cluster.dev_cluster.master_auth[0].cluster_ca_certificate
env = "dev"
},
...

The cluster-config.tpl template would then look like this (notice the labels section):

configs:
  clusterCredentials:
  %{~ for cluster in clusters ~}
  - name: ${cluster.name}
    server: ${cluster.url}
    labels: {
      env: "${cluster.env}",
      %{~ if cluster.name == "in-cluster" ~}
      shared-cluster: "true",
      %{~ endif ~}
      %{~ if cluster.name != "in-cluster" ~}
      managed-by-argocd: "true",
      %{~ endif ~}
    }
    config:
      tlsClientConfig:
        insecure: ${cluster.insecure}
        %{~ if cluster.name != "in-cluster" ~}
        caData: ${cluster.caData}
        %{~ endif ~}
  %{~ endfor ~}

Earlier I mentioned explicitly including the “in-cluster” cluster in the clusters configuration. We do that to deploy apps that are considered shared tooling; for us, GitLab Runners are a shared resource.

Once deployed, the secrets look something like this:

apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argocd
    argocd.argoproj.io/secret-type: cluster
    env: dev
    managed-by-argocd: "true"
  name: argocd-cluster-<cluster_name>
  namespace: argocd
data:
  config: <base64 encoded>
  name: <base64 encoded>
  server: <base64 encoded>
type: Opaque

Now that we have labels on the clusters, we can use them in different ways. First we’ll use the “managed-by-argocd” label to target external clusters, i.e. everything except the “in-cluster” cluster.

The generator section of the ApplicationSet would look like this:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            managed-by-argocd: "true"

For our shared tooling deployments, the matchLabels would be changed to:

  generators:
    - clusters:
        selector:
          matchLabels:
            shared-cluster: "true"

Next we’ll use the environment label to identify environment-specific configurations, such as HELM repo release channels and values files.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  ...
  template:
    metadata:
      name: my-app
    spec:
      project: my-project
      sources:
        - repoURL: https://gitlab.com/api/v4/projects/12345678/packages/helm/{{metadata.labels.env}}
          targetRevision: '>0.0.0'
          chart: my-app
          helm:
            valueFiles:
              - values-{{metadata.labels.env}}.yaml
      destination:
        server: '{{server}}'
        namespace: my-namespace

We’ve been using GitLab, and one great feature is HELM repo release channels. In the spec.template.spec.sources[0].repoURL parameter, we can dynamically set the HELM release channel based on the environment. This means that when the app is deployed to the dev cluster, only HELM charts in the dev release channel get deployed. This allows us to set targetRevision to anything greater than ‘0.0.0’, and Argo will automatically deploy new charts. No need to update the ApplicationSet for each release!

- repoURL: https://gitlab.com/api/v4/projects/12345678/packages/helm/{{metadata.labels.env}}

You can also see how we use the cluster label to dynamically select which values file is included with this deployment, in spec.template.spec.sources[0].helm.valueFiles[0]:

          helm:
            valueFiles:
              - values-{{metadata.labels.env}}.yaml

Adding cluster labels can provide some great flexibility into your Argo deployments. We love using them!

Projects and Root Applications

In Argo, projects create a logical separation of target Kubernetes clusters, source repositories (Git or HELM), Kubernetes resources (allow and deny lists), and Kubernetes namespaces. For each application that Argo deploys, these items need to be explicitly granted in the project; without that, your application will not get deployed.

A root application is deployed to follow the app-of-apps pattern. This means we deploy an Argo Application that points to a Git repo, and that Git repo is where you store all your Argo Application and ApplicationSet manifests. By adding a new Application or ApplicationSet to Git, Argo automatically picks it up and deploys it.
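As a rough sketch (the paths here are hypothetical), such a repo might be laid out like this, with each root application pointing at one of the top-level directories:

app-of-apps/
├── my-app/
│   ├── my-app-appset.yaml           # ApplicationSet targeting external clusters
│   └── third-party-app.yaml
└── shared-tooling/
    └── gitlab-runners-appset.yaml   # targets the "in-cluster" cluster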

Deployed as a separate HELM chart, the argocd-apps chart allows us to configure our projects and root applications. Once again we use a Terraform template file, argo-apps.tpl, that gets passed into the argocd-apps HELM chart as a values file.

In the Argo module we deploy the argo-apps chart as:

resource "helm_release" "argo-cd-apps" {
provider = helm

name = "argocd-apps"
repository = "https://argoproj.github.io/argo-helm"
chart = "argocd-apps"
version = "0.0.3"
namespace = var.namespace

values = [
templatefile("${path.module}/argo-apps.tpl", { clusters = var.clusters })
]

depends_on = [helm_release.argo-cd]
}

The argo-apps.tpl is a bit larger than the cluster config because there is more to configure for projects.

NOTE: the project namespace is always the namespace where Argo is deployed.

projects:
  - name: my-project
    namespace: argocd
    clusterResourceWhitelist:
      - group: '*'
        kind: '*'
    namespaceResourceWhitelist:
      - group: '*'
        kind: '*'
    description: my awesome applications
    sourceRepos:
      - 'https://gitlab.com/my-company/my-app/argocd/app-of-apps.git'
      - 'https://charts.jetstack.io' # for cert-manager
      - 'https://kubernetes.github.io/ingress-nginx'
      - 'https://charts.bitnami.com/bitnami' # for redis
      - 'https://prometheus-community.github.io/helm-charts' # for kube-state-metrics
      - 'https://gitlab.com/api/v4/projects/12345678/packages/helm/dev'
      - 'https://gitlab.com/api/v4/projects/12345678/packages/helm/stage'
      - 'https://gitlab.com/api/v4/projects/12345678/packages/helm/prod'
    destinations:
      - namespace: argocd
        server: https://kubernetes.default.svc
        name: in-cluster
      - namespace: my-app
        server: https://kubernetes.default.svc
        name: in-cluster
      %{~ for cluster in clusters ~}
      %{~ if cluster.name != "in-cluster" ~}
      - namespace: my-app
        server: ${cluster.url}
        name: ${cluster.name}
      - namespace: my-app2
        server: ${cluster.url}
        name: ${cluster.name}
      - namespace: my-app3
        server: ${cluster.url}
        name: ${cluster.name}
      %{~ endif ~}
      %{~ endfor ~}

The whitelist parameters allow you to specify which Kubernetes resources you want to allow at the cluster level and the namespace level. We allow everything, since Argo is our sole means of deploying to Kubernetes.

If you need to exclude certain resources from developer deployments, say an Ingress or the Gateway API, you can add a blacklist using the “clusterResourceBlacklist” and “namespaceResourceBlacklist” parameters (a sketch follows after the snippet below). You would then add another project config, managed by a different team, that allows these resources.

    clusterResourceWhitelist:
      - group: '*'
        kind: '*'
    namespaceResourceWhitelist:
      - group: '*'
        kind: '*'
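And here is a sketch of the blacklist idea mentioned above (the project name and the specific resource kinds are illustrative, not from our config):

projects:
  - name: dev-team-project
    namespace: argocd
    # Block cluster-scoped Gateway API resources from this project.
    clusterResourceBlacklist:
      - group: 'gateway.networking.k8s.io'
        kind: 'GatewayClass'
    # Block namespaced Ingress and Gateway resources from this project.
    namespaceResourceBlacklist:
      - group: 'networking.k8s.io'
        kind: 'Ingress'
      - group: 'gateway.networking.k8s.io'
        kind: 'Gateway'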

The sourceRepos section is where you explicitly whitelist the repos (Git and HELM) that this project is allowed to pull from. Most of these are public repositories.

    sourceRepos:
      - 'https://gitlab.com/my-company/my-app/argocd/app-of-apps.git'
      - 'https://charts.jetstack.io' # for cert-manager
      - 'https://kubernetes.github.io/ingress-nginx'
      - 'https://charts.bitnami.com/bitnami' # for redis
      - 'https://prometheus-community.github.io/helm-charts' # for kube-state-metrics
      - 'https://gitlab.com/api/v4/projects/12345678/packages/helm/dev'
      - 'https://gitlab.com/api/v4/projects/12345678/packages/helm/stage'
      - 'https://gitlab.com/api/v4/projects/12345678/packages/helm/prod'

NOTE: For private repositories, such as the GitLab HELM repo or your app-of-apps repo, you need to explicitly configure them under configs.cm.repositories in the argo-cd HELM chart. In this configuration you add the name of the Kubernetes secret that holds the credentials for accessing the private repository. This is not well documented, but these are the repositories that show up under Settings → Repositories in the Argo UI. You really only need to configure private repositories here.

configs:
  cm:
    repositories: |
      - url: https://gitlab.com/api/v4/projects/12345678/packages/helm/dev
        name: "Dev HELM charts"
        type: helm
        passwordSecret:
          name: gitlab-repo-creds
          key: password
        usernameSecret:
          name: gitlab-repo-creds
          key: username
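The gitlab-repo-creds secret referenced above is just a regular Kubernetes secret in the Argo namespace. A sketch, with placeholder values:

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-repo-creds
  namespace: argocd
type: Opaque
stringData:
  # e.g. a GitLab deploy token with read access to the package registry
  username: <deploy-token-username>
  password: <deploy-token-password>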

In the destinations section we use logic similar to the cluster config template to add our various namespaces to each cluster. If you have different workloads targeting different clusters, your configuration will look a bit different.

    destinations:
      - namespace: argocd
        server: https://kubernetes.default.svc
        name: in-cluster
      - namespace: my-app
        server: https://kubernetes.default.svc
        name: in-cluster
      %{~ for cluster in clusters ~}
      %{~ if cluster.name != "in-cluster" ~}
      - namespace: my-app
        server: ${cluster.url}
        name: ${cluster.name}
      - namespace: my-app2
        server: ${cluster.url}
        name: ${cluster.name}
      - namespace: my-app3
        server: ${cluster.url}
        name: ${cluster.name}
      %{~ endif ~}
      %{~ endfor ~}

Now we can configure our root application(s). This is pretty straightforward: we specify the target project, source repo, repo path, and targetRevision. The repo path is not required, but it comes in handy if you use a single repo to hold multiple root applications, each in its own path. It’s likely that your app-of-apps repo is private, so it will need to be included in configs.cm.repositories in the argo-cd HELM chart mentioned above to provide repo credentials.

applications:
  - name: my-app-root
    namespace: argocd
    project: my-project
    source:
      repoURL: https://gitlab.com/my-corp/my-app/argocd/app-of-apps.git
      path: some-path-in-the-repo
      targetRevision: HEAD
    destination:
      namespace: my-namespace
      name: in-cluster

NOTE: If you delete the root application from Argo, you will have to re-run the Terraform pipeline to re-install it. Other applications that are deleted will automatically get re-deployed by Argo.

External HELM Values files

A relatively recent ArgoCD feature lets you reference a HELM values file that lives in a different Git repo from the chart itself. This is useful, for example, when you deploy 3rd party charts from the internet and want to supply your own values file to customize the deployment.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: third-party-app
  namespace: argocd
spec:
  ...
  template:
    metadata:
      name: third-party-app
    spec:
      project: my-project
      sources:
        - repoURL: https://helm.third-party-app.com/
          targetRevision: '2.1.0'
          chart: third-party-chart
          helm:
            valueFiles:
              - $some-name/path/to/my-values.yaml
        - repoURL: https://gitlab.com/my-corp/my-app/argocd/app-of-apps.git
          targetRevision: HEAD
          ref: some-name
      destination:
        server: '{{server}}'
        namespace: my-namespace

We deploy the chart just as we do any other chart. The difference here is that we’ve added a 2nd repoURL along with a ref whose name is up to you, in this example “some-name”.

        - repoURL: https://gitlab.com/my-corp/my-app/argocd/app-of-apps.git
          targetRevision: HEAD
          ref: some-name

Then, in the 3rd party chart source, we reference the other Git repo through the named ref, “some-name”, followed by the path to the values file in that external repo.

      sources:
        - repoURL: https://helm.third-party-app.com/
          targetRevision: '2.1.0'
          chart: third-party-chart
          helm:
            valueFiles:
              - $some-name/path/to/my-values.yaml

We love using these tricks in our ArgoCD deployments. They provide some nice flexibility and integration with existing toolsets and workflows. Even if you deploy Argo in the per-cluster pattern, these tips can still be useful. If you have other tips or tricks that you use in your ArgoCD deployments, I’d love to hear them!
