Building ArgoCD Ecosystem With Secret Management — The GitOps Way (Part I)

Ron Golberg
Transmit Security Engineering
7 min read · Nov 8, 2022

Intro

So, you’ve decided that you want to use ArgoCD as your main GitOps continuous delivery tool for Kubernetes.

And you are already using Terraform in your organization for automating and managing your infrastructure.

But now you’re facing another challenge: how should you deploy it?

First we went to the official ArgoCD “getting started” doc and installed it using kubectl as described there. Once we saw that everything worked fine for initial testing purposes, we wanted to establish a stable bootstrap process.

Design

Our main goal was to achieve end-to-end automated installation and configuration of ArgoCD and its essential resources, so that a developer can start working with it right away, including secrets usage and management.

And this would be a good stage to mention some of the tools and infrastructure that we’re using:

  • AWS for hosting ArgoCD
  • Hashicorp Terraform through Terraform Cloud and env0 — IaC
  • Hashicorp Vault — secrets management
  • Atlassian Bitbucket — source control

In our organization, ArgoCD would be used by ~10 development teams deploying across the three big public cloud vendors (AWS, GCP, and Azure). Each team uses its own development stack and its own environments with separate Kubernetes clusters, and multiple environments keep the development lifecycle separate from production.

So these were our main options:

One main ArgoCD controlling the remote clusters of each team (one ArgoCD for all environments)

Pros

  • Developer experience is good
  • Agile per team requirements
  • Team isolation
  • Ease of management

Cons

  • Single point of access to all environments — security concerns
  • Non standard implementation

ArgoCD cluster per team per environment

Pros

  • Developer experience is good
  • Agile per team requirements
  • Team isolation

Cons

  • Management overhead
  • Cost

ArgoCD per team per environment, on shared clusters separated by namespace

Pros

  • Developer experience is good
  • Agile per team requirements
  • Team isolation

Cons

  • Management overhead
  • Implementation can be cumbersome (Argo namespace is needed) — same can be achieved with ArgoCD Projects instead
  • Point of failure — multiple teams use same cluster

Conclusion:

After discussing the pros and cons of the main options above (and others that we didn’t mention here), and in order to deliver the best and fastest value for our developers, we decided to divide our project into two stages:

  1. Delivery of an ArgoCD cluster for each team in each environment without consideration for high availability (2nd option)
  2. Aggregation of the Argo Clusters into a single cluster per environment with high availability, namespace separated (3rd option)

This way, every team will have full ownership and control over its own product deployment while staying isolated and secure, with no impact on other products, and the infrastructure solution is delivered on time, while it is still relevant.

In this walk-through we’ll focus on the first stage.

Prerequisites

Assuming you are already using Terraform with modules that create all of your relevant EKS infrastructure, we’ll proceed with the prerequisites and hands-on steps:

The list of providers we are using and their versions:

terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 2.2"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.5.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.11"
    }
  }
}
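
The kubernetes and helm providers also need credentials for the EKS cluster that your modules create. Here is a minimal sketch of that wiring, assuming the AWS provider is already configured and the cluster name is available as var.cluster_name (the data source names are illustrative):

# Illustrative wiring of the kubernetes/helm providers to the EKS cluster.
# Assumes the AWS provider is configured and var.cluster_name exists.
data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}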

Create namespaces for the ArgoCD installation and, optionally, for the monitoring infrastructure we’ll add later:

resource "kubernetes_namespace" "argocd_ns" {
metadata {
name = "argocd"
}
}

resource "kubernetes_namespace" "monitoring" {
metadata {
name = "monitoring"
}
}

Create a secret for ArgoCD admin:

resource "random_password" "argocd" {
length = 16
special = true
}

If you use the Vault integration with Terraform, it’s recommended to upload the generated password to Vault for later use, keeping your admin secret secure.

The output value can be reached using:

${random_password.argocd.result}
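
A minimal sketch of that write, assuming the Vault provider is configured and a KV secrets engine is mounted at secret/ (the path here is illustrative, pick whatever fits your Vault layout):

# Illustrative only: store the generated admin password in Vault
# so it can be retrieved later. Mount and path names are assumptions.
resource "vault_generic_secret" "argocd_admin" {
  path = "secret/argocd/${var.cluster_name}/admin"

  data_json = jsonencode({
    password = random_password.argocd.result
  })
}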

For Bitbucket access you’ll also need to create an SSH keypair, upload the private key to Vault, and add the public key to the relevant project to grant access to it. Make sure Terraform can read this private key, since we’ll need to pass it to ArgoCD later.
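
Here is a sketch of that flow; the resource names, Vault path, and key algorithm are illustrative, but the data source name repo_creds and the ssh_private key match what the helm_release below expects:

# Illustrative sketch: generate the Bitbucket SSH keypair, store the private
# key in Vault, and read it back for the ArgoCD helm_release. Paths and
# resource names are assumptions; adjust them to your Vault layout.
resource "tls_private_key" "bitbucket" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "vault_generic_secret" "repo_creds" {
  path = "secret/argocd/${var.cluster_name}/repo_creds"

  data_json = jsonencode({
    ssh_private = tls_private_key.bitbucket.private_key_pem
    ssh_public  = tls_private_key.bitbucket.public_key_openssh
  })
}

# Read it back the same way the helm_release below does.
data "vault_generic_secret" "repo_creds" {
  path = vault_generic_secret.repo_creds.path
}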

Now we’re ready to install and configure ArgoCD.

Installation

For the installation of Argo itself we’ll use the Terraform helm_release resource.

The chart we’re going to use is the official community-supported ArgoCD Helm chart, with references to the relevant chart values in values.yaml. The helm_release looks like this:

resource "helm_release" "argocd" {
description = "Resource is responsible for installing ArgoCD, setting default Repos with authentication, setting default admin pass"
name = "argocd"
namespace = kubernetes_namespace.argocd_ns.id
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
version = "4.10.5"
// We'll focus on values in later stage, you can remove it for first use
values = [
templatefile("${path.module}/argo-values.yaml", {
cluster_name = "${var.cluster_name}",
domain_name = "${var.domain_name}",
subnet_list = "${local.formatted_private_subnets}",
arn_acm_cert = "${var.arn_acm_cert}",
slack-token = "${var.slack-token}",
})
]
cleanup_on_fail = false
set {
name = "global.image.repository"
value = "quay.io/argoproj/argocd"
}
// Argo admin password
set {
name = "configs.secret.argocdServerAdminPassword"
value = "${bcrypt(random_password.argocd.result)}"
}
// Git repo we want to add in as default, must be allowed for the public key added in BitBucket
set {
name = "configs.repositories.infra.url"
value = "git@bitbucket.org:test/infra.git"
}
set {
name = "configs.credentialTemplates.infra.url"
value = "git@bitbucket.org:test/infra.git"
}
// Upload private SSH Key of the pair
set {
name = "configs.credentialTemplates.delivery.sshPrivateKey"
value = "${data.vault_generic_secret.repo_creds.data["ssh_private"]}"
}
wait = true
depends_on = [module.eks, kubernetes_secret.vault_secret, helm_release.prom_crds]
}

Now, after running the above deployment, you should get a fully working ArgoCD with the default repo defined and basic authentication using the admin user and the password generated in random_password.argocd.result.

But we wanted more: for us, that meant having as many features as we could get right from the bootstrap.

Configuration

So we’ll head back to the values section in our helm release and spice it up with some of the built-in features (available here).

First, create a template file in your Terraform module folder and add all of the values you want to set. If any parameters need to come from Terraform, you can pass them in and reference them within the template file, e.g. ${cluster_name}.

example:

values = [
  templatefile("${path.module}/argo-values.yaml", {
    cluster_name = "${var.cluster_name}",
    domain_name  = "${var.domain_name}",
    subnet_list  = "${local.formatted_private_subnets}",
    arn_acm_cert = "${var.arn_acm_cert}",
    slack-token  = "${var.slack-token}",
  })
]
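
For completeness, local.formatted_private_subnets is simply a comma-separated string of the private subnet IDs, since that’s what the ALB subnets annotation shown later expects. A possible definition (the variable name is an assumption, take the list from wherever your VPC/EKS module exposes the subnets):

# Assumption: the private subnet IDs are available as a list variable.
# The ALB "subnets" annotation expects a single comma-separated string.
locals {
  formatted_private_subnets = join(",", var.private_subnet_ids)
}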

Here are some of the suggested configurations as Terraform templatefile implementations:

Resource limitation:

server:
  resources:
    limits:
      cpu: 700m
      memory: 900Mi
    requests:
      cpu: 100m
      memory: 256Mi

Prometheus ServiceMonitor support:

server:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      interval: 300s
      namespace: monitoring
      additionalLabels:
        release: monitoring-${cluster_name}
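
Note that the ServiceMonitor CRD must already exist in the cluster for this to apply, which is why the helm_release above has helm_release.prom_crds in its depends_on. A possible sketch of that release, using the community CRDs-only chart (the chart choice and values are an assumption, not necessarily what we run):

# Illustrative: install only the Prometheus Operator CRDs so that the
# ArgoCD chart's ServiceMonitor objects can be applied.
resource "helm_release" "prom_crds" {
  name       = "prometheus-operator-crds"
  namespace  = kubernetes_namespace.monitoring.id
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "prometheus-operator-crds"
  wait       = true
}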

Remote execution in ArgoCD (UI Web based terminal):

server:
  config:
    exec.enabled: "true"

ArgoCD ingress configuration, using ALB:

Note: in order to make this work, you must have the external-dns chart installed. You can handle it before the ArgoCD installation (another helm_release, see the sketch after the values block below) or deploy it as an ArgoCD application right after Argo is up and running, making the ingress wait for it.

Also, we’re using ACM for certificate management.

server:
  service:
    type: NodePort
  ingress:
    enabled: true
    hosts:
      - ${cluster_name}.${domain_name}
    ingressClassName: alb
    paths:
      - /
    pathType: ImplementationSpecific
    extraPaths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: argocd-server
            port:
              number: 443
    labels:
      app: argocd-ingress
    annotations:
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/target-type: instance
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/healthcheck-path: "/healthz"
      alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
      alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
      alb.ingress.kubernetes.io/certificate-arn: ${arn_acm_cert}
      external-dns.alpha.kubernetes.io/hostname: ${cluster_name}.${domain_name}
      alb.ingress.kubernetes.io/subnets: ${subnet_list}
    https: true
    tls:
      - hosts:
          - ${cluster_name}.${domain_name}
  ingressGrpc:
    enabled: true
    isAWSALB: true
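
As for the external-dns dependency mentioned in the note above, a pre-ArgoCD helm_release could look roughly like this; the chart repository, namespace, and values are an assumption (our actual setup may differ), and the IAM permissions external-dns needs for Route 53 are out of scope here:

# Illustrative external-dns installation so the ingress hostname gets a DNS
# record created automatically. IAM access to Route 53 (e.g. via IRSA) is
# assumed to be handled elsewhere.
resource "helm_release" "external_dns" {
  name       = "external-dns"
  namespace  = "kube-system"
  repository = "https://kubernetes-sigs.github.io/external-dns/"
  chart      = "external-dns"

  set {
    name  = "provider"
    value = "aws"
  }

  set {
    name  = "domainFilters[0]"
    value = var.domain_name
  }
}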

If you’re interested in notifications and Slack integration, you can look at the example config here; sorry, but this one was too long to post inline.

In this post, you have learned how to do a basic installation and configuration of an ArgoCD cluster using Terraform and templates.

In our next blog, in order to deliver an environment ready for use by our developers, we’ll focus on Vault secrets integration with ArgoCD, so make sure you read the second part.

Also, stay tuned for the upcoming posts about the next implementation stages of this project, including namespace-separated ArgoCD clusters, HA, SSO, ArgoCD ApplicationSets, App of Apps best practices, and much more.
