Kubernetes-Native Build & Release Pipelines with Tekton and ArgoCD


How Did We Get Here?

In my previous tech post, I talked about how I wanted to create a Kubernetes-native build and release pipeline using Tekton and ArgoCD, and I walked you through how to install Tekton and ArgoCD on your Kubernetes cluster with Ambassador Edge Stack (with TLS) as your API Gateway.

Today, I will talk about the corresponding pipeline. In this post, I will:

  • Revisit the reference architecture from my previous post
  • Cover basic Tekton and ArgoCD terms and concepts
  • Walk you through the changes you need to make to the example code to run the pipeline on your own
  • Set up and run the example pipeline

Getting Started

Assumption: You have a Kubernetes cluster running on your favorite cloud provider, all set up with Ambassador Edge Stack, ArgoCD, and Tekton, per my instructions here. If not, please do that first, so that you can run the example code in this post.

Assuming you’ve got your cluster up and running, go ahead and clone the two GitHub repos that I set up for this example: the Tekton pipeline repo and the 2048 game app repo.

Once you’ve cloned them, I’ll need you to update some of the files, which I’ll guide you through a little later on. Make sure that you’ve set up two corresponding remote Git repos to push your code changes to, since, as GitOps tools, Tekton and ArgoCD rely on the remote Git repos.

Reference Architecture Revisited

As I mentioned in my previous post, my setup is based on the reference architecture below, which comes from here.

Reference Architecture. Source: ibm-cloud-architecture

I wanted to achieve 3 key things:

  1. Use Tekton for my Dev pipeline (build using Kaniko, deploy using ArgoCD)
  2. Use ArgoCD to deploy to my non-Dev clusters (e.g. QA, Prod)
  3. Use ArgoCD to deploy my Tekton pipeline

Tekton Primer

Before we start using Tekton, we should first cover some key concepts:

  • PipelineResources
  • Tasks
  • Pipelines & PipelineRuns
  • Triggers (we’re using TriggerTemplates, TriggerBindings, and EventListeners)

I’ll go over each of these at a high level. For more detailed info, check out the links to the Tekton docs in the References section at the end of this post.

PipelineResources

When you create a Kubernetes-centric CI/CD pipeline, at a minimum, you’ll want it to:

  • Build a Docker image from a Dockerfile in your remote Git repo
  • Publish it to a Docker registry somewhere

In Tekton, you define your remote Git repo and Docker registry as PipelineResources. Below is a sample PipelineResource definition:

Sample Tekton PipelineResource definition
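In YAML form, a pair of PipelineResources (one for the Git repo, one for the Docker image) can look roughly like this; the names and URLs below are placeholders, not the exact values from the example repo:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-git-repo
spec:
  type: git
  params:
    - name: url
      value: <git_repo_url>    # e.g. https://github.com/d0-labs/tekton-pipeline-example-app
    - name: revision
      value: master
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-docker-image
spec:
  type: image
  params:
    - name: url
      value: <docker_registry_name>/<image_name>:latest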

You might be wondering how you authenticate your PipelineResources, and the answer to that is by using Kubernetes Secrets. Tekton supports different ways to authenticate. For our example, we’ll be using basic auth.

Here’s a sample Basic Auth Secret definition for a Git repo:

Sample Git repo Secret definition for Tekton basic auth
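Roughly speaking, it looks something like this (the Secret name matches the one created later in this post; the annotation URL and credentials are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: basic-git-app-repo-user-pass
  annotations:
    # Tells Tekton which Git repo/host these credentials are for
    tekton.dev/git-0: <git_repo_url>
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <personal_access_token>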

And here’s a sample Basic Auth Secret definition for a Docker registry:

Sample Docker registry Secret definition for Tekton basic auth
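Again as a rough sketch (the Secret name matches the one created later in this post; the annotation URL and credentials are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: basic-docker-user-pass
  annotations:
    # Tells Tekton which Docker registry these credentials are for
    tekton.dev/docker-0: <docker_registry_url>
type: kubernetes.io/basic-auth
stringData:
  username: <service_principal_id>
  password: <service_principal_password>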

A few important things to note regarding Tekton authentication:

  • You need to include the tekton.dev/docker-0 and tekton.dev/git-0 annotations in your Secrets definition to tell Tekton that the authentication is related to the Tekton PipelineResources you’ve defined.
  • The Secrets type (at least for basic auth) must be kubernetes.io/basic-auth. Check out the Tekton docs for other supported auth types.
  • If you define more than one Secret of a particular type (e.g. secrets for more than one Git repo), you’ll need to increment the number in the annotation (e.g. use tekton.dev/git-1 when defining the second Git Secret).
  • The annotation must point to the URI of the resource (e.g. the Git repo or Docker registry).

Finally, you must associate your Secrets with a Kubernetes ServiceAccount, like this:
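(A minimal sketch; the ServiceAccount name build-bot is a placeholder, while the Secret names match the ones created later in this post.)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
  # The annotated Secrets that authenticate our PipelineResources
  - name: basic-git-app-repo-user-pass
  - name: basic-docker-user-pass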

The ServiceAccount is how the PipelineRun gains access to the Secrets used to authenticate the PipelineResources. More on PipelineRun below.

Task

A Task defines a step or series of steps that you would like to execute. The steps are executed in the order in which you define them in your Task, and each step must reference a container image, because the step runs inside that container. It makes perfect sense, when you think about it: since Tekton is Kubernetes-native, each step needs a container in which to execute. The container you choose depends on what your step does. For example, a step that builds and pushes a Docker image can use the Kaniko executor image, while a step that deploys via ArgoCD needs an image with the argocd CLI installed.

Tasks can reference values defined in standard Kubernetes ConfigMap and Secret objects, like this:
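(A minimal sketch; the ConfigMap name and keys are placeholders, while argocd-env-secret matches the Secret created later in this post.)

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-task
spec:
  steps:
    - name: print-config
      image: alpine:3.12
      env:
        # Value pulled from a ConfigMap (name and key are placeholders)
        - name: ARGOCD_SERVER
          valueFrom:
            configMapKeyRef:
              name: argocd-env-configmap
              key: ARGOCD_SERVER
        # Value pulled from a Secret
        - name: ARGOCD_USERNAME
          valueFrom:
            secretKeyRef:
              name: argocd-env-secret
              key: ARGOCD_USERNAME
      script: |
        #!/bin/sh
        echo "Will deploy via ArgoCD server $ARGOCD_SERVER as $ARGOCD_USERNAME"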

For convenience, I created some ConfigMaps to define some of the values used by our pipeline Tasks. ArgoCD server configs are defined in argocd-task-cm.yml, and Docker build configs are defined in build-task-cm.yml. More on that later.

I’ve defined two Tasks in our example Tekton pipeline:

1- Build task

  • Uses Kaniko to build the Docker image from the Dockerfile in your app repo and push it to your Docker registry.
  • You can check out our build Task definition (tasks/build-task.yml) in the example pipeline repo. A stripped-down sketch also follows the note below.

NOTE: Kaniko does not play nice with older Docker schema versions, per this GitHub issue, so if you’re referencing an old-ass Docker image in your Dockerfile, Kaniko will fail with a very non-descriptive unsupported status code 404; body: 404 page not found error.
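For reference, here’s a stripped-down sketch of what a Kaniko build Task can look like. It is not the exact Task from the example repo, and the resource names are placeholders; registry authentication comes in via the annotated Secret attached to the ServiceAccount, as described earlier.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-docker-image
spec:
  resources:
    inputs:
      - name: source-repo     # the git PipelineResource
        type: git
    outputs:
      - name: docker-image    # the image PipelineResource
        type: image
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      command:
        - /kaniko/executor
      args:
        # Build the Dockerfile at the root of the cloned repo, and push
        # the resulting image to the URL of the image PipelineResource
        - --dockerfile=$(resources.inputs.source-repo.path)/Dockerfile
        - --context=$(resources.inputs.source-repo.path)
        - --destination=$(resources.outputs.docker-image.url)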

2- Deploy task

  • Uses ArgoCD to manage the application deployment (to a Kubernetes Dev cluster).
  • You can check out our deploy Task definition here. A stripped-down sketch also follows below.
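Roughly, the deploy Task boils down to a step that runs the argocd CLI. In this sketch the ConfigMap name is a placeholder (the real values live in argocd-task-cm.yml), and for brevity the application name is taken as a Task param rather than from the ConfigMap:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: argocd-deploy
spec:
  params:
    - name: application-name
      type: string
      default: 2048-game-app
  steps:
    - name: sync-and-wait
      image: argoproj/argocd:latest
      envFrom:
        # ArgoCD server address and friends (ConfigMap name is a placeholder)
        - configMapRef:
            name: argocd-env-configmap
        # ARGOCD_USERNAME / ARGOCD_PASSWORD from the Secret created later in this post
        - secretRef:
            name: argocd-env-secret
      script: |
        #!/bin/sh
        set -e
        argocd login "$ARGOCD_SERVER" --username "$ARGOCD_USERNAME" --password "$ARGOCD_PASSWORD" --insecure
        argocd app sync $(params.application-name) --prune
        argocd app wait $(params.application-name) --health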

In Tekton, you can execute Tasks either on their own, or as part of a Pipeline. In our example, we’re using Pipelines, so that we can string together the Tasks we want to execute.

The beauty of Tekton Tasks is that they can be reused by other Pipelines. More on Pipelines in the next section.

Pipeline

A Pipeline is a collection of Tasks that you want to run as part of your workflow. Each Task in a Pipeline is executed in its own Kubernetes Pod, which means that by default, Tasks run in parallel. You can, however, specify the order in which your Tasks are run. For example, you can say that a task called deploy will only execute once a task called build is completed, by using runAfter, like this:
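(This is a rough sketch; the Task and resource names are placeholders carried over from the earlier sketches.)

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  resources:
    - name: source-repo
      type: git
    - name: docker-image
      type: image
  tasks:
    - name: build
      taskRef:
        name: build-docker-image
      resources:
        inputs:
          - name: source-repo
            resource: source-repo
        outputs:
          - name: docker-image
            resource: docker-image
    - name: deploy
      taskRef:
        name: argocd-deploy
      # deploy only starts once build has completed successfully
      runAfter:
        - build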

You can even get super-fancy with Finally tasks, which run regardless of whether or not your other Tasks succeeded.

Want to see a full Pipeline spec? Check out our definition here.

PipelineRun

Whereas a Pipeline specifies which Tasks to run and in which order to run them, a PipelineRun will actually execute those tasks in the order specified in the Pipeline definition.

Sample Tekton PipelineRun definition
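A minimal sketch of such a PipelineRun (the ServiceAccount, Pipeline, and PipelineResource names match the earlier sketches and are placeholders):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  # The ServiceAccount carries the annotated Secrets for the Git repo and Docker registry
  serviceAccountName: build-bot
  pipelineRef:
    name: build-and-deploy
  resources:
    - name: source-repo
      resourceRef:
        name: app-git-repo
    - name: docker-image
      resourceRef:
        name: app-docker-image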

The PipelineRun will also:

  • Provision the PipelineResources required by the Pipeline.
  • Gain access to the Secrets (the ones we defined to authenticate our Docker registry and Git repo) through its associated ServiceAccount (recall that we associated our two Secrets with our ServiceAccount).

Behind the scenes, a PipelineRun will actually generate a TaskRun (which, as you might correctly assume, is what actually executes a Task). The TaskRun then kicks off a Kubernetes Pod which runs the container specified in your Task’s step, along with the command/args/script you want to execute in that container:

How PipelineRun Works

Triggers (TriggerTemplates, TriggerBindings, EventListeners)

Triggers are a newer concept in Tekton, and they are really powerful. Before Triggers, you would have to kick off a pipeline manually, by running the PipelineRun YAML file using the kubectl command like this:

kubectl create -f <my_pipelineRun>.yml

But alas, we don’t need (or want) to do that, because we have Triggers!

Triggers allow us to define templates for our PipelineResources and PipelineRuns, and use Webhooks to trigger Tekton Events, which in turn kick off our Pipelines.

Check out the Tekton Trigger flow below from Tekton’s Trigger docs:

Tekton Trigger Flow. Source: tekton.dev/docs/triggers

I’ll provide a brief overview of key Triggers terminology below.

TriggerTemplates

  • TriggerTemplates act as a blueprint for creating resources
  • They can be used to create PipelineResources and PipelineRuns, as well as other resources
  • TriggerTemplates can be used to define parameters that can then be substituted anywhere within the resource template(s) being defined.
  • Example: A parameter called gitRevision defined within the TriggerTemplate body can be referenced in any of the TriggerTemplate’s resourcetemplates as $(tt.params.gitRevision). Note that the tt.params prefix is required. (See the sketch after this list.)
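As a rough sketch (the names here are placeholders, not the exact ones from build-deploy-trigger.yml), a TriggerTemplate that stamps out a git PipelineResource for each event might look like this:

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: build-deploy-template
spec:
  params:
    - name: gitRevision
      description: The commit SHA that triggered the webhook
    - name: gitRepositoryUrl
      description: The URL of the Git repo that triggered the webhook
  resourcetemplates:
    # A fresh git PipelineResource is created for each webhook event
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineResource
      metadata:
        generateName: source-repo-
      spec:
        type: git
        params:
          - name: revision
            value: $(tt.params.gitRevision)
          - name: url
            value: $(tt.params.gitRepositoryUrl)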

TriggerBindings

  • TriggerBindings capture fields from an event (e.g. a Webhook, as in our case), and store them as parameters
  • These parameters can be referenced by our TriggerTemplate (see the sketch below).
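A minimal sketch, assuming a GitHub push payload (the JSON paths will differ for other Git providers, as discussed later in this post):

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: build-deploy-binding
spec:
  params:
    # These paths assume a GitHub push payload; adjust them for your provider
    - name: gitRevision
      value: $(body.head_commit.id)
    - name: gitRepositoryUrl
      value: $(body.repository.url)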

EventListeners

  • EventListeners listen in on events (e.g. a Webhook, as in our case).
  • When an EventListener is created, Tekton also creates a corresponding Kubernetes Service listening on port 8080.
  • If you name your EventListener as my-event-listener-el, the corresponding Service is called el-my-event-listener-el (note the el- prefix).
  • When you create a Webhook, you need to send your data to this Service. As a result, you need a way to expose the Service to the outside world, so that the Webhook can reach it.
  • Since we’re using Ambassador as our API Gateway, we will expose the Service via an Ambassador Mapping (sketched below).
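Here’s a rough sketch of an EventListener plus the Ambassador Mapping that exposes its Service. The names (including the ServiceAccount and URL prefix) are placeholders, and depending on your Triggers version, template may take name: instead of ref:.

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: build-deploy-event-listener
spec:
  serviceAccountName: tekton-triggers-admin
  triggers:
    - name: build-deploy-trigger
      bindings:
        - ref: build-deploy-binding
      template:
        ref: build-deploy-template
---
# Ambassador Mapping exposing the el-build-deploy-event-listener Service created by Tekton
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: build-deploy-event-listener-mapping
spec:
  prefix: /build-deploy-mapping/
  service: el-build-deploy-event-listener:8080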

For a complete Triggers definition, including TriggerTemplate, TriggerBinding, EventListener, and Ambassador Mapping to expose the EventListener service, check out our example here.

ArgoCD Primer

Since ArgoCD is also part of our workflow, I’m going to give you a brief overview of some key terms and concepts used in our example.

Ability to Deploy to Multiple Clusters

The cool thing about ArgoCD is its ability to deploy to multiple Kubernetes clusters. This means that you don’t need to have ArgoCD installed in each cluster to which your app is being deployed.

Ideally, you’ll want to install ArgoCD (and Tekton too) on a completely separate Kubernetes cluster. One reason for doing this is that you don’t want to clutter your application clusters with unnecessary stuff. Imagine if you installed ArgoCD on your Dev cluster. If your Dev cluster went down for some reason, you wouldn’t be able to deploy to your QA and Prod clusters.

Another reason is that it helps keep cluster parity. Your Dev cluster should be set up the same as your QA cluster, which should be set up the same as your Prod cluster. Adding ArgoCD to one of your existing clusters takes away that parity and adds more moving parts and operational complexity.

Repo Registration

As a GitOps tool, ArgoCD is able to determine whether or not the application manifest you’ve deployed to your Kubernetes cluster matches up with the manifest that you’ve defined in version control. To do this, you must register your repo with ArgoCD, and associate that repo with your ArgoCD Application, using argocd repo add. More details on this command when we run our example later in this post.

argocd repo add <repo_url>

When a repo is registered with ArgoCD, it is added to a ConfigMap called argocd-cm, in the argocd namespace. ArgoCD also creates a Kubernetes Secret in the argocd namespace for each of the repos added to argocd-cm. The Secrets are named repo-<some_identifier>.

NOTE: The argocd CLI should have been installed as part of your cluster setup.

Application

An Application is an ArgoCD custom resource which is responsible for orchestrating the deployment of your application manifest to the target Kubernetes cluster. ArgoCD can deploy Kubernetes manifests using Kustomize, Helm charts, and a few other supported tools.

We’ll be using Kustomize for our example. In that case, all you need to do is specify the location of your kustomization.yml.

An Application is created using the command argocd app create. This creates an Application resource in the argocd namespace that looks something like this:

ArgoCD Sample Application Spec
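As a rough sketch, here’s what the Application resource for the 2048 game app can end up looking like (the repoURL, path, and namespace values match the example used later in this post; the destination server is a placeholder):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: 2048-game-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/d0-labs/tekton-pipeline-example-app
    targetRevision: HEAD
    path: kustomize
  destination:
    # Use the SERVER value of your target cluster (or kubernetes.default.svc for the same cluster)
    server: https://kubernetes.default.svc
    namespace: game-2048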

More details on this command when we run our example later in this post.

In our example, we will have two ArgoCD Applications:

  • Tekton Pipeline “app”
  • 2048 game app

Application Sync & Health

When an ArgoCD Application is first created, its state is OutOfSync. This means that what’s in the Git repo that the ArgoCD Application is pointing to doesn’t match up with what’s in the Kubernetes cluster. This makes sense, because creating an ArgoCD Application does not automagically deploy it to the target cluster.

OutOfSync Tekton App

To deploy the app to the target cluster, you run argocd app sync. At that point, the manifest defined in your remote Git repo is in sync with the manifest deployed to your Kubernetes cluster. More details on this command when we run our example later in this post.

Sometimes an Application will deploy successfully and will be healthy (Pod starts up successfully), but it will show up as OutOfSync. This can happen if your application automagically creates Kubernetes resources, such as Pods. Tekton, for example, automagically creates Pods, which can cause ArgoCD to think that the app is out of sync. You can fix this by pruning the app when you sync it, either via the UI, or by adding --prune when running argocd app sync.

You can also add the IgnoreExtraneous annotation to the resources that you want to exclude, as per the docs here. I haven’t had a chance to play around with this annotation myself, though my guess is that to make this work, you would have to add it to your Tekton Pod definition, and it so happens that you can create Pod templates in Tekton. (If you have a working example of this, please post a link in the comments!)

If ArgoCD successfully deploys an application to the target cluster (i.e. Pod has initialized successfully), the application will register as Healthy, and you’ll see a little green heart on your application dashboard:

Tekton App is Healthy and Synced

An ArgoCD app can be synced manually, or it can be triggered automagically via Webhook. I haven’t quite figured out how to get Webhooks working with ArgoCD, because I was testing originally on Azure DevOps Server, and I have a sneaking suspicion that ArgoCD doesn’t play nice with it, based on this GitHub Issue.

Tekton Pipeline Configuration (Almost There!)

While I have provided you with most of the code needed to get you going on the Tekton pipeline example, you’ll need to fill out your own details (i.e. repo URLs, credentials, etc.) before you can deploy and run the pipeline on your own system. I’ll guide you below.

Pipeline Repo Structure

I haven’t found any de-facto guide on how to structure Tekton pipeline definitions, but based on a bunch of GitHub repos that I’ve surveyed in the last little while, I settled on the structure below:

tekton-pipeline/
├── pipelines/
│ └── build-deploy-pipeline.yml
├── resources/
│ ├── secrets/
│ │ ├── argocd_secrets.env (added by user; gitignored)
│ │ ├── docker_secrets.env (added by user; gitignored)
│ │ └── git_app_secrets.env (added by user; gitignored)
│ ├── argocd-task-cm.yml
│ ├── build-task-cm.yml
│ ├── kustomization.yml
│ ├── namespace.yml
│ ├── pipeline-admin-role.yml
│ ├── secrets.yml
│ └── triggers-admin-role.yml
├── tasks/
│ ├── argocd-task.yml
│ └── build-task.yml
├── triggers/
│ └── build-deploy-trigger.yml
└── kustomization.yml
Tekton Pipeline Repo Structure

A few important notes before we begin…

First and foremost, I will start by saying that you should NEVER EVER EVER EVER store secrets in version control. The example Tekton Pipeline GitHub repo has a .gitignore that ignores any *_secrets.env file, so as long as you keep to that naming convention, you should be fine.

Second: Kubernetes Secrets aren’t the best way to manage secrets; however, I’m using them to keep things simple.

Third: You may have noticed above that I have two kustomization.yml files above. The one in the resources folder is used to create the pipeline Namespace and the Secrets. I use ArgoCD to deploy the Tekton pipeline, but I can’t include the secrets and namespace creation as part of it, because again, we shouldn’t store secrets in SCM. It’s a bit of a chicken-and-egg situation, unfortunately. Ideally, you’ll want to add secrets and namespace creation to some sort of automated bootstrapping code.

The second kustomization.yml is (the rest of) the pipeline’s manifest. This is what ArgoCD will use to deploy the pipeline to our Kubernetes cluster.

Okay…so back to business!

1- Create secrets

You will need to create the following secrets in the tekton-pipeline/resources/secrets folder:

a) argocd_secrets.env

ARGOCD_USERNAME=admin
ARGOCD_PASSWORD=<admin_password>

NOTE: You’ll want to set up single sign-on (SSO) on your ArgoCD cluster, with RBAC for your users. For the purpose of simplicity, I won’t be getting into that in this post.

b) docker_secrets.env

The credentials must belong to a service account (or a service principal in Azure) with access to your container registry. Enter them in docker_secrets.env, as shown below:

username=<service_principal_id>
password=<service_principal_password>

c) git_app_secrets.env

You’ll need to generate a personal access token (PAT); check your Git provider’s documentation for how to generate one.

Once you’ve generated the PAT, enter your username and the token in the git_app_secrets.env file:

username=<username>
password=<personal_access_token>

NOTE: Before y’all get up my butt about using a personal access token for Git repo authentication on a shared pipeline, I agree with you that using a service account is a waaaaay better way to go about this. For the purpose of simplicity, we’re using a PAT for this blog post.

2- Update ArgoCD Task ConfigMaps

Edit argocd-task-cm.yml, and replace the ConfigMap values (the ArgoCD server address, application name, and so on) with your own server-specific details.

3- Update resource URLs

Edit secrets.yml, and replace the Git URL and Docker registry URLs.

  • Replace <git_repo_url> with your Git repo’s URL. For example, https://github.com/d0-labs/tekton-pipeline-example-app
  • Replace <docker_registry_url> with your Docker Registry URL. For example, https://my-acr.azurecr.io for Azure, or https://gcr.io/my-gcr for gCloud.

4- Update Triggers

Edit build-deploy-trigger.yml, and replace the Docker registry name and Trigger Binding JSON.

  • Replace <docker_registry_name> with your Docker registry’s name. For example, my-acr.azurecr.io for Azure, or gcr.io/my-gcr for gCloud.
  • Replace <json_resource_repo_url_path> with the path to the JSON resource pointing to your Git repo URL. This will depend on your Git provider.

NOTE:

For GitHub, check out this sample JSON payload. In this case, you’ll replace <json_resource_repo_url_path> with repository.url. The full line in build-deploy-trigger.yml will look like this: value: $(body.repository.url)

For Azure DevOps Server, check out this sample JSON payload. In this case, you’ll replace <json_resource_repo_url_path> with resource.repository.remoteUrl. The full line in build-deploy-trigger.yml will look like this: value: $(body.resource.repository.remoteUrl)

Regardless of what your JSON payload structure is, you need to remember to always include the body prefix, otherwise, it won’t work. I left a placeholder for that already in the example pipeline code, for your convenience.

Create the Pipeline

Finally!!

As I mentioned earlier, we’re using ArgoCD to create the Tekton pipeline. As I also mentioned earlier, we need to create secrets as part of the pipeline, but we don’t want ArgoCD to do that part, because it would mean that the secrets would be in version control, which we don’t want. So we’ll first need to create our pipeline namespace and our secrets separately, and then use ArgoCD to create the rest of the pipeline. We’re using Kustomize to do this. Again, that’s why you see two kustomization.yml files in the repo:

Let’s get started.

Assumption: You have the argocd CLI installed, as per the setup instructions.

1- Register your k8s cluster with ArgoCD

Note that this is not necessary if you’re deploying your application to the same cluster in which ArgoCD is installed, which is totally okay for the purposes of this tutorial. In a real-life situation, however, you’ll definitely want to set up a dedicated ArgoCD cluster to deploy apps to your non-prod and prod clusters.

To register a different Kubernetes cluster with ArgoCD, first list all of your clusters:

argocd cluster add

The result can look something like this (obviously these are bogus values, but you get the drift):

Sample argocd cluster add output

If you have a cluster called my-nonprod-cluster, as in the example above, then you can add it to ArgoCD by running the following command:

argocd cluster add my-nonprod-cluster

2- Create your Tekton pipeline namespace and secrets

As per above, we’re using Kustomize to create the namespace and secrets, and ArgoCD to create the Tekton pipeline. From the root of your Tekton pipeline directory, run the following command:

kubectl apply -k tekton-pipeline/resources/.

This will create a namespace called tekton-argocd-example, and will create the following 3 secrets in that namespace:

  • ArgoCD secrets (argocd-env-secret)
  • Git repo secrets (basic-git-app-repo-user-pass)
  • Docker registry secrets (basic-docker-user-pass)

3- Register the two repos with ArgoCD

For our example, we’re creating two ArgoCD Applications. One Application is our Tekton pipeline. The other is the application that we’re building and deploying (the 2048 game).

As a result, we need to register both repos with ArgoCD, like this:

export SCM_USERNAME=<git_repo_username>
export SCM_PAT=<git_repo_personal_access_token>
argocd repo add <pipeline_repo_url> --username $SCM_USERNAME --password $SCM_PAT
argocd repo add <app_repo_url> --username $SCM_USERNAME --password $SCM_PAT

You should be able to see the repos in the ArgoCD Admin UI.

ArgoCD Repository Registration

4- Create the ArgoCD pipeline Application

Now we can create the pipeline application:

argocd app create tekton-pipeline-app --repo <pipeline_repo_url> --path tekton-pipeline --dest-server https://kubernetes.default.svc --dest-namespace tekton-argocd-example

What we did:

  • We’ve registered the Tekton pipeline app with ArgoCD, and named it tekton-pipeline-app.
  • The app manifest resides in the repo <pipeline_repo_url>. We registered that repo with ArgoCD in Step 3.
  • Since we’re using Kustomize for deployment, it means that ArgoCD will look for kustomization.yml in the tekton-pipeline folder.
  • We specified a --dest-server value of https://kubernetes.default.svc, meaning that the app will be deployed on the same cluster as ArgoCD.
  • We’ve told ArgoCD to deploy the app to the tekton-argocd-example namespace, which we created above, as part of Step 2.

Once the application has been created, you’ll see something like this on the home screen of the ArgoCD admin UI:

ArgoCD tekton-pipeline-app after creation in ArgoCD Admin Dashboard

ArgoCD creates an Application resource for the tekton-pipeline-app in the argocd Kubernetes namespace on your ArgoCD cluster.

5- Create the ArgoCD app for the 2048 game

Now to create the 2048 game app:

argocd app create 2048-game-app --repo <app_repo_url> --path kustomize --dest-server https://another.thing.cluster.io:443 --dest-namespace game-2048 --sync-option CreateNamespace=true

What we did:

  • We’ve registered the 2048 game app with ArgoCD, and named it 2048-game-app
  • Note that it’s the same name that we defined in argocd-task-cm.yml.
  • The app manifest resides in the repo <app_repo_url>. We registered this repo with ArgoCD in Step 3.
  • Since we’re using Kustomize for deployment, it means that ArgoCD will look for kustomization.yml in the kustomize folder of the app repo.
  • In this example, we’re deploying this app to a different kubernetes cluster than the one in which ArgoCD is installed. We get the --dest-server value from running argocd cluster add. When we ran it in Step 1, the command returned a SERVER value of https://another.thing.cluster.io:443 for my cluster named my-nonprod-cluster. Obviously, it will be a different value for you. 😊 (Note: You don’t have to deploy to a different cluster for this example — you can specify https://kubernetes.default.svc.)
  • The application will be deployed to the game-2048 namespace. Because we set --sync-option to CreateNamespace=true, the namespace game-2048 is created automagically by ArgoCD if it doesn’t already exist on the target cluster.

Once the application has been created, you’ll see something like this on the home screen of the ArgoCD admin UI:

ArgoCD 2048-game-app after creation in ArgoCD Admin Dashboard

ArgoCD creates an Application resource for the 2048-game-app in the argocd Kubernetes namespace on your ArgoCD cluster.

6- Sync (deploy) the Tekton pipeline

Time to deploy the Tekton pipeline:

argocd app sync tekton-pipeline-app --prune

As per the ArgoCD primer above, --prune is used to remove any extraneous resources. For a first-time app sync, this will do nothing (nothing to prune). But once your Tekton pipeline starts running, you’ll notice right away that ArgoCD will report the tekton-pipeline-app to be OutOfSync. If you run the argocd app sync with the --prune option, it means that any old pods automagically created by Tekton will be nuked from your cluster by ArgoCD.

7- Create a Webhook for the Tekton Pipeline

We want our Tekton pipeline to be kicked off by a commit to master, on our 2048-game app repo, so we’ll need to create a Webhook for that repo.

The Webhook URL will look something like this:

http://<cluster_url>/tekton-argocd-example-build-mapping/

Don’t forget the trailing / in the URL, or else Ambassador will get mad.

In case you’re wondering where tekton-argocd-example-build-mapping comes from, it comes from the tekton-argocd-example-build-el-mapping definition in build-deploy-trigger.yml. That’s the Ambassador Mapping resource that we created so that we could expose the el-tekton-argocd-example-build-el Kubernetes Service. That Service was in turn created by the tekton-argocd-example-build-el Tekton EventListener, which is what makes this Webhook possible.

Please refer to your Git provider’s Webhook documentation for more details on creating Webhooks.

8- Make a change to your app repo, and let ‘er rip!

Go ahead — change some code in the 2048-game app repo, commit the code, push to master, and see some magic happen.

NOTE: I’ve included screenshots below to give you an idea of what you’ll see in your cluster. k9s is my tool of choice for Kubernetes cluster administration.

The code change will trigger the Webhook, which will then kick off our Tekton pipeline.

Running the build task

The pipeline will build the 2048 game’s Docker image from its Dockerfile using Kaniko, and will publish the image to your Docker registry.

Using Kaniko to build our Dockerfile and push the image to the Docker registry

Then, it will deploy the 2048 game app to your target Kubernetes cluster using ArgoCD.

Running the deploy task (using ArgoCD)

You can check the sample output from my own pipeline below, for a taste of what to expect:

App deployment using ArgoCD

If all goes well, your 2048-game should deploy to your cluster, and you’ll see its app status as Healthy and Synced:

2048-game-app after ArgoCD sync via Tekton pipeline

You’ll also be able to reach the app on your web browser via the following URL:

https://<cluster_url>/2048-game/

Don’t forget the trailing / in the URL, or else Ambassador will get mad.

Conclusion

Are you still with me? In that case, thanks for hanging around this long! I will now reward you with a cute picture of an alpaca. 🦙

Photo by Jp Valery on Unsplash

You’re a champ!! I know that it was a LOT to take in, but I feel that the documentation for these tools is super sparse and confusing, sometimes requires mind-reading, and doesn’t fully cover the gotchas. My goal was for you to have a good overview of ArgoCD and Tekton, and to equip you with enough information to set up a meaningful workflow for Kubernetes-native CI/CD. If you walk away with a better understanding of these tools, and a working example to build on, then my work here is done!

If you find any errors in this tutorial, please let me know, so that I can fix them. Also, if you find any nuggets of info that might help others (like the secrets setup via a key vault, or the ArgoCD Webhooks setup), please post a link to your solution in the comments section!

Happy pipelining!

Next Up

Check out my next post in the ArgoCD series: Configuring SSO with Azure Active Directory on ArgoCD

References

Check out some useful references:
