Unleashing the Power of Scalability: Deploying Self-Managed Agents on K8s for Azure DevOps Pipelines

Moran Weissman
4 min read · Feb 26, 2023


As a developer or part of a DevOps team, you may have encountered challenges when running pipelines on virtual machines or managed VMs provided by Microsoft Azure. These challenges can include the following:

  • Maintenance, such as patching and upgrading, is time-consuming and can cause downtime; managing security across multiple VMs adds further overhead.
  • Storage space is limited and may run out, especially when working with large datasets or multiple projects.
  • Reliance on local IT infrastructure, such as on-premises servers, is limiting and causes issues when the infrastructure goes down or needs maintenance.
  • Run time on Microsoft-hosted agents is limited: jobs in private projects are capped at 60 minutes by default.

To overcome these challenges, I recommend using self-managed Azure DevOps agents running on Kubernetes.

What are Self-Managed Azure DevOps Agents?

Self-managed Azure DevOps agents are containers you can deploy and manage on a Kubernetes cluster. With self-managed agents, you benefit from Kubernetes capabilities such as auto-scaling, self-healing, and flexible storage. In this article, I’ll cover how to build, deploy, and use Azure DevOps agents as containers on a Kubernetes cluster using several services: ArgoCD, Helm, KEDA, Kaniko, and External Secrets Operator (ESO).
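To make this concrete, the agent image itself follows the pattern from Microsoft’s “Run a self-hosted agent in Docker” documentation: a base image with your tooling, plus a `start.sh` script (taken from that documentation) that downloads the agent package and registers it against the organization and pool given in the `AZP_URL`, `AZP_TOKEN`, and `AZP_POOL` environment variables. This is a minimal sketch; the package list is an assumption you would tailor per framework:

```dockerfile
FROM ubuntu:22.04

# Basic tooling the agent and typical pipelines need (adjust per framework)
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl git jq ca-certificates libicu70 && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /azp

# start.sh (from Microsoft's docs) downloads the agent at startup and
# registers it using the AZP_URL / AZP_TOKEN / AZP_POOL env vars
COPY start.sh .
RUN chmod +x start.sh

ENTRYPOINT ["./start.sh"]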

(Diagram: Azure DevOps agents on Kubernetes solution overview)

Solution Overview

Using self-managed agents on Kubernetes, I was able to ensure that my pipelines had the resources to run smoothly and efficiently. Spinning up new agents on demand and provisioning storage as needed allowed me to build and deploy Docker images without delay.

As of today, Azure DevOps has no native solution for running agents in containers; Microsoft provides only a Dockerfile and a startup script to build into it, and that’s it. This solution tackles the challenge of running the agents on Kubernetes, and also:

  • Implementing GitOps for easier maintenance.
  • Deploying agents via ArgoCD.
  • Using KEDA to scale the agents on demand by listening to the agent pool queue.
  • Providing the ability to build Docker images using Kaniko.

In short, it offers a flexible and scalable automation solution.
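As a sketch of the KEDA piece: a ScaledObject using KEDA’s `azure-pipelines` scaler polls the agent pool’s job queue and scales the agent Deployment to match demand. The Deployment name, namespace, pool ID, and environment-variable names below are assumptions for illustration:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azdo-agent-scaler
  namespace: azdo-agents              # assumed namespace
spec:
  scaleTargetRef:
    name: azdo-agent                  # your agent Deployment (assumption)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "12"                            # your agent pool ID (assumption)
        organizationURLFromEnv: AZP_URL         # env vars already set on the agent container
        personalAccessTokenFromEnv: AZP_TOKEN
```

Keeping at least one replica alive (minReplicaCount: 1) avoids cold starts for the first queued job; the rest scale out only while the queue has work.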

In addition, I will provide agent images for several different frameworks, along with a pipeline example that builds a Docker image inside an agent using Kaniko.

This guide gives you the technical details needed to implement a similar solution in your environment and achieve the same results I did.

Prerequisites

Before diving into the technical details, here are some prerequisites you’ll need to have in place:

  • An AWS account
  • An Amazon Elastic Kubernetes Service (EKS) cluster
  • ArgoCD installed on the cluster
  • An Azure DevOps organization

I also highly recommend prior knowledge of Azure Pipelines, ArgoCD, Kubernetes, and Docker to understand and implement this solution.

Implementation Steps

To make this technical guide easier to navigate, I’ve broken it up into the following parts:

Part I: How I Built and Deployed Azure DevOps Agents to ECR: First, you’ll need to build the Docker images for the agents and push them to Amazon ECR. This is where your images will be stored and accessed by Kubernetes.
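The build-and-push flow for Part I looks roughly like this; the account ID, region, and repository name are placeholders to substitute with your own values:

```shell
# Authenticate Docker to your ECR registry
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

# Build the agent image, tag it for ECR, and push
docker build -t azdo-agent:latest .
docker tag azdo-agent:latest <account-id>.dkr.ecr.<region>.amazonaws.com/azdo-agent:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/azdo-agent:latest
```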

Part II: Automating Azure DevOps Agent Deployment to EKS with ArgoCD and GitOps: Once you’ve built the images, you’ll deploy them to Amazon EKS using ArgoCD. This will ensure that your agents run on the Kubernetes cluster and can be used in your pipelines.
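On the ArgoCD side, the GitOps wiring boils down to an Application that points at the Git repository holding the agents’ Helm chart, with automated sync so the cluster continuously tracks Git. The repository URL, chart path, and namespace here are assumptions for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: azdo-agents
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<org>/<agents-repo>.git   # your GitOps repo (placeholder)
    targetRevision: main
    path: charts/azdo-agent                               # assumed chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: azdo-agents
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```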

Part III: Building Docker images in Kubernetes with Kaniko and Azure DevOps: Finally, you’ll build a Docker image using Kaniko. Kaniko is a tool that allows you to build Docker images in Kubernetes without needing privileged access to the host.
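As a rough sketch of Part III, a pipeline step running inside the agent can invoke the Kaniko executor directly. This assumes the agent image bundles `/kaniko/executor` and that the pod already has push credentials for the registry; the image name is a placeholder:

```yaml
steps:
  - script: |
      /kaniko/executor \
        --context "$(Build.SourcesDirectory)" \
        --dockerfile Dockerfile \
        --destination "<account-id>.dkr.ecr.<region>.amazonaws.com/my-app:$(Build.BuildId)"
    displayName: Build and push image with Kaniko
```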

Disclaimer

While I recommend using the specific technologies and tools outlined in this article, it’s important to note that there are always alternatives. The same result can be achieved with other tools as well.

For example, Helm library charts are not mandatory; you could maintain multiple standalone Helm charts instead. However, with dozens of different agents, propagating a change across all of them becomes challenging, which is where a shared library chart pays off.

Although I chose the External Secrets Operator (ESO) to fetch secrets, ESO is not mandatory when working with Azure DevOps pipelines. While ESO is a well-accepted CNCF tool and is recommended by ArgoCD, other options exist for fetching and storing secrets, such as Sealed Secrets or the Secrets Store CSI Driver.
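For completeness, an ExternalSecret that syncs the agent’s personal access token from AWS Secrets Manager into a Kubernetes Secret might look like this; the store name, remote secret path, and key names are assumptions for illustration:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: azdo-agent-token
  namespace: azdo-agents            # assumed namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager       # assumed SecretStore name
    kind: SecretStore
  target:
    name: azdo-agent-token          # Kubernetes Secret created/updated by ESO
  data:
    - secretKey: AZP_TOKEN
      remoteRef:
        key: azure-devops/agent-pat # assumed path in AWS Secrets Manager
```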


Moran Weissman

DevOps Tech Lead @ MSD Animal Health Technology Labs | GitOps 🚀 ArgoCD Enthusiast