Creating an Azure DevOps CI/CD Pipeline for your Kubernetes Microservice Application
(Though this guide is tailored towards microservice applications, many of the steps outlined — setting variables, writing the pipeline YAML files, and configuring the pipeline definitions on Azure DevOps — are applicable to other application architectures as well).
Microservices — an architectural approach under which applications are deployed as small, individual services with specialized functions — have rapidly grown in adoption over the past decade among organizations seeking to make their applications more scalable, resilient, and secure. These loosely-coupled components are typically packaged and deployed as individual containers and run on a container orchestration platform like Kubernetes on the cloud. The build, testing, and release processes for microservices will differ from those of monolithic architectures (for instance, deploying your microservices to a Kubernetes cluster as part of an end-to-end testing stage), and will thus impact the overall design of CI/CD (Continuous Integration and Continuous Delivery/Deployment) workflows.
In this article, we’ll go over how to use Azure DevOps to automate the build, testing, and deployment process for your microservice application. Azure DevOps provides a set of services for developers, project managers, and contributors to build and develop software. The specific service that we’ll be using is called Azure Pipelines, a CI/CD tool that automates the process of building and testing code projects.
We will configure the CI pipeline to run for pull requests against the `main` branch, as well as a post-merge job once the development branch has been merged into `main`. This pipeline will build your application images with the Docker CLI, tag them based on the `git` commit corresponding to the pull request, and push them to a staging registry in a `build` stage. These images will then be deployed and tested on a Kubernetes cluster in a `test` stage. For the CD pipeline, we will still incorporate `build` and `test` stages as we did with the CI pipeline, but will also include a final stage to push the images to a production registry if the build and tests succeed.
This overview will cover several aspects of, and good practices for, building CI/CD pipelines with Azure Pipelines, including:
- Setting up triggers to activate pipeline runs against particular branches, and including and excluding specific paths in your repository from these triggers.
- Using stages and jobs to separate the building, testing, and production processes within your pipeline, and defining the conditions under which steps, jobs, and stages will run.
- Setting variables of various types — environment variables, global variables, parameters, UI-configurable variables, secret variables, library variables, Azure DevOps pre-defined variables, and so on. We’ll also cover how to use user-defined variables across stages.
- Tagging images for pull requests and production releases.
- Combining related steps and commands into reusable templates.
- Leveraging pre-defined Azure Pipeline tasks to download CLI packages on agent machines, and to publish and download pipeline artifacts.
The example used in this walkthrough is a basic Go application consisting of a client and server app. Each app is packaged as a separate container image and has its own respective Kubernetes Deployment manifest. The server listens on port `8080` for incoming traffic, which we make internally accessible via a `ClusterIP` Service. For our “end-to-end” test, we will simply ping one of the server API endpoints from the client with a `curl` command and ensure we get an HTTP `200` response (though of course, most projects will have more elaborate end-to-end tests as part of their CI suite).
The source code, Docker images, and Kubernetes manifests for the applications, as well as YAMLs for the CI and CD pipelines, can be found here.
Step 1: Creating an Azure DevOps Organization and Project
If you haven’t already, you and your team will need to follow these steps to create an organization on Azure DevOps. The organizations should show up on the left-hand side of the screen:
Once you create your organization, you’ll need to create a new project by clicking on the “New project” button in the top right hand corner of the screen. Once the project has been created, it should show up under the “Projects” tab within your organization.
Step 2: Add the CI pipeline YAML file to your Code Repository
As we will see in subsequent steps, Azure Pipelines allows you to connect your project to an existing pipeline in your repository or create a starter YAML in your repository that you can build off of. Some options — such as “Deploy to Azure Kubernetes Service” — will automatically generate pipelines that push your container images to Azure Container Registry (ACR) and deploy them to an Azure Kubernetes Service (AKS) cluster. For this tutorial, we’ll connect the project to an existing pipeline in the repository, which will give us flexibility in terms of the container registry and Kubernetes distribution that we want to push and deploy our images to as part of the building and testing processes.
One common practice is to have a `.pipelines` directory in your repository to host all your Azure Pipelines YAML files. Let’s start by creating a `.pipelines` directory, and then a `ci-pipeline.yaml` file within it. In Step 6, we will then create the corresponding CI pipeline in Azure DevOps to point to this YAML file.
The first thing we’ll need to do is specify the CI pipeline triggers. For instance, if we want our pipeline to run every time a new change is merged to `main` (or another branch), we will need to define these under the `trigger:` field in our YAML. We can also set the pipeline trigger to include or exclude specific directory paths in our repository. Likewise, if we want the pipeline to be automatically triggered for PR runs, then we will need to define these events under the `pr:` field.
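Here is a sketch of what these trigger definitions might look like (branch names and excluded paths here follow this tutorial’s repo layout; adapt them to your own repository):

```yaml
trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - docs

pr:
  branches:
    include:
      - main
      - release-v*
  paths:
    exclude:
      - docs
```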
In the above example, we configured the CI pipeline to be triggered every time a change is pushed to or merged into `main`, except for updates to the `docs` folder in the repository. The pipeline will also be triggered for pull requests against the `main` branch, as well as release branches prefixed with `release-v`. As with the CI trigger, the PR trigger will exclude changes to the `docs` directory. We’re not going to trigger the CI pipeline after changes are merged into the release branches or for new tags that are cut, because the CD pipeline will be set to run for these scenarios instead.
You can also set the pipeline to run on a particular schedule if needed.
Step 3: Define Global Variables for the CI Pipeline
As we’ll see throughout this guide, variables in Azure Pipelines come in several types, encompass different scopes, and can be configured in multiple ways. The expression syntax also differs between macro, template expression, and runtime expression variables. And as we’ll see in Step 7, secret variables (for instance, the password we use to log in to the container registry) need to be referenced in a different manner than conventional variables.
In some cases, we may have variables that we want to use throughout the pipeline, defined under the top-level `variables` field. For instance, our `test` stage will entail creating a Kubernetes cluster, then applying different Deployment manifests to that cluster, and finally testing requests between the `client` and `server` Pods on the cluster, across several steps. Rather than setting the `KUBECONFIG` environment variable to `kubeconfig.json` in each Bash script that interacts with the cluster, we can instead just set `KUBECONFIG: $(System.DefaultWorkingDirectory)/kubeconfig.json` at the top level of our pipeline. In Azure Pipelines, `$(System.DefaultWorkingDirectory)` is a pre-defined system variable that represents the default working directory of the agent that the pipeline is running on.

We can also set the name of the artifact that we will be building and publishing in the `build` stage with a variable called `artifact.name`. As we’ll see in the CD pipeline, this will also come in handy when we need to download the artifact and specify its name in the `production` stage.
These two variables should be defined under the `pr` triggers from the previous step as follows:
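A sketch of this top-level `variables` block (the artifact name here is a placeholder; use whatever name fits your project):

```yaml
variables:
  KUBECONFIG: $(System.DefaultWorkingDirectory)/kubeconfig.json
  artifact.name: microservice-images
```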
If you want variables to be stage- or job-scoped, you would instead set them under a `variables` field within that particular `stage:` or `job:`.
Step 4: Define the Build Stage for the CI Pipeline
Now that we’ve set the triggers for the CI pipeline, let’s define the “build” stage to build our Docker images and push them to a container registry.
In Azure Pipelines, stages are boundaries used to demarcate separation of concerns. For example, in our CI pipeline, we will be separating the `build` stage from the `test` (or QA) stage. In the CD pipeline, we will have separate stages for `build`, `test`, and `production`. Each stage consists of one or more jobs, and one stage will run after another. You can also specify stage dependencies and run conditions if needed.
Here, we’re defining a stage with the name `build`. We also need to specify the agent that we want the jobs within the stage to run on. This is usually done by defining an agent pool from which an agent is selected and scheduled to run the stage. You can either set up your own agent pool for your organization or use a Microsoft-hosted agent from the Azure Pipelines agent pool. In this example, we will be following the latter approach. Finally, we’re creating a job within this stage called `build_docker_images`.
Next, we will need to write a series of steps that will run under the `build_docker_images` job. At a high level, this job will do the following: log in to a container registry, build the images with a specified image tag, and push the images to that registry. For instance, in our example, `build_docker_images` looks like this:
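Below is an abbreviated sketch of the stage and job. The template parameter names and paths are illustrative (they assume a top-level `/templates` directory), and the pinned Docker version is just an example; the full job in the linked repo also sets a display name on each step:

```yaml
- stage: build
  pool:
    vmImage: ubuntu-latest
  jobs:
    - job: build_docker_images
      steps:
        # Install the Docker CLI onto the agent
        - task: DockerInstaller@0
          inputs:
            dockerVersion: "24.0.2"
        # Derive the image tag from the short commit SHA; isOutput=true
        # makes it available to later stages
        - bash: |
            echo "##vso[task.setvariable variable=IMAGE_TAG;isOutput=true]$(git rev-parse --short HEAD)"
          name: setImageTag
        # Build and push each image via a shared template
        - template: ../templates/build-and-push-image.yaml
          parameters:
            registry: $(staging.registry)
            image_name: $(client.image.name)
            image_tag: $(setImageTag.IMAGE_TAG)
        - template: ../templates/build-and-push-image.yaml
          parameters:
            registry: $(staging.registry)
            image_name: $(server.image.name)
            image_tag: $(setImageTag.IMAGE_TAG)
        # Publish the built images as a pipeline artifact
        - task: PublishBuildArtifacts@1
          inputs:
            PathtoPublish: $(Build.ArtifactStagingDirectory)
            ArtifactName: $(artifact.name)
```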
Don’t worry about all the variables like `$(staging.registry)` and `$(client.image.name)` — we’ll get to those soon.
Let’s look at some key components of this job. Notice that we have several ways of running steps in Azure Pipelines — tasks, bash scripts, and templates. Here is a rough breakdown of each:
- **Task**: A task is a set of steps in the form of a packaged script or procedure. Azure Pipelines provides several built-in tasks, such as the `DockerInstaller@0` task to install the Docker CLI onto the agent, and `PublishBuildArtifacts@1` to publish build artifacts to Azure Pipelines. You can also create your own custom tasks.
- **Template**: A template is a series of logically grouped, reusable steps that we can share across jobs, stages, and pipelines. For instance, in our example, we use the `build-and-push-image.yaml` template (which I have saved in a `/templates` directory) for each of the two images — client and server — that we will be building and pushing, passing in the appropriate template parameters as needed. The template specifies all the parameters that it will reference and their respective types and default values. Templates can also invoke other templates, which is what we do in this example — after building the Docker images in `build-and-push-image.yaml`, we call the `push-images.yaml` template, which lives in the same `/templates` directory, and pass in the necessary parameters.

You can also use this Azure Pipelines task to build your Docker images.

(For building images for multiple architectures and/or processors, see my guide here on building and testing multi-arch images.)

- **Bash**: The bash step in Azure Pipelines runs a bash script with commands that you provide. You can pass environment variables to each script using the `env` keyword.
On line 13 of the `ci-pipeline-build-docker-job.yaml` above, you’ll notice that we set a variable called `IMAGE_TAG`, which is the tag we will use when building the Docker images. We will set this tag as the first seven characters of the `git` commit SHA for the commit associated with the pull request:

`echo "##vso[task.setvariable variable=IMAGE_TAG;isOutput=true]$(git rev-parse --short HEAD)"`
User-defined variables are usually job-scoped, and the command to set them usually looks something like this:

`echo "##vso[task.setvariable variable=IMAGE_TAG]$(git rev-parse --short HEAD)"`
However, because we will be using this variable in the `test` stage, we need to specify `isOutput=true` so that we can use `IMAGE_TAG` in subsequent stages. We also need to give this bash step a name, which in our case is `setImageTag`, so that we can access the `IMAGE_TAG` variable outside of `build`. Note that if you’re exporting a variable in this manner, you will need to reference it as `$(setImageTag.IMAGE_TAG)` for the rest of the job, as we did on lines 21 and 27 in `ci-pipeline-build-docker-job.yaml`.
Finally, we run `PublishBuildArtifacts@1`, a pre-defined Azure Pipelines task, to generate a build artifact. This allows us to download the Docker images that we built in this step via the Azure DevOps UI, which we can inspect if something in our `build` or `test` stage goes wrong. Additionally, as we’ll see in the CD pipeline, we can also download this artifact in subsequent stages rather than re-building the same images when we push them to production. You can access and download pipeline artifacts when you examine pipeline runs on the Azure DevOps site:
Step 5: Define the Test Stage for the CI Pipeline
After building our Docker images and pushing them to staging, we need to test our application images by deploying them to a Kubernetes cluster and ensuring that service-to-service communication works as expected. We’ll be using `kind` to create our Kubernetes cluster. If you want to test on another distribution, such as AKS, you will need to ensure that the appropriate CLI packages are installed on your agent.
Note: Microsoft-hosted Azure Pipelines agents already have the Docker CLI and `kind` installed by default. If you’re using self-hosted agents, you may need to use pre-defined tasks or create your own templates to install these and other CLI packages you may need.
For instance, you can use the following task to install Docker CLI onto your agent:
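A sketch of that task (pin `dockerVersion` to whichever release you need):

```yaml
- task: DockerInstaller@0
  displayName: Install Docker CLI
  inputs:
    dockerVersion: "24.0.2"
```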
And this is an example of a template to install `kind`:
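Something along these lines (the template path, parameter name, and pinned version are illustrative):

```yaml
# Hypothetical templates/install-kind.yaml
parameters:
  - name: kind_version
    type: string
    default: v0.20.0

steps:
  - bash: |
      # Download the kind binary from the official release URL and
      # move it onto the agent's PATH
      curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${{ parameters.kind_version }}/kind-linux-amd64"
      chmod +x ./kind
      sudo mv ./kind /usr/local/bin/kind
    displayName: Install kind
```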
Now, let’s take a look at what the `test` stage for our pipeline looks like, and then walk through the main components of the stage and the `test_microservice_app` job:
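Here is an abbreviated sketch of the stage. The template and manifest paths are illustrative, and the server steps are elided; the full version lives in the repo linked above:

```yaml
- stage: test
  dependsOn: build
  pool:
    vmImage: ubuntu-latest
  variables:
    # Reuse the tag exported by the build stage
    IMAGE_TAG: $[ stageDependencies.build.build_docker_images.outputs['setImageTag.IMAGE_TAG'] ]
  jobs:
    - job: test_microservice_app
      steps:
        - template: ../templates/create-kind-cluster.yaml
        - bash: |
            kubectl create namespace client
            kubectl apply -f manifests/client-deployment.yaml
          displayName: Deploy client
        - template: ../templates/wait-for-deployment.yaml
          parameters:
            deployment: client
            namespace: client
            wait_timeout: 120s
        # ...the server Namespace, Deployment, and wait steps follow the same pattern...
        - bash: |
            # Exec into the client Pod and check for an HTTP 200 from the server
            kubectl exec -n client deploy/client -- \
              curl -s -o /dev/null -w "%{http_code}" "$(server.endpoint)"
          displayName: Ping server from client
        - bash: kind delete cluster
          condition: always()
          displayName: Delete kind cluster
```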
Once again, don’t worry about variables like `$(client.image.name)` or `$(server.endpoint)` — we’ll get to defining those in Step 7.
On line 2, we set a dependency on the previous `build` stage, meaning that the `test` stage will only run if `build` succeeds. Normally, because stages run sequentially by default (unless they’re configured to run in parallel), we wouldn’t need to explicitly define this dependency. However, since we reuse the `IMAGE_TAG` variable that we defined in `build` in the following line, we need to declare the stage as a dependency so it can be referenced when we define variables in `test`: `IMAGE_TAG: $[ stageDependencies.build.build_docker_images.outputs['setImageTag.IMAGE_TAG'] ]`.
Next, on line 10, we invoke the following template to create the `kind` cluster:
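A minimal version of such a template might look like this (the cluster name is arbitrary; `KUBECONFIG` is the pipeline-level variable from Step 3, which is exposed to the script as an environment variable):

```yaml
# Hypothetical templates/create-kind-cluster.yaml
steps:
  - bash: |
      # Create the cluster and write its kubeconfig to the path set in KUBECONFIG
      kind create cluster --name ci-test --kubeconfig "$KUBECONFIG"
      kubectl cluster-info
    displayName: Create kind cluster
```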
The `KUBECONFIG` environment variable that we set in Step 3 allows us to access this cluster via `kubectl` throughout the pipeline.
Next, in lines 15–45, we create the `client` Namespace and use `kubectl apply` to create the Deployment for our `client` Pod. In `spec.template.spec.containers`, we set the image as the `client` image that we pushed to our staging registry in `build`.
For your tests, you will most likely have a more extensive end-to-end testing suite. For instance, if your test suite is written in Go, you should check out the Azure Pipelines tasks that allow you to build and test Go projects.
Your test suite may either create the Deployments for the microservices within the tests themselves, or may expect them to have already been installed in a Kubernetes environment before running the tests. For the former approach, just make sure that you have a way of configuring the container images in the Deployment’s Pod spec to pull the images that we pushed to the staging registry in the previous stage — for instance, by specifying the container registry and tag for application images as a CLI flag when running your test suite. For the latter approach, you would deploy the applications like we do in this tutorial, and then run the test suite with a `go test` command in a bash script or by using the Azure Pipelines task to run Go tests.
We then need to ensure that the `client` Deployment becomes available, which is the purpose of the `wait-for-deployment.yaml` template, where we use `kubectl wait` to wait for the Deployment to become available within the specified `wait_timeout`.
(The commented-out section here is a reference to what you may need to do if you have a Deployment with multiple resources or replicas, due to past issues with `kubectl wait` for multiple resources.)
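A sketch of what this template might contain (parameter names match their usage above; everything else is illustrative):

```yaml
# Sketch of templates/wait-for-deployment.yaml
parameters:
  - name: deployment
    type: string
  - name: namespace
    type: string
  - name: wait_timeout
    type: string
    default: 120s

steps:
  - bash: |
      # Block until the Deployment reports the Available condition,
      # or fail after the timeout
      kubectl wait --for=condition=Available \
        deployment/${{ parameters.deployment }} \
        -n ${{ parameters.namespace }} \
        --timeout=${{ parameters.wait_timeout }}
    displayName: Wait for ${{ parameters.deployment }} to become available
```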
We then repeat these steps for the `server` Deployment.
After ensuring that both the `client` and `server` Deployments have become available, we exec into the `client` Pod, send a `curl` request against the exposed API endpoint, and verify that we get an HTTP `200` response.
Finally, we delete the `kind` cluster that we created and used for testing. Notice the `condition: always()` line on line 116 of `ci-pipeline-test-stage.yaml`. In Azure Pipelines, conditions are used to specify when we want a particular stage, job, or step to run. In this case, since we want to delete our cluster regardless of whether the preceding tests succeeded, we set the condition to `always()`.
We’re now done writing the YAML file for our CI workflow, which as a whole looks like this:
Now, we’re ready to hook the YAML file up to a Pipeline in the Azure DevOps project that we created in Step 1.
Step 6: Create the CI Pipeline Definition in Azure DevOps
Navigate back to your organization on the Azure DevOps portal, and click on the project you created. On the left-hand side of the screen, you should see a “Pipelines” tab, which should bring you to a page showing all of your pipeline builds. Click on “New pipeline” in the top right-hand corner of the screen. This should bring you to the following page:
Select the appropriate option based on the code repository in which your CI-pipeline YAML lives. Then, once you reach the “Configure your pipeline” step, you want to select “Existing Azure Pipelines YAML file,” and select the branch and path of your CI pipeline YAML within your repository.
Note that if your CI pipeline currently lives in a forked branch off of `main` and hasn’t been merged yet, you will need to set the branch to your forked branch instead. Once your PR gets merged after running and testing the CI pipeline in ADO, you will need to edit the CI pipeline trigger to point to `main` by configuring the pipeline settings. As we’ll see in the next step, you also need to ensure that secret variables such as `$(staging.registry.password)` are made accessible to pipeline builds off of forked branches.
Now, for the final step, you will have a chance to review your pipeline YAML. In the top right corner, click the drop down menu next to “Run” and then click “Save” — we need to set some more pipeline variables before we are ready to give it a run.
Step 7: Define UI Configurable Variables for the CI Pipeline
After saving your new pipeline, you should be directed to a page showing the run history and analytics for the pipeline you just created. In the top right corner, click “Edit,” which will take you to another page in which you can edit your pipeline YAML directly, set the pipeline’s variables, and change its triggers and settings. Click “Variables” in the top right corner.
In my case, the variables I need to set are the following — however, note that you may need to set different variables depending on how you deploy and test your application.
Notice that the last variable, `$(staging.registry.password)`, is a secret variable, and its value is hidden. Setting a secret variable entails the same steps as creating a normal variable — click the plus button in the top right corner, and set the variable’s name and value. Once you set the value, however, you need to check the box that says “Keep this value secret.”
As described in the screenshot above, secret variables need to be explicitly referenced as environment variables in bash scripts. For instance, take a look at the bash script in which we login to the staging Docker registry:
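It looks something like this (the username variable is illustrative; note how the secret is mapped in via `env` while the non-secret registry variable uses ordinary macro syntax):

```yaml
- bash: |
    # The secret is only available here because it is explicitly
    # mapped to DOCKER_PASSWORD under env below
    echo "$DOCKER_PASSWORD" | docker login "$(staging.registry)" \
      -u "$(staging.registry.username)" --password-stdin
  displayName: Login to staging registry
  env:
    DOCKER_PASSWORD: $(staging.registry.password)
```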
Here, `$(staging.registry)` can be referenced in the traditional macro format. However, `$(staging.registry.password)` needs to be passed in as an environment variable, which in this case I named `DOCKER_PASSWORD`, in order to be referenced in this script.
If your pipeline doesn’t live in `main` yet, you will need to make secret variables accessible to builds of forks — otherwise, your build will fail when it reaches the step that references the secret variable.
You can make secret variables accessible to builds of forks by clicking the dropdown menu next to the “Run” button in the top right corner of the edit page, and then clicking “Triggers.”
Then, under “Pull request validation,” check the box that says “Make secrets available to builds of forks”:
Variables can be configured for individual runs, whenever you click “Run pipeline,” if you check “Let users override this value when running this pipeline.” However, if you want to change a variable’s value permanently and have the new value persist across all subsequent runs, or add additional variables to the pipeline, you will need to follow the same steps you took to set the variables initially — navigate to the “Edit pipeline” page from the pipeline build definition, click “Variables,” and then set the values or add new variables accordingly.
You can also create variable groups under the “Library” tab that you can invoke in your pipeline.
Step 8: Name and Run the CI Pipeline
Navigate back to the “Pipelines” page, and click “All.” Your pipeline will have been assigned a default name based on the repo and branch of your project — in my case it was initially `nshankar13.tutorials`. You can change this name by clicking the “More options” tab on the right side of the screen, and then clicking “Rename/Move.”
Once you set the name, click on the pipeline, which will bring you back to the page with “Runs,” “Branches,” and “Analytics.” Click “Run pipeline” in the top right corner, and ensure that the branch, variables, and stages are set correctly. Then hit “Run.”
You should then be directed to a page where you can view the build for your pipeline, and all the stages to be run:
You can click on these stages to see the jobs and individual steps within each stage. If everything succeeds, then both stages should have a green check mark next to them once they finish running.
If you haven’t done so already, you should also ensure that the triggers you set in Step 2 — post-merge CI triggers, pull-request triggers, or both — work as expected by opening a pull request against one of the branches you specified under `branches`, and then merging the pull request into that branch.
Step 9: Set Up the CD Pipeline
Writing the CD pipeline YAML and configuring the pipeline definition and variables on Azure DevOps will for the most part follow the same steps as the CI pipeline. However, there are a few differences between the CI and CD pipelines.
The CD pipeline will also have a `build` stage that builds our Docker images and pushes them to a staging registry, as well as a `test` stage to create Kubernetes Deployments that pull these images and verify that communication between our services works as expected. However, we will also have a final `production` stage that pushes our images to a production container registry:
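A sketch of that final stage (the artifact layout — an `images.tar` saved by the build stage — and the retag/push commands are assumptions; adapt them to however your build stage saves the images):

```yaml
- stage: production
  dependsOn:
    - build
    - test
  jobs:
    - job: push_production_images
      steps:
        # Download the images published by the build stage
        - task: DownloadPipelineArtifact@2
          inputs:
            artifact: $(artifact.name)
            path: $(Pipeline.Workspace)/$(artifact.name)
        - bash: |
            # Load the saved images, retag them for prod, and push
            docker load -i "$(Pipeline.Workspace)/$(artifact.name)/images.tar"
            docker tag "$(staging.registry)/$(client.image.name):$(IMAGE_TAG)" \
              "$(prod.registry)/$(client.image.name):$(IMAGE_TAG)"
            docker push "$(prod.registry)/$(client.image.name):$(IMAGE_TAG)"
          displayName: Push client image to prod
        # ...repeat the retag and push for the server image...
```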
When we set our pipeline variables in ADO, we will also need to add our login credentials for the prod registry in addition to the ones we had for staging:
You’ll also notice that on line 19, we run the `DownloadPipelineArtifact@2` task, a built-in Azure Pipelines task to download the pipeline artifacts published from a previous stage. The artifacts will end up in the default pipeline workspace directory, inside a folder named after the artifact name (which we set as a global variable): `$(Pipeline.Workspace)/$(artifact.name)`.
Additionally, our `trigger` will need to be different from what it was for the CI pipeline. Because the CD pipeline will push the Docker images for our application to production, we only want it to be triggered as a post-merge job against release branches such as `release-v*`, and for new tags that are pushed to the repo.
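A sketch of the CD trigger (the tag pattern is illustrative; match it to your tagging scheme):

```yaml
trigger:
  branches:
    include:
      - release-v*
  tags:
    include:
      - v*
```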
We’ll also be setting the `IMAGE_TAG` variable differently for the CD pipeline. For the CI pipeline, it made sense to use the `git` commit to generate our tag, since each run was triggered by a merge to `main` or a pull request against `main`. However, for the CD pipeline, we want our image tag to reflect the version of the application that we are pushing to production, and should thus set it based on the branch name for which the build is being triggered. We can do this by using `$(Build.SourceBranchName)`, which is a pre-defined build variable in Azure Pipelines.
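In the CD pipeline’s variables this can simply be:

```yaml
variables:
  IMAGE_TAG: $(Build.SourceBranchName)
```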
Our entire CD pipeline YAML looks like this:
Notice that there’s still a lot of overlap in the steps between our CI and CD pipelines. For instance, the `test` stage is essentially identical in both. Thus, we could templatize some of these steps even further — for instance, by creating a `test-microservice-app.yaml` template that we invoke in the `test` stage of both our CI and CD pipelines. Moreover, because you are reusing many of the same variables between both pipelines, you may also want to consider setting up a variable group in your Azure DevOps project that both pipelines can access.
Now that you’ve set up the CD Pipeline YAML, you can follow steps 6–8 to set up the CD pipeline in your Azure DevOps project, configure all the variables and secret variables, and give your pipeline a manual test run after naming it. Hopefully, the build succeeds:
Congrats, you’ve set up your Azure DevOps CI/CD pipelines for your Kubernetes microservice project!
Other Helpful Links and Resources
- Key Azure Pipeline Concepts: https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops
- Changing your Azure DevOps Project Visibility: https://learn.microsoft.com/en-us/azure/devops/organizations/projects/make-project-public?view=azure-devops
- Managing email notifications for your pipelines, projects, and organizations: https://learn.microsoft.com/en-us/azure/devops/organizations/notifications/manage-team-group-global-organization-notifications?view=azure-devops&tabs=new-account-enabled
- Scheduling pipeline runs: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml
- Generating your Software Bill of Materials (SBOM) on Azure Pipelines: https://bartwullems.blogspot.com/2022/08/azure-pipelines-generate-your-software.html
- Azure Pipelines Security Tips: https://learn.microsoft.com/en-us/azure/devops/pipelines/security/overview?view=azure-devops