A Build and Release pipeline in VSTS for ASP.NET Core, Docker and Azure Kubernetes Service (AKS)

Marco De Sanctis
Jul 29, 2018 · 10 min read

NOTE: I’ll be speaking at the next IT/Dev Connections conference in Dallas. If you want to join me and have a chat about ASP.NET, Docker, Kubernetes and Azure, use the DE SANCTIS code to get a discount on the conference fee.

Part 2: Integrate Cosmos DB (and other PaaS Services) to AKS in Azure DevOps

During the last few months, the offering in Azure for container-based applications has improved dramatically: today we can privately host our images in Azure Container Registry, run them in either a serverless or a PaaS fashion, or set up a managed Kubernetes cluster in the cloud in literally minutes.

On top of that, Visual Studio 2017 offers incredible and continuously evolving support for container-based applications, thanks to the Visual Studio Tools for Docker and the upcoming Azure Dev Spaces. In a nutshell, for a .NET developer, there’s never been a better time to approach Docker and Kubernetes!

Visual Studio Team Services is no exception, as it integrates nicely with pretty much all the technologies we’ve mentioned before. In this article we’re going to explore a possible approach to

  • create a Build definition for an ASP.NET Core project (although it potentially works with any technology) and use it to generate a Docker image;
  • store the image in Azure Container Registry, and keep a history of the previous versions;
  • create a Release definition that deploys all the images in our system to an Azure Kubernetes Service cluster.

The good news is that you can do all of that using functionalities which are available out-of-the-box. You will see how simple it is!

Docker is our build agent

One of the most powerful functionalities of Docker is called multi-stage build, and it essentially allows you to leverage multiple source images during the build phase of your custom image.

In order to better understand this concept, let’s take a look at the Dockerfile that Visual Studio 2017 generates for an ASP.NET Core application:

An example of a Dockerfile for ASP.NET Core
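
What follows is a rough sketch of its content, assuming a web project named WebApplication1; the file Visual Studio generates for your solution will use your actual project names and paths:

# Stage 1: the runtime image that will eventually run the app
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

# Stage 2: the SDK image, used only to restore, build and publish
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY WebApplication1/WebApplication1.csproj WebApplication1/
RUN dotnet restore WebApplication1/WebApplication1.csproj
COPY . .
WORKDIR /src/WebApplication1
RUN dotnet publish -c Release -o /app

# Final stage: copy the published output into the small runtime image
FROM base AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]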

It basically references two separate images:

  • microsoft/dotnet:2.1-aspnetcore-runtime contains the runtime for ASP.NET Core. It’s good for running our application and it’s extremely small (roughly 250 MB);
  • microsoft/dotnet:2.1-sdk contains the full .NET Core 2.1 SDK. It’s much larger (approximately 2 GB) and therefore unsuitable for distribution. However, it contains everything we need to build (and potentially test) our application.

We won’t delve into the details of the aforementioned Dockerfile. Suffice it to say that, when we run a docker build command, we essentially

  1. start a stage based on the runtime image (~250 MB)
  2. start a second stage based on the SDK image (~2 GB)
  3. copy the source code into the SDK stage
  4. build the code
  5. copy the build output into the runtime stage

In other words, that Dockerfile is a self-contained build definition which runs in the Docker daemon and doesn’t even require any SDK to be installed on the build machine!

Docker will automatically download all the dependencies it needs to properly build and package our application. Yes, you got it right: no more headaches managing and maintaining the SDKs installed on our build agents.
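
As a quick sanity check, on any machine that has Docker but no .NET SDK installed, we can build and run the application with just a couple of commands (image name and ports here are arbitrary):

# build the image using the multi-stage Dockerfile in the current folder
docker build -t backend:dev .
# run it locally, mapping the container's port 80 to port 8080 on the host
docker run --rm -p 8080:80 backend:dev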

Isn’t it beautiful? This also has a profound impact on how simple our VSTS pipelines become. And that’s exactly our next step.

The Build definition in VSTS

With the whole build logic embedded in the Dockerfile, there’s really little left to do in a VSTS Build definition, apart from triggering a Docker build and publishing the image to a registry.

Visual Studio Team Services comes with an official Docker task, which is basically a wrapper around the Docker CLI and allows us to send commands to the engine.

The Docker task in VSTS sends commands to the Docker CLI

This makes the whole Build definition extremely simple: it consists of just two steps, one to build the image and one to push it to a registry.

The build definition in VSTS

One crucial aspect, which we often leave at its default value in VSTS, is the operating system of the build agent we are going to use. In our case we are building Linux images, therefore it’s imperative to use a Linux build agent.

In order to speed up build times, Docker makes extensive use of cached layers. However, a hosted agent like the one in the figure gets re-provisioned every time it’s allocated, so there will never be any cached image to reuse. My recommendation is to spin up a custom build agent: it could just be a Linux virtual machine in Azure. I’ll blog about it in the future.

Let’s have a look at the configuration for the Build Backend image step:

The “docker build” step in the definition

As we can see from the picture above, the Docker task nicely integrates with every Docker registry, including Docker Hub and Azure Container Registry. In our example we have:

  1. set Azure Container Registry as our target, by selecting the Azure subscription and the registry name. In order to publish to a different registry, such as Docker Hub or a generic Docker registry, we must configure it as a Service Connection endpoint;
  2. set the Action to “Build an image”, which basically triggers a docker build command;
  3. selected the Dockerfile that we want to build. Also note the --pull optional parameter, which ensures that Docker pulls any new version of the base images from Docker Hub, if available;
  4. configured the Image name to include the Build ID. This ensures that we keep a history of all the artifacts we have produced (the equivalent CLI command is right after this list).
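
In plain Docker CLI terms, this step is roughly equivalent to the following command (registry, repository and Dockerfile path are placeholders; $(Build.BuildId) is the predefined VSTS variable used in the tag):

docker build --pull -f Backend/Dockerfile -t myregistry.azurecr.io/backend:$(Build.BuildId) .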

The Docker Push step is similar, the only notable difference being that the Action is this time set to “Push an image” (the CLI equivalent is right below the figure):

The push step of our build pipeline
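
Once again, this roughly translates to a single CLI command, with the service connection taking care of authenticating against the registry (same placeholder image name as before):

docker push myregistry.azurecr.io/backend:$(Build.BuildId)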

That’s it for the Docker images. Regardless of how many microservices we have in our solution, setting up a build pipeline for them is just a matter of replicating the two tasks above.

However, there’s still one step to do before we can move on to the Release definition.

Preparing the Kubernetes YAML file

Thanks to what we’ve just done, every time Visual Studio Team Services successfully completes a build, we will see a new image being stored in Azure Container Registry, properly tagged with the build ID.

But what about the application as a whole? As we’ve mentioned at the beginning of the article, we are going to use a Kubernetes cluster in Azure.

Kubernetes uses a declarative approach, in the form of a YAML file, to describe which containers an application needs and how they are configured and exposed. This file is effectively another portion of our source code: it can live in a repository of its own and have its own build definition.

This time, all we have to do is publish it as a build artifact, so that we can reference it later in the release.

The build definition for the YAML files for Kubernetes

As you can see from the figure above, the build steps

  • copy the YAML file(s) to the Artifact Staging Directory;
  • publish the Artifact made out of this directory.

Much more interesting than that is the content of the YAML file itself. Let’s have a look at an excerpt of it:

apiVersion: v1
kind: Namespace
metadata:
  name: #{Release.EnvironmentName}#
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: myreg/backend:#{Release.Artifacts.Backend.BuildId}#
        ... cut ...
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: myreg/frontend:#{Release.Artifacts.Frontend.BuildId}#
        ... cut ...

The key point here is that we are using a tokenised version of the YAML file, so that these tokens can be replaced with actual values during the release phase. More specifically, in the example above we have set

  • a token for the environment name, so that multiple instances of the whole application can coexist in the same AKS cluster;
  • a token for the image referenced by each container: this allows us to select the version of the image that we want to roll out to a given environment (a small before/after example follows this list).
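
For example, assuming that build 1234 of the backend is selected for the release, the Replace Tokens step will turn the tokenised line into a concrete image reference (the build number is purely illustrative):

# in source control
image: myreg/backend:#{Release.Artifacts.Backend.BuildId}#
# after token replacement during the release
image: myreg/backend:1234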

We finally have all the bits and pieces to deploy our application. It’s now time to create a release pipeline for it.

Configuring the AKS cluster in VSTS

In order to connect to our Kubernetes cluster, we must register it as a service connection in our project in Visual Studio Team Services.

Let’s head to Settings > Services, then click on “New Service Connection” and select Kubernetes from the dropdown list. In the dialog that shows up, we must configure a couple of security options in order to be able to connect.

Adding an AKS cluster to Visual Studio Team Services

The way to retrieve those values is kubectl, which we probably already have installed on our machine and connected to the cluster. We can run kubectl config view to retrieve the Server URL of our cluster:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://XXXX.YYYY.azmk8s.io:443
  name: desakswe
contexts:
... cut ...

The next textbox to fill in is called KubeConfig. For this one, we have to paste the content of our kubeconfig. This is the file in which kubectl maintains the connection data for the clusters it has access to. It’s stored in a folder on our local machine, which usually is one of the following (or see the command right after this list):

  • ${HOME}/.kube/config for Linux systems
  • %UserProfile%\.kube\config for Windows machines
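
If we don’t feel like hunting for the file on disk, kubectl can also print the full, unredacted configuration for us:

kubectl config view --raw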

Last but not least, don’t forget to tick the Accept Untrusted Certificates checkbox.

We can then use the Verify Connection button to check if everything works. If it does, we are finally ready to deploy some containers into our Kubernetes cluster!

Creating the Release definition for AKS

With all the bits and pieces lined up, we can now create a Release definition. Let’s start by referencing all the Build artifacts that we have, including the provisioning one, and defining our first environment, called “staging”.

Our release definition encompasses all the build artifacts

Guess what? The release definition, too, is made of only two steps; I swear I didn’t do it on purpose :) You can see them in the picture below:

The release involves only two steps

The first step uses a Replace Tokens task, since we want to modify the application’s YAML file that we created before, replacing tokens such as the Build IDs with the values from the artifacts we’ve selected for the release.

The second step is a Deploy to Kubernetes task, which will apply the changes to the AKS cluster in Azure.

The Deploy to Kubernetes task allows us to control the deployment from VSTS

This task basically executes a kubectl command against the targeted cluster, based on the following settings:

  1. The Kubernetes service connection points to the cluster connection that we configured in the previous section;
  2. The Kubernetes Namespace where we deploy our assets is taken from the environment name. The namespace is also included as an object in the YAML file that we’ve seen before, because we have to create it if it doesn’t exist;
  3. Deploy to Kubernetes supports a number of built-in kubectl commands and, in our case, we are selecting apply;
  4. The Arguments section specifies the YAML file that represents our system (the resulting command is sketched right after this list).
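
In other words, the task ends up issuing something along these lines against the cluster, where the file name comes from the published artifact and the namespace from the environment (both are placeholders here):

kubectl apply -f deploy.yaml --namespace staging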

This is it! When launching a new release, we can select the image versions we want to deploy and let Kubernetes figure out which components must be updated and which policies it has to apply during the operation (e.g. a rolling update policy defined in the Deployment object).
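
If we want to follow the rollout from our own machine, kubectl can report its progress (the deployment and namespace names below come from the sample manifest and the environment used earlier):

kubectl rollout status deployment/frontend --namespace staging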

Conclusions and next steps

During this article we’ve explored how to create a simple build and release pipeline in Visual Studio Team Services to deploy a microservices-based architecture to an Azure Kubernetes Service cluster.

The example was based on ASP.NET Core and the Visual Studio Tools for Docker. This allows us to define the build logic entirely within the Dockerfile generated by Visual Studio, making the Build Definition just a matter of triggering a Docker Build process and pushing the generated images to an Azure Container Registry.

After that, we’ve investigated how we can register a Kubernetes cluster on our VSTS project and we’ve presented a simple Release definition to deploy containers into it.

This implementation is a good starting point for a real-world scenario; however, it lacks a couple of important aspects that we’ve kept aside for the sake of simplicity:

  • How to handle configuration and secrets, which might change between environments, and, more generally, how to integrate them with the PaaS offering in Azure;
  • How to execute unit and integration tests during the build phase

I’ll talk about these topics in the next articles.
