Deploying .NET Applications to Azure Web Apps for Linux Containers

Ali Mazrae
ClearBank
9 min read · Jul 11, 2022


As we look to scale and expand ClearBank, we need a hosting solution that will grow with us. We want the flexibility and benefits that container orchestration tools provide: replacing workers at the slightest hint of poor performance, canary deployments, one-minute deployment times, and all the other good stuff that comes as standard with Azure Kubernetes Service (AKS).

Whilst we're working on building out a production-ready AKS cluster, we decided to try out one of Azure's PaaS offerings that operate under the 'chuck me an image and I'll run it' principle. App Services felt like the perfect platform to help us get something production-ready in no time, whilst still maintaining some level of control.

In this post, I share how to build and release a .NET application to an Azure Web App for Linux Containers, with secure authentication between services using Managed Identities.

Prepare the Application

At the heart of the operation is the Dockerfile, effectively a script with all the commands required to assemble an image. Microsoft provide a default one when you create a new app, which is a great base to start from. There's plenty of documentation about this online, so rather than do it an injustice here I'll just link to some instead. Docker have some good examples, or check out the Microsoft docs.

The main implementation detail worth discussing here is in the setup of our C# app. Images tend to be built once and pushed to many different environments, which requires environment-specific config to be separated from the app itself and instead provided by the hosting platform. In the case of App Services, config is exposed as environment variables, so we need to make sure the app consumes them.

They can be added to the configuration when building the host…

hostBuilder.ConfigureAppConfiguration(config =>
{
    config.AddEnvironmentVariables();
});

… and then accessed as normal with the IConfiguration object

var configValue = configuration.GetValue<string>("Another:ConfigValue");

Provision the Container Registry

We use HashiCorp Terraform for managing our Infrastructure as Code, which, with the Azure Terraform Provider, makes provisioning resources in Azure fairly pain-free. The two main pieces of infrastructure we need are the App Service itself, and a private container registry to store our images.
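For context, a minimal provider setup might look something like this; the version constraint below is an assumption, so pin whichever azurerm release you're actually using.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.90" // assumed version constraint
    }
  }
}

provider "azurerm" {
  features {}
}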

The Terraform documentation is great, so I won’t go through every line of code required for this, instead I’ll focus on the more interesting parts.

To begin, we provision a resource group and a private Azure Container Registry (ACR).

resource "azurerm_resource_group" "acr" {
name = "acr-resource-group"
location = "uksouth"
}
resource "azurerm_container_registry" "acr" {
name = "IStoreImages"
resource_group_name = azurerm_resource_group.acr.name
location = azurerm_resource_group.acr.location
sku = "Standard"
}

Next, create a service principal for the Azure DevOps service connection and assign it the 'AcrPush' role definition. This is needed later when we start pushing images to the registry.

resource "azurerm_role_assignment" "azure_devops" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPush"
principal_id = azuread_service_principal.azure_devops.id
}
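For completeness, one way the backing service principal could be defined in Terraform is sketched below, using the azuread provider and placeholder names; if your service connection auto-creates its own principal instead, you'd reference that principal's object ID here.

// hypothetical application and service principal backing the DevOps service connection
resource "azuread_application" "azure_devops" {
  display_name = "azure-devops-acr-push"
}

resource "azuread_service_principal" "azure_devops" {
  application_id = azuread_application.azure_devops.application_id
}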

Provision the App Service

Every App Service is assigned to an App Service Plan, which defines the compute that the app has access to. This is the point to do some research on whether you should create a Windows or Linux-based App Service Plan; here's a helpful post. Linux tends to be the standard with containers, but if your app targets .NET Framework then Windows is your only option.
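As a rough sketch, and assuming the same generation of the azurerm provider as the azurerm_app_service resources used later, a Linux plan needs kind = "Linux" and reserved = true; the names and SKU here are placeholders.

resource "azurerm_app_service_plan" "plan" {
  name                = "linux-app-service-plan"
  location            = azurerm_resource_group.acr.location // in practice this would likely live in its own resource group
  resource_group_name = azurerm_resource_group.acr.name
  kind                = "Linux"
  reserved            = true // required for Linux plans

  sku {
    tier = "PremiumV2"
    size = "P1v2"
  }
}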

Next up is the App Service itself and, for safer deployments, we provision and use a staging slot alongside the 'production' slot. The idea is to push the updated image and configuration to the staging slot, test that it's healthy, and then swap the slots; the healthy, tested, and up-to-date staging slot becomes the production slot and starts handling requests. In the case of a bad deployment, a staging slot can massively reduce the time to restore, as it's just another slot swap to get the previous version of the application running in Production again.
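The fragments that follow slot into resources along these lines; this is a simplified sketch using the older azurerm_app_service and azurerm_app_service_slot resources, with placeholder names.

resource "azurerm_app_service" "app_service" {
  name                = "MyAppService"
  location            = azurerm_resource_group.acr.location
  resource_group_name = azurerm_resource_group.acr.name
  app_service_plan_id = azurerm_app_service_plan.plan.id

  // identity, site_config and lifecycle blocks as shown in the fragments below
}

resource "azurerm_app_service_slot" "app_service_slot" {
  name                = "staging"
  app_service_name    = azurerm_app_service.app_service.name
  location            = azurerm_resource_group.acr.location
  resource_group_name = azurerm_resource_group.acr.name
  app_service_plan_id = azurerm_app_service_plan.plan.id

  // identity and site_config blocks as below, plus the staging app_settings
}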

As we only make config changes on the staging slot, we tell Terraform to ignore all app_settings on the production slot.

// production slot only
lifecycle {
  ignore_changes = [
    app_settings
  ]
}

The final thing we need to add here is permission for the staging slot to pull an image from the ACR we created. This can be done by creating a service principal with a Client ID and Secret and setting them in the configuration, but we prefer to use Managed Identities. First, we tell the App Service we want a system-assigned identity for both slots.

// both slots
identity {
  type = "SystemAssigned"
}

site_config {
  acr_use_managed_identity_credentials = true
}

Then we update the ACR to give the ‘pull’ permission to each identity.

resource "azurerm_role_assignment" "pull_acr_image" {
scope = azurerm_container_registry.acr.id
principal_id = azurerm_app_service.app_service.identity.0.principal_id
role_definition_name = "acrpull"
}
resource "azurerm_role_assignment" "pull_acr_image_slot" {
scope = azurerm_container_registry.acr.id
principal_id = azurerm_app_service_slot.app_service_slot.identity.0.principal_id
role_definition_name = "acrpull"
}

Whilst the staging slot pulls from the ACR during a release, the production slot requires permission in case of app restarts, platform upgrades etc.

At this point we've created a private ACR to hold our images, an App Service to host them, and secure access for the App Service to pull whichever image it needs from the ACR. Now we need something to run!

Configure the App Service

We still like to keep environment agnostic configuration in the appsettings.json file, but variables that change between environments need to be added to the app settings of the App Service staging slot.

// staging slot
app_settings = {
  APPLICATIONINSIGHTS_CONNECTION_STRING = var.ai_connection_string
  Another__ConfigValue                  = "config"
}

Notice that when we add values to the app settings, any colons (':') in the configuration keys are replaced by double underscores ('__'), but the values can still be accessed as normal in code. This is only necessary with Linux containers, as ':' isn't supported in environment variable names on Linux.

I won’t go into too much depth here, but as an extra security measure we store sensitive variables in a Key Vault, then reference the entry in the app settings. App Service authenticates using Managed Identity, then picks up the variables and exposes them to the app as normal. This extra layer of security means a human can’t go and read sensitive values in the app settings, and also stops them being exposed in the Terraform plan. Read more about it here.
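As a sketch of what that looks like in the staging slot's app settings (the vault and secret names here are made up), the value is a Key Vault reference rather than the secret itself, and the slot's managed identity needs 'Get' permission on the vault's secrets.

// staging slot - hypothetical vault and secret names
app_settings = {
  ServiceBus__ConnectionString = "@Microsoft.KeyVault(SecretUri=https://my-key-vault.vault.azure.net/secrets/servicebus-connection-string/)"
}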

Build the Image

At this point, we have a Dockerfile, our container-ready application, and the infrastructure required to run it — all we need is some repeatable way to get it there. Azure DevOps already has some Docker tasks which make this easy. If you want to run any in-memory tests against your solution (which you should!), now is the time to do it; build the projects as normal, run the tests and publish any artifacts you might want later down the line. For us, we build and publish some integration tests that we run after deployment.

Then, we authenticate with the ACR using the service connection we created earlier and use the Docker Build and Push task to get the image into the repository. Make sure the Dockerfile is in the root of the repo so it’s picked up by the task.

- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    containerRegistry: acrServiceConnection
    repository: net-application # name of repo in ACR
    tags: $(Build.BuildNumber)

If you need to pass any arguments into the Dockerfile, the task can be split into separate build and push steps, with the arguments passed to the build step. Make sure to pick something meaningful for the image tag: if one isn't specified, the default is 'latest', and the image in the ACR is always overwritten with the most recent build. I picked the build number as an example, as it makes the image identifiable in the ACR. Ideally a bit more effort should go into the tag; something like '(BuildNumber)-(ShortGitHash)-(BranchName)' ensures images are never overwritten, and the branch name makes an image's origins fairly obvious.

Run the Application

Now you can go and tell your App Service to run the image! If you're too excited, manually editing the app settings with the path of the image will pull and run it in a container, but hold your horses; let's do this properly.

It's worth stating our end goal for the deployment at this point, so we can discuss potential ways to get there. All we want to do is update the configuration on the staging slot with a key-value pair containing the path of the image we just pushed, check that it's healthy, then swap it with the production slot.

One method could be to consume the image tag as a Terraform variable and then build up the full path by referencing the ACR resource. Terraform would apply the change, App Service would pull and run the image, and we'd be happy.
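A sketch of that approach, with a hypothetical image_tag variable and the repository name from earlier:

variable "image_tag" {
  type        = string
  description = "Tag of the image produced by the build pipeline"
}

// staging slot
app_settings = {
  DOCKER_CUSTOM_IMAGE_NAME = "${azurerm_container_registry.acr.login_server}/net-application:${var.image_tag}"
}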

Sure, that works, but we don't really like it. For us, Terraform manages our infrastructure, and performing the final step of pushing out an application release with it feels wrong. We like to separate infrastructure changes and app deployments into different stages. It just happens that in this situation the app release requires an infrastructure update.

This method would also require some hacky Terraform ‘null_resource’ blocks to swap the slots by running Azure CLI commands.
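Something along these lines, with placeholder names, is what we'd be signing up for:

resource "null_resource" "swap_slots" {
  // re-run whenever a new image tag is released
  triggers = {
    image_tag = var.image_tag
  }

  provisioner "local-exec" {
    command = "az webapp deployment slot swap --resource-group MyResourceGroup --name MyAppService --slot staging --target-slot production"
  }
}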

Instead, we can use the AzureAppServiceSettings@1 DevOps task to set the image name on the staging slot app settings.

- task: AzureAppServiceSettings@1
  displayName: update image path in app settings
  inputs:
    azureSubscription: # service connection with permissions to perform operations
    appName: "MyAppService"
    slotName: "staging"
    resourceGroupName: "MyResourceGroup"
    appSettings: |
      [
        {
          "name": "DOCKER_CUSTOM_IMAGE_NAME",
          "value": "myregistry.azurecr.io/net-application:1.0.1-za12f36-package-updates",
          "slotSetting": false
        }
      ]

The image will be pulled, then we start up the slot.

- task: AzureAppServiceManage@0
  displayName: start staging slot
  inputs:
    azureSubscription: # service connection with permissions to perform operations
    Action: Start Azure App Service
    SpecifySlotOrASE: true
    WebAppName: "MyAppService"
    ResourceGroupName: "MyResourceGroup"
    Slot: "staging"

Now we tell the App Service to swap the staging slot into production. It waits until the staging slot is up and running by calling its health endpoint, and only completes the swap when the app is ready to serve requests.

- task: AzureAppServiceManage@0
  displayName: swap slots
  inputs: # other inputs identical to above
    Action: Swap Slots

Finally, we stop the old application running which is now in the staging slot — important if it’s a worker process, less important if it’s an API.

- task: AzureAppServiceManage@0
  displayName: stop staging slot
  inputs: # other inputs identical to above
    Action: Stop Azure App Service

We’ve done it! An automated build and release pipeline for our new containerised .NET application, running in a Linux container hosted in an Azure App Service.

Image Promotion and Multiple Environments

Up to now, I haven't mentioned much about setting this up in different environments. Most of the changes are just parametrising everything (and hoping for the best). There is one problem, however, which relates to the 'build once, run anywhere' principle often referred to with containers.

We spotted a potential risk in giving our App Service in Production permission to pull and run any image from the same ACR that we chuck all our development images into. They're untested, not production-ready and, as is the nature of dev environments, some of them just won't work. A slip of the keyboard whilst updating the app settings and our Production service could end up running who-knows-what.

To avoid this scenario, we decided to create one ACR per environment, so each environment's App Service is only aware that one ACR exists, and we copy images between them in the release pipeline using a promotion pattern.

To start, we build and push the image into a ‘shared’ ACR. Just before the release to a development environment for testing, the release pipeline copies it into the ACR for that environment using the Azure CLI ‘az acr import’ command.

az acr import --name ${{ target_registry_name }} --source ${{ source_image }} --registry ${{ source_resource_id }} --force

Once we’re happy with the code and it’s merged to master, the image is rebuilt from the master branch and tagged with something sensible.

Our production releases are run with templated tasks which enforce the Path to Live. Whenever a deployment to production is kicked off, it must be deployed to Staging and have all regression tests passing. We utilise this enforcement in image promotion, so instead of having each release reach into the ACR of the environment ‘below’ it, the image can just be copied from the shared ACR.

Path to Live

This whole effort results in a Prod ACR containing only production-ready images that have been fully regression tested in lower environments and can be trusted to run in Production. Again, it's simply a safety mechanism to ensure that no 'dirty' images can be run in Production.

Summary

This has been a fairly high-level overview of how we automate safe deployments of .NET applications to Production using the Web App for Linux Containers technology. With this approach, we've built a solid base as we prepare for AKS, and we're now enjoying some of the great benefits of containerisation:

  • Decreased lead time with 5-minute deployments
  • Decreased time to restore with deployment slots and managed Infrastructure as code
  • Increased security with the use of Managed Identity and Service Connections
  • Increased reliability with an image testing and promotion mechanism
