How migrating to GitOps using Azure Pipelines made us faster and happier

Bryan Harvey-Smith
Published in ASOS Tech Blog
9 min read · Sep 28, 2022

2020 was a difficult year, full of new challenges as we were all adapting to life working fully remotely. In many ways this was a trigger to change the approach that my team was using to build and deploy our applications. It started as purely a migration from one technology to another but ended up changing how we thought about our deployment approach.

Where we were

Like a lot of the Tech teams at ASOS, our build and deployment stack for applications and infrastructure involved TeamCity and Octopus Deploy. TeamCity was used for Continuous Integration and building the deployment packages. Octopus Deploy was used for provisioning Microsoft Azure resources and deploying the application packages. The deployments used a lot of custom step templates which were written by a central team of engineers. If a deployment failed in a non-obvious way, the team using the templates would likely have to reach out for help.

The problems we’d accepted

We’d been using these tools for years and the teams had accepted a number of problems:

  1. Our instance of Octopus Deploy was slow.
    ASOS Tech is big! We have a lot of projects and a lot of users. Our Octopus Deploy instance was very heavily loaded and the user interface was very slow. When we all started working from home it was necessary to access it via a VPN, which in turn slowed the VPN down for the whole business.
  2. Changing an Octopus Deploy variable could be terrifying!
    Our pipelines are built using shared Variable Sets. If you needed to change one, you needed to consider all projects that were using it. Getting it wrong meant that a project could fail to deploy months later.
  3. Making pipeline changes needed a lot of care.
    When you changed a pipeline, you changed it for everyone. A change needed to support your feature branch affected all deployments of that application for all branches, including release branches.
  4. Out-of-date Build Agent Tooling.
    Our TeamCity and Octopus Deploy agents were custom build agents maintained by the central team. When a new version of tooling such as Az, PowerShell or even Visual Studio was released, it could be months before it was available.
  5. Rotting Pipelines.
    By far the worst! The dreaded Update Step Button!
[Image: the Octopus Deploy Update Step button, with a warning message prompting the user to merge in the latest changes]

Our pipelines used a lot of custom step templates. Over time, changes were made to these to support changes in the underlying libraries or to add new capabilities. However, if your application rarely changed, it could be months before you spotted that a pipeline no longer worked. To resolve it, you'd have to risk the Update button: you never knew what changes you were about to pull in or whether they would solve the issue. Your pipeline might be so out of date that it wasn't compatible with the new changes, and there was no going back.

We hadn’t noticed it, but these things were starting to slow us down and make us a little fearful of things going wrong.

A New Approach

ASOS Tech had recently adopted Azure Pipelines and was giving teams the choice to select the tool most suitable for them. Octopus Deploy and TeamCity were not going anywhere, but it gave us the opportunity to look for improvements. This new tooling allows you to define your build and deployment pipelines as code and commit them to source control (in our case Git) alongside your source code. We could see advantages in being able to change a pipeline on a feature branch and run it completely independently of the pipeline on the mainline. We decided to migrate a couple of components as a spike to see what lessons could be learned.

Hosted Agents

Instead of using custom build agents, we opted to use the Azure Pipelines hosted agents provided by Microsoft. These have a published set of tooling preinstalled and are kept up to date for you. There are several agent types available, and you choose the one best suited to the application being deployed. There is also the option to use self-hosted agents and base your image on the Microsoft image. This option is particularly useful if you need to control access to your components using a virtual network.
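Selecting a hosted agent is a single `pool` declaration in the pipeline YAML. A minimal sketch, assuming one of Microsoft's published image names; the tool check is purely illustrative:

```yaml
pool:
  vmImage: 'ubuntu-latest'   # Microsoft-hosted image, kept up to date for you

steps:
  - script: az version       # tooling such as the Az CLI comes preinstalled
    displayName: 'Show preinstalled Az CLI version'
```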

Task versioning

Like Octopus Deploy, Azure Pipelines provides dozens of task templates. Versioning is built into these as a suffix to the task name, for example "AzureFunctionApp@1". Breaking changes require a new major version to be issued, and pipelines only receive the update when the pipeline definition is changed and committed back to the source control repository.
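For example, a pipeline stays pinned to a task's major version until the YAML itself is edited and committed. The service connection and app names below are hypothetical:

```yaml
steps:
  - task: AzureFunctionApp@1   # '@1' pins the major version of the task
    inputs:
      azureSubscription: 'my-service-connection'  # hypothetical names
      appName: 'my-function-app'
      package: '$(Pipeline.Workspace)/drop/*.zip'
```

Moving to `AzureFunctionApp@2` becomes an explicit, reviewable change on a branch rather than something pulled in silently.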

Preventing Library Hell

Our initial idea was simply to port our existing projects from TeamCity and Octopus Deploy to Azure Pipelines. We found the Azure Pipelines equivalent of Octopus Deploy Variable Sets (Library variable groups) and we used it. However, we quickly found we had dozens of variable groups and were heading for the same problems as before.

We paused for thought…

Azure Pipelines also supports library files. These are YAML files that you commit to source control with your pipelines. You can even inherit one library file into another and override the values. Since they are also committed to source control you can track changes or change the value in a branch without impacting the mainline or releases. Perfect!

[Image: a listing of the library files that are stacked together to create a set of variables for a deployment]

One of our fears when we started was how we would understand what settings would be applied to a deployment, in the absence of the Octopus Deploy variable preview screen. However, this fear was not realised, as we have a consistent approach: the library file pattern of inheritance. In the example above, a deployment to the test environment would layer the settings as follows:

  • app-nonprod-test.yml — settings unique to the test environment, for example the Application Insights Instrumentation Key. Settings in this file can override settings in the non-prod YAML file.
  • app-nonprod.yml — settings common to all non-production environments, for example common scaling levels. Settings in this file can override settings in the app.yml file.
  • app.yml — settings common to all deployments of the application.

It’s worth highlighting that secret values are not added to the YAML files that are committed to source control. Secret values are stored in Azure Key Vault, but the YAML files do control which Key Vault the application is using in each environment.
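The layering described above can be sketched using Azure Pipelines variable template files. This is a minimal illustration with hypothetical file paths and values; templates listed later in the `variables` block override same-named values from earlier ones:

```yaml
# In the pipeline definition, stack the files least-specific first,
# so the most-specific file's values win:
variables:
  - template: variables/app.yml               # common to all deployments
  - template: variables/app-nonprod.yml       # non-production overrides
  - template: variables/app-nonprod-test.yml  # test-environment overrides

# Each file, e.g. variables/app.yml, is just a YAML file of values:
#   variables:
#     appName: 'orders-api'     # hypothetical setting
#     minInstances: 1
```

Because these files live in Git alongside the pipeline, a settings change on a branch affects only that branch's deployments.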

Adopting a GitOps mindset

The voyage of discovery that we’d started by porting our first application to Azure Pipelines made us consider our other deployment artefacts, namely Azure infrastructure. This was being deployed by a mixture of built-in and custom Octopus Deploy steps that performed operations like deploying individual resources such as Storage Accounts, alongside some ARM templates. To remove Octopus, we’d need to migrate all of these to something else, perhaps moving everything to ARM templates.

Enter Bicep

I hate ARM templates! They’re verbose, I make too many typos and it takes too many deployment iterations to get one to work.

Microsoft had released a new language for deploying resources called Bicep. It is supported out of the box by the latest versions of the Az CLI, supports all resource types and has a simple syntax. Its stand-out features, though, are the excellent plug-in for Visual Studio Code and the support for modules, meaning that you can break your infrastructure down into smaller files whilst still allowing interdependency between resources.

This makes it much easier to locate resources that you need to change, but also makes it a lot less intimidating than reading a huge ARM template.

[Image: a collection of Bicep module files, split by Azure resource type]

The auto-completion and edit-time verification of the templates by the Visual Studio Code extension means that creating the templates is incredibly fast and reliable. Achieving a successful first-time deployment of a newly defined resource is a reality — something I’d never encountered with ARM templates.
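As an illustration of the module support, here is a minimal sketch. The file names, parameter values and API version are examples, not our actual templates:

```bicep
// storage.bicep: a small module owning one resource type
param name string
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: name
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}

// Exposing the resource ID lets other modules depend on this resource
output storageId string = storage.id

// main.bicep, the orchestrator, then consumes the module:
//   module storage 'storage.bicep' = {
//     name: 'storageDeploy'
//     params: { name: 'stexampleapp001' }
//   }
```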

Since we are deploying these via Azure Pipelines we can also use the same variable file pattern that we use for deploying our applications.

[Image: a file listing showing the combination of Bicep module files and the library files used to create the set of deployment variables]

Versioning and Deployment

Now that both the infrastructure and settings are declared in files and committed to source control, we can treat this as the source of truth. Any change we want to make follows our established process: an Engineer submits a Pull Request (linked to a work item) for peer review, and once approved it is completed to the mainline branch. Changes cannot be made directly on the mainline branch, as we have security policies in place to prevent this.

The Azure Pipeline for deployment is configured to run automatically on a commit to the mainline branch. The Pipeline contains steps that automate the creation of a Change Record in our Change Management system. We therefore have complete and fully automated traceability between the Change Record, the deployment, the Git revision that triggered the deployment and the work item that necessitated the change.
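The mainline trigger is a few lines of pipeline YAML. The change-record step below is a stand-in, since the real pipeline calls our Change Management system's API:

```yaml
# Run the deployment pipeline automatically on commits to the mainline.
trigger:
  branches:
    include:
      - main

steps:
  # Stand-in for the step that raises a Change Record; the Git revision
  # is available through built-in pipeline variables.
  - script: echo "Change record for commit $(Build.SourceVersion)"
    displayName: 'Create Change Record'
```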

Improving flow and happiness

Confidence

By adopting the GitOps mindset and treating our infrastructure and configuration as code, the team has more confidence to make changes because they can safely do it within a branch. As the infrastructure, pipelines and variables are just code files in Git, they can use the normal tools to edit, compare changes and follow the usual pull request process as we do for our application code.

Eliminating Fear of Change

Two years down the line from our initial migration, we’ve eliminated the fear associated with changing variables and we’ve stopped pipelines from rotting.

We can pick up something that we haven’t deployed in a while and have confidence that we could redeploy it today without the pipeline failing. I did exactly this today, for example, when migrating a component from .NET Core 3.1 to .NET 6.

The use of Bicep has allowed the team to describe our infrastructure clearly and deploy changes consistently through development, testing and production environments and regions.

Reduced blockers

Our previous model meant that teams needed a lot of support when build or deployment tasks failed. The failures were sometimes deep within the custom tasks, or down to the tooling of the agents, neither of which the team could directly change. This led to stories becoming blocked whilst waiting for a central engineering team to become available.

The new world has allowed the teams to resolve their own issues. Pipelines are simpler, use far less customisation, and the team can edit any part of them.

With the configuration and infrastructure now being closer to the code, if you need to test something by connecting your application to a different service or with different scaling settings, you can edit the settings on your branch and deploy it. You no longer need to warn everyone that you’ve changed the settings in Octopus and remember to roll them back later.

Excitement

Stories requiring changes to our infrastructure are no longer treated with fear and trepidation, but instead as a chance to be amazed by the Bicep template deployment goodness.

Final thoughts…

When we began our migration from one deployment technology to another, I didn’t expect it to result in such a shift in how we thought about our infrastructure and deployments. It gave us the opportunity to look at what we were doing and consider new alternatives.

Looking back, would we do it again? Absolutely. The team has fully adopted the approach and started extending the original ideas to other activities, such as scaling. A task involving our infrastructure that we used to do by hand, is now performed through a Pipeline with library files.

But what about “lessons learned”? What would we do differently? Here are a few things that we learned along the way:

  • Favour Library files over Library variable groups (which are created in the DevOps portal).
  • Create a repository of shared variables, such as Azure region names or abbreviations to promote consistency across projects.
  • Create versioned, shared templates based around common operations, such as build, version and push a NuGet package.
  • If you declare a collection of Azure resources using Bicep modules, ensure that you test creating a new resource group containing those resources from scratch. This verifies that all the resource dependencies can be resolved correctly. If they can’t, it may be necessary to pass the resource ID as an output variable from the declaring module, to help the orchestrator define the correct resource dependencies.
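For the shared-template point above, a consuming pipeline might pin a versioned template repository like this. The repository and template names are hypothetical:

```yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: SharedProject/pipeline-templates
      ref: refs/tags/v1.2.0   # pin to a released version of the templates

steps:
  # Reuse a common 'build, version and push a NuGet package' template
  - template: nuget-build-and-push.yml@templates
```

Tagged refs mean a template change cannot break consumers until they opt in by moving the pin.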

Finally, I need to give a huge shout-out to my colleague Tom Scott who developed many of the patterns we’ve used. Thank you to everyone involved in the migration for all their hard work, flexibility, and resourcefulness.

Useful Resources

GitOps: https://about.gitlab.com/topics/gitops/

Bicep: https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep

Bicep Visual Studio Code Extension: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep

Azure Pipelines: https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/pipelines-get-started?view=azure-devops

About me

Hi, I’m Bryan. I’m a Lead Software Engineer at ASOS. I’m passionate about Cloud technology and lead one of the teams that builds our highly scalable, globally distributed systems. I’m also an ASOS Tech Trainer for Docker, Kubernetes and AKS. When not at work, I can normally be found cycling around the countryside somewhere.
