Azure DevTest Labs: Provisioning Test and Staging Environments

Bhagavati Kumar
12 min read · Feb 25, 2021

Most cloud applications today are developed as a collection of cooperating microservices and serverless components.

In order to perform end-to-end functional testing of a specific microservice, developers need to be able to quickly provision a Dev/Test instance of that microservice.

When the feature under development is complex and needs to interact with several other microservices, one typically needs the ability to quickly provision a Staging environment in which to perform the end-to-end functional tests.

QA teams are another major consumer of Staging environments.

Longevity and stress testing activities also need production replicas in staging environments.

There are two ways people generally go about creating a Dev/Test instance:

1. Locally, on their laptop/desktop, using Docker containers to host the individual tiers, or using minikube if the project is built on Kubernetes.

2. In the cloud, in some kind of sandbox environment created from a specific build of the microservice. Usually there is a CI pipeline which builds an image as soon as the developer checks in code, and then an automatically or manually triggered release pipeline which the developer uses to create a Dev/Test instance of the microservice.

When the infrastructure is provisioned using Kubernetes the easiest way to provide Dev/Test and Staging instances for developers is to:

1. Have a DEV and/or STAGING Kubernetes cluster separate from the production cluster.

2. Have pipelines which can provision a Dev or Staging instance, and corresponding pipelines which can destroy one.

I used the Kubernetes approach in my previous project, while my current project is built on Azure PaaS. So in this post my focus is on an approach for creating Dev and Staging environments for developers when the cloud application uses Azure PaaS components such as App Service, Azure Database for MySQL, Cosmos DB, Cognitive Services, Search, etc.

Sandboxing

In the context of Azure, assuming an organization has signed up for a cloud account, it can create multiple subscriptions under that account (e.g. Development, Staging and Production subscriptions). Instances can then be provisioned under those subscriptions. The Azure Architecture Blueprints suggest such a scheme.

Image Source: Azure Architecture Blueprints

Within each subscription one can use Azure resource groups to create logical collections of virtual machines, storage accounts, virtual networks, web apps, databases, and/or database servers. Typically, users group the related resources of an application, divided into groups for production and non-production, but one can subdivide further as needed (see the sketch after the image below).

Image Source: parkmycloud.com
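As a minimal sketch of that division using the Azure CLI (the group names are illustrative, not from any real deployment):

# Create separate resource groups for the production and non-production
# resources of an application; subdivide further as needed.
az group create --name myapp-prod-rg --location westus
az group create --name myapp-dev-rg --location westus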

Scoping of Resource Names

There is still one problem to tackle. When creating multiple sandbox environments within a single subscription (one for each developer on the project), unique naming within various scopes needs to be handled. A resource group is only a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed. Every Azure resource type has a scope that defines the level at which its names must be unique, and a resource must have a unique name within its scope.

Image Source: Microsoft

This means we need to manipulate the names of the resource groups created by the pipeline(s) (for example, by prefixing or suffixing them with the logged-in user's name); we cannot use a single Dev/QA/Staging resource group as the picture above seems to indicate. We also need to apply this unique naming scheme to all PaaS resources with public IP endpoints. For example, the App Service web app names, since the WebAppName contributes to the public URL of the App Service via the convention https://<WebAppName>.azurewebsites.net

In contrast, in the Kubernetes world namespaces come to the rescue. Namespaces provide a scope for names: names of resources need to be unique within a namespace, but not across namespaces. Namespaces are also a way to divide cluster resources between multiple users (via resource quotas). When you create a Service, Kubernetes creates a corresponding DNS entry of the form <service-name>.<namespace-name>.svc.cluster.local. For example, if you have a Service called my-service in a namespace my-ns, the control plane and the DNS Service acting together create a DNS record for my-service.my-ns, which resolves to the cluster IP assigned to the Service.
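A minimal sketch of the idea, assuming a per-developer namespace convention like the one discussed in the Cost Control section below (the manifest my-service.yaml and the service name are hypothetical):

# Each developer gets a namespace scoped to the microservice under test.
kubectl create namespace johndoe-servicexyz

# Deploy the service into that namespace; its name only has to be unique
# within the namespace, not across the cluster.
kubectl -n johndoe-servicexyz apply -f my-service.yaml

# Inside the cluster the Service is reachable via its namespaced DNS name:
#   my-service.johndoe-servicexyz.svc.cluster.local
kubectl -n johndoe-servicexyz get svc my-service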

Cost Control

If the pipelines allow developers to create instances at will, the number of resources under the account grows at the same rate, and the organization ends up paying a large bill to the public cloud provider.

In the Kubernetes world, the pipelines could deploy instances under a namespace which combines the logged-in username with the microservice name as a suffix (e.g. johndoe-servicexyz) for Dev/Test environments, and with a fixed -staging suffix (e.g. johndoe-staging) for Staging. This limits a developer to one Dev/Test instance per microservice and a maximum of one Staging instance.

To achieve the same thing with Azure PaaS one would have to manipulate the resource group names as parameter overrides if the ARM template references the resource group. Many useful, publicly available ARM templates are written simply to be deployed into a target resource group specified externally in the deployment command (--resource-group ExampleGroup). If using Terraform, the resource group name would likewise have to be templatized. By following conventions similar to those mentioned above for Kubernetes namespaces, we might be able to limit a developer to one Dev/Test instance per microservice and a maximum of one Staging instance. However, this could imply changing existing ARM templates or Terraform scripts and providing parameter overrides at the appropriate places in the pipelines; for a complex multi-VM ARM template the process can get tedious and error-prone. A sketch of what such a deployment might look like follows.
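A hedged sketch of that per-user convention with a plain ARM deployment (the template file name and the webAppName parameter are illustrative; a real template may need more overrides):

# Per-user Dev/Test instance: both the resource group name and the
# web app name carry the user prefix to keep names unique in their scopes.
USER_ID=johndoe
APP=servicexyz

az group create --name "${USER_ID}-${APP}" --location westus

az deployment group create \
  --resource-group "${USER_ID}-${APP}" \
  --template-file azuredeploy.json \
  --parameters webAppName="${USER_ID}-${APP}"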

Is There a Simpler Alternative?

Enter Azure DevTest Labs Environments!

Azure DevTest Labs is a self-service sandbox environment that enables developers on teams to efficiently self-manage virtual machines (VMs) and PaaS resources without waiting for approvals, and to quickly create Dev/Test environments while minimizing waste and controlling costs. Azure Policy can add additional rules on the size and number of App Services or PaaS databases to further limit costs.

The DevTest sandbox can also be configured for on-premises connectivity via Azure ExpressRoute or a Site-to-Site VPN. This allows access to enterprise secure-zone services during development, and it can force all development network traffic in and out of the cloud environment through an on-premises firewall for security/compliance reasons.

The main idea of DevTest Labs revolves around VMs, custom images, policies on various VM parameters, controls on the number of VMs per user, and the concept of claimable VMs from a pool of provisioned VMs. So it is primarily about providing correctly configured development machines to developers. However, there is an additional concept called Environments which caught my eye.

In DevTest Labs, an environment refers to a collection of Azure resources in a lab. The Azure DevTest Labs FAQ contains the following question and answer:

How can I use Resource Manager templates in my DevTest Labs Environment?

You deploy your Resource Manager templates into a DevTest Labs environment by using steps mentioned in the Environments feature in DevTest Labs article. Basically, you check your Resource Manager templates into a Git Repository (either Azure Repos or GitHub), and add a private repository for your templates to the lab. This scenario may not be useful if you’re using DevTest Labs to host development machines but may be useful if you’re building a staging environment, which is representative of production.

It’s also worth noting that the number of virtual machines per lab or per user option only limits the number of machines natively created in the lab itself, and not by any environments (Resource Manager templates).

This blog post also discusses how to create multi-VM environments from your Azure Resource Manager templates.

So the question in my mind was: can I use the initiating user's name/ID, or <username>-<ApplicationName>, as the environment name? And can I similarly prefix the WebAppName with the logged-in user's name (to make the public URLs unique), thereby achieving the required sandboxing as well as cost limiting? Of course, I am using the term username loosely here; it has to be an attribute of the logged-in user that is unique (such as the user ID or the email ID without the @domain suffix).

My initial experiment with a simple ARM template seems to indicate this works without touching any of the publicly available ARM templates, just by supplying some parameter overrides in the Azure release pipeline (for example, to override the WebAppName).

The environment deployment internally creates a unique resource group of the form <LabName>-<EnvironmentName>-<SomeUniqueId>; an environment appears to be a shim over resource groups in some sense, as the query below illustrates.
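One way to see this from the CLI (a hedged sketch; the JMESPath filter simply matches the naming pattern above for the lab created later in this post):

# List the resource groups that DevTest Labs generated for environments
# in the lab named MyDevTestLab.
az group list \
  --query "[?starts_with(name, 'MyDevTestLab-')].name" \
  --output tsv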

Whether this paradigm works in a complex multi-VM ARM template scenario is something I am in the process of discovering, and I would like feedback from experts on whether it runs into problems similar to those I alluded to in the Scoping of Resource Names section. If I find any problems (corner cases) with this approach, I will update this post accordingly.

The rest of this post is a step-by-step walkthrough of creating a DevTest Lab in an Azure account and provisioning multiple Dev environments within the lab using Resource Manager templates. The application itself is a basic HelloWorld Node.js/Express application generated using npx (npx express-generator), as sketched below. I can share the code if someone has difficulty creating this HelloWorld app.
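Generating and committing the app looks roughly like this (the repository setup commands are the usual Git boilerplate, not specific to this post):

# Scaffold the HelloWorld Express app and install its dependencies.
npx express-generator helloworld
cd helloworld
npm install

# Put it under version control, ready to push to the Azure Repos
# repository created in step 4 below.
git init
git add .
git commit -m "Initial HelloWorld Express app"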

1. Create an Azure account with a subscription. I used a Free Trial account for this experiment.

2. Create a DevTest Lab resource with a new resource group and lab name.
Create a DevTest Lab Resource on Azure Portal

3. In my case the lab looks like this (name MyDevTestLab, resource group DevTestLabRG):

DevTest Lab named MyDevTestLab created under resource group DevTestLabRG

4. Create a project repository in Azure DevOps for your HelloWorld Express app, and create another project repository to store the (potentially customised) Azure RM templates for each of your microservices. To begin with, it will only contain the Linux web app template. I named my repositories helloworld and azure-devtestlab respectively, created under my default Azure DevOps organization.

Projects helloworld express app and ARM templates under azure-devtestlab

5. The contents of the azure-devtestlab repository are exactly the files present in https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-service-docs-linux, placed under a directory named ArmTemplate.

Content of ARM Template 101-app-service-docs-linux

6. Set up a CI pipeline for the helloworld project by adding the following azure-pipelines.yml to the top-level directory of your helloworld repository and pushing the change to your remote Git repository.

# Node.js Express Web App to Linux on Azure
# Build a Node.js Express app
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '948dbf02-d434-4b04-bfa2-2383269fc602'

  # Web app name
  webAppName: 'mydevtestapp'

  # Environment name
  environmentName: 'mydevtestapp'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '10.x'
      displayName: 'Install Node.js'

    - script: |
        npm install
        npm run build --if-present
        npm run test --if-present
      displayName: 'npm install, build and test'

    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true

    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

7. Set up a single-stage, two-task release pipeline for the helloworld project with an artifact dependency on the CI pipeline.

The first task in the pipeline uses the (Preview) Create Azure DevTest Labs Environment feature. We specify the lab name we created earlier and supply the username as a variable reference. We also specify the repository for the ARM templates and the template name 101-app-service-docs-linux as discussed above. The parameter override for the ARM template overrides the webAppName parameter. If you are using a free-trial Azure account, then when creating a second environment for a different user you need to create it in a different Azure region; add the additional override parameter -location 'West US', separated by a single space from the existing parameter: -webAppName '$(userid)-$(MyAppName)' -location 'West US' (A CLI sketch of this step follows the screenshot below.)

Task create Azure DevTest Lab Environment
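For reference, roughly the same step expressed with the Azure CLI's az lab environment commands. This is a hedged sketch: the exact shape of the --arm-template identifier and the --parameters payload depends on how the template repository is registered with the lab, so treat both as assumptions to verify against the CLI reference.

# Create a per-user environment in the lab from the checked-in ARM template,
# overriding webAppName so the public URL stays unique per user.
USER_ID=johndoe
az lab environment create \
  --lab-name MyDevTestLab \
  --resource-group DevTestLabRG \
  --name "${USER_ID}-helloworld" \
  --arm-template "<resource-id-of-101-app-service-docs-linux>" \
  --parameters "[{\"name\": \"webAppName\", \"value\": \"${USER_ID}-helloworld\"}]"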

If you plan to use Terraform for automation, be aware that the Create DevTest Labs Environment operation is not yet supported in the AzureRM Terraform provider as of 24 February 2021. Terraform does support creating a DevTest Lab itself, though.

The second task essentially deploys the zip archive created by the CI pipeline into the App Service deployed from our template (101-app-service-docs-linux). Note that the App Service name must be identical to what was specified above as -webAppName. Note also the package folder for the zip archive created by the CI pipeline: for some reason there is a double drop, /drop/drop (I need to fix the CI pipeline YAML to ensure a single drop directory is created). A CLI equivalent is sketched after the screenshot below.

Deploy AppService Image Drop to AppService created under the Environment.
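A hedged CLI sketch of what this task amounts to; the generated environment resource group and the archive path vary per environment and build (and remember the double drop/drop quirk mentioned above):

# Deploy the CI-produced zip archive to the web app that the environment
# created; the app name matches the -webAppName override from the first task.
az webapp deployment source config-zip \
  --resource-group "<generated-environment-resource-group>" \
  --name "${USER_ID}-helloworld" \
  --src "drop/drop/<buildId>.zip"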

Note that the second task is not really environment- or DevTest Labs-aware; it just uses the App Service name as the link to the previous step. In my opinion, what might ideally be needed is an environment-aware App Service deploy task. So if I discover issues or corner cases with the approach proposed in this post, this could be one of the root causes.

Although we should really use the logged-in user, which is available as a system variable in pipelines, for the purpose of this experiment I have used two custom variables.

I then manually updated the userid variable to deploy two environments, named Kumar and JohnDoe, as shown below.

Two Dev/Test instances of the helloworld app deployed under two DevTest Labs Environments: Kumar, JohnDoe

Navigate into either of the environments and you will see the App Service and App Service plan that were created by the custom ARM template used in the release pipeline. Finally, you can navigate to the App Service resource to see its details and its deployed URL.

Details of the AppService deployed under the MyDevTestLab Environment named JohnDoe

Clicking the Browse link or the URL should run the application and produce the following response in your browser.
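You can also hit the public URL directly; it follows the naming convention mentioned earlier (the hostname below is illustrative, built from the userid and app name used in the override):

# The WebAppName override determines the subdomain of the public URL.
curl https://johndoe-helloworld.azurewebsites.net/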

Cleaning Up to Save Costs

Deleting an environment deletes all the resource groups and resources that were provisioned under it, so one does not need an elaborate script to clean up a provisioned Dev/Test instance. Azure Pipelines supports a Delete Environment task. It would have been nicer if auto-shutdown policies were also applicable to Environments in DevTest Labs, and not just to VM resources.
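The same cleanup from the CLI might look like this (a hedged sketch, assuming the az lab environment command group mentioned earlier):

# Deleting the environment tears down its generated resource group and
# everything provisioned inside it.
az lab environment delete \
  --lab-name MyDevTestLab \
  --resource-group DevTestLabRG \
  --name JohnDoe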

Summary

Using Azure DevTest Labs Environments, we were able to provision multiple Dev instances of a microservice (helloworld :-) ). By following conventions when creating environment names, one can limit the number of instances a specific user can create per project or globally, thereby keeping a tab on costs.

While the approach appears to work in principle, whether it runs into corner cases is something I am seeking feedback on. I will also be trying out complex ARM templates in the coming weeks and will update this post based on my findings.

At my previous company we deployed DEV/Staging Kubernetes environments as described above, and at my current company I am working on an Azure-based cloud application, hence this post on my discoveries.

Reach out to me at kumar.jayanti@gmail.com if you are trying out the steps and are looking for the source code of any of the projects above or If any of the steps are unclear.


Bhagavati Kumar

Architect with varied interests and experience in Security and Identity Management, IoT, Blockchain, Cloud Native Services, Distributed Systems and Applied AI.