Automated deployment to Azure Marketplace

Ben Deering
Tenable TechBlog
May 7, 2019

With Tenable Core’s unofficial goal of making all Tenable products available everywhere, we end up managing a large and growing number of deployable images.

[Image: a matrix of possible Tenable Core variants by product and platform.]

Note: These variants are not all currently built, and many of them don’t even make sense. This just illustrates our team’s need for automation in deploying Tenable Core variants.

It is important that the process for getting software improvements out to all of these variants requires as little hands-on work as possible. For items available on the Tenable downloads page, we control every step, so automating releases is not that difficult. Distributing cloud offerings that must clear each cloud provider's requirements is harder.

Azure was the first cloud platform we addressed. It doesn’t support booting VMs from ISO, which would be our first choice, so getting a Tenable Core machine deployed to the team’s Azure account is a little more complicated. We build a VM in Hyper-V locally and boot our install ISO. Once the machine is built on Hyper-V, we upload it to an Azure resource group that we prepare for each build.

We upload our images and create the Azure VM using Azure PowerShell (https://github.com/Azure/azure-powershell). We create a resource group as a container for the new VM, add a storage account to hold the disk image built on Hyper-V, and attach a virtual network card assigned to a Network Security Group (NSG) whose traffic rules allow access only to the ports used by the applications we installed.

An instance of our new VM is created and booted with an IP that is accessible by our automated tests. Once we have run our tests and confirmed the new VM works correctly, it is time to make it available to everyone on the Azure Marketplace.

A shared access signature (SAS) is needed to share our proposed machine with Microsoft QA and ultimately with the replication service that delivers it to various marketplace locations.

The first SAS you get won’t necessarily work: there are container SASs and blob SASs. Normally only a blob SAS would include a full filename, but publishing requires a container SAS that also names the file. Azure GUI tools can provide the needed SAS, but we want this automated, so we have to modify the container SAS we receive.
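A minimal sketch of that fix-up in Python (our actual scripts are PowerShell, but the URL surgery is the same either way): splice the VHD blob name into the container SAS URL's path while leaving the signed query string untouched.

```python
from urllib.parse import urlsplit, urlunsplit

def container_sas_with_blob(container_sas_url: str, blob_name: str) -> str:
    """Insert the blob (VHD) filename into a container-level SAS URL.

    Marketplace publishing expects a container SAS whose path also names
    the VHD blob; GUI tools hand back a bare container SAS, so we splice
    the filename into the path and keep the query (the signature) as-is.
    """
    parts = urlsplit(container_sas_url)
    path = parts.path.rstrip("/") + "/" + blob_name
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))
```

For example, a container SAS for an `images` container plus `core.vhd` yields `https://<account>.blob.core.windows.net/images/core.vhd?<same signed query>`.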

This is as far as Azure PowerShell can get us. Microsoft provides a REST API that allows us to get from a tested Azure VM to an updated offer on the public marketplace.

https://docs.microsoft.com/en-us/azure/marketplace/cloud-partner-portal-orig/cloud-partner-portal-api-publish-offer

Since our other interaction with Azure was through PowerShell, our scripts for dealing with this API are also in PowerShell. We broke the work into multiple scripts so that Bamboo could recover from a failure during the wait process, or from a failure to go live, without manual steps.

Before we can do anything with this API we need to get an auth token.

Getting an auth token requires a client_id and client_secret. Setting up a service principal and collecting the information you need takes some digging:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

https://docs.microsoft.com/en-us/azure/marketplace/cloud-partner-portal-orig/cloud-partner-portal-api-prerequisites
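Per the prerequisites doc, the token comes from an Azure AD client-credentials grant with `resource` set to the Cloud Partner Portal API. A Python sketch that builds that request (the tenant, client_id, and client_secret placeholders are whatever your service principal setup produced; we only construct the request here rather than send it):

```python
from urllib.parse import urlencode

TOKEN_URL = "https://login.microsoftonline.com/{tenant_id}/oauth2/token"

def token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the client-credentials token request for the Cloud Partner
    Portal API. POST the returned body to the returned URL; the JSON
    response's access_token then goes in an "Authorization: Bearer ..."
    header on every API call.
    """
    url = TOKEN_URL.format(tenant_id=tenant_id)
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # The audience the token is scoped to: the marketplace API itself.
        "resource": "https://cloudpartner.azure.com",
    })
    return url, body
```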

Once we have our auth token, we find the offer we are going to update. At the time of writing, Tenable has two offers on the Azure Marketplace: Tenable Core + Web Application Scanning and Tenable Core + Nessus. We find the one we are updating using this call:

GET https://cloudpartner.azure.com/api/publishers/<publisherId>/offers/<offerId>?api-version=2017-10-31

Next we update the offer. This call could also be used to update the marketplace listing text, but we only change the disk image.

PUT https://cloudpartner.azure.com/api/publishers/<publisherId>/offers/<offerId>?api-version=2017-10-31
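The two calls above form a fetch-modify-update round trip. A Python sketch of that pattern (endpoint shape from the publish-offer doc; the `If-Match`/ETag handling and the idea that you edit the VHD URL somewhere inside the returned offer JSON are assumptions about the offer schema, not a verified layout):

```python
import json
import urllib.request

def offer_url(publisher_id: str, offer_id: str) -> str:
    """Build the Cloud Partner Portal URL for one offer."""
    return ("https://cloudpartner.azure.com/api/publishers/"
            f"{publisher_id}/offers/{offer_id}?api-version=2017-10-31")

def get_offer(publisher_id: str, offer_id: str, token: str):
    """GET the current offer definition, plus its ETag for the later PUT."""
    req = urllib.request.Request(
        offer_url(publisher_id, offer_id),
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp), resp.headers.get("ETag", "*")

def put_offer(publisher_id: str, offer_id: str, token: str,
              offer: dict, etag: str = "*"):
    """PUT the modified offer back (only the disk-image SAS URL changed)."""
    req = urllib.request.Request(
        offer_url(publisher_id, offer_id),
        data=json.dumps(offer).encode(),
        method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json",
                 "If-Match": etag})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```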

Once the offer has been updated, we do the equivalent of hitting the publish button in the cloud partner portal.

POST https://cloudpartner.azure.com/api/publishers/<publisherId>/offers/<offerId>/publish?api-version=2017-10-31

The response from this request includes an operation id (response.operation_location) that we can use to monitor progress so we capture and export that for use by the next step.
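Extracting that id is a one-liner. Here we assume the operation id is the last path segment of the returned location URL, which matches the `/operations/<operationId>` endpoint polled below:

```python
def operation_id_from_headers(headers: dict) -> str:
    """Pull the operation id out of the publish response's
    Operation-Location header so a later Bamboo step can poll it.
    The header value is a URL ending in /operations/<operationId>.
    """
    location = headers["Operation-Location"]
    return location.rstrip("/").rsplit("/", 1)[-1]
```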

Now the publish operation has started. Our new image needs to pass various automated and manual checks by Microsoft before it is eligible to replace the image currently on the marketplace.

We launch another script that polls the operation status to tell us when we can go live or if something broke. It would be possible to make this interrupt-driven instead, but that would require our deployment project to be able to receive email.

GET https://cloudpartner.azure.com/api/publishers/<publisherId>/offers/<offerId>/operations/<operationId>?api-version=2017-10-31
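The polling script boils down to a loop like the following Python sketch. The status strings are assumptions about the API's vocabulary; the status fetcher is injected as a callable so the loop can be exercised without hitting the REST endpoint:

```python
import time

def wait_for_operation(fetch_status, poll_seconds: int = 300,
                       max_polls: int = 1000) -> str:
    """Poll until the publish operation leaves the 'running' state.

    fetch_status is any callable returning the operation's current status
    string (e.g. "running", "succeeded", "failed"), so tests can inject a
    fake instead of calling the marketplace API.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status != "running":
            return status  # terminal state: succeeded, failed, ...
        time.sleep(poll_seconds)
    raise TimeoutError("operation did not finish in time")
```

In our pipeline the real fetcher would GET the operations endpoint above with the bearer token and read the status field out of the JSON response.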

Once the checks have passed, we can go live with the updated image:

POST https://cloudpartner.azure.com/api/publishers/<publisherId>/offers/<offerId>/golive?api-version=2017-10-31

This request returns the id of the go-live operation.

We start another script to monitor the go-live operation. This operation includes replicating our updated image to data centers around the world so that once it is live, customers anywhere can deploy quickly.

Once the go-live operation succeeds (or fails), our deployment is complete.
