You, Me, & CI/CD: Deploying Azure Analysis Services Tabular Models through Azure Pipelines

Liam Dunphy
Published in The Startup
8 min read · Oct 10, 2019

On one of my recent projects I was tasked with automating our existing manual deployment process for Azure Analysis Services (AAS) Tabular Models. If you have previous experience in deploying other common Data Platform components such as Azure SQL DB, you might think (wishfully!) that this would be as simple as selecting a preconfigured task on the Release Pipeline to handle a DACPAC deployment…

Oh how we took you for granted, beautiful DACPAC Deployment task!

Unfortunately, AAS Tabular Models present a unique challenge for CI/CD via Azure Pipelines, as they do not currently have a native task in either the Build or Release pipelines. To get around this we must come up with a different approach — first up, let’s see what we can cook up ourselves.

First Attempt: Doing it Yourself

At the core of the AAS project is the Model.bim, the file which contains all the important information the model needs to function correctly — data sources, measure definitions, roles, and more. Models at or above compatibility level 1200 (and therefore all AAS Tabular Models) have their structure expressed in JSON. This is great for us because it opens up simple file editing using PowerShell!

Let’s start off with something simple — we can read in the contents of our model into a variable, converting it from its default JSON structure into a PSCustomObject.
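A minimal sketch of that read (the path and model name below are placeholders):

# Hypothetical path; adjust to wherever the AAS project lives in your repo
$modelPath = "$PSScriptRoot\..\AzureAnalysisServices\MyModel\Model.bim"

# Read the raw JSON and convert it into a traversable PSCustomObject
$model = Get-Content -Path $modelPath -Raw | ConvertFrom-Json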

Once we’ve done that, updating existing properties is trivial — we can use dot notation to traverse the object and update individual properties as needed. This is particularly useful for updating the “Data Source” connection string, as this typically varies by environment. We can also remove any properties that we don’t want to keep, for example the “Roles” property on the model, as we may already have preconfigured roles set up across our deployed models.
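A sketch of both operations, assuming a legacy provider data source and placeholder $TargetServer/$TargetDatabase variables:

# Point the model's first data source at the environment-specific server
$connectionString = "Provider=SQLOLEDB.1;Data Source=$TargetServer;" +
    "Initial Catalog=$TargetDatabase;Persist Security Info=True"
$model.model.dataSources[0].connectionString = $connectionString

# Drop the roles property so we don't clobber the roles already
# configured on the deployed model
$model.model.PSObject.Properties.Remove('roles')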

So we currently have our updated model being held in memory, but we now need to deploy it. Since we are already using PowerShell and our model is at or above compatibility level 1200, we can leverage the Tabular Model Scripting Language (TMSL) — an incredibly powerful syntax that expresses AAS models in JSON and allows us to execute commands to create, alter, or delete existing models.

Seeing as we want to handle deployments, we are going to make use of the CreateOrReplace command. The syntax is incredibly straightforward — we already have the Model.bim in memory, which is exactly the structure the CreateOrReplace command expects in its database attribute.
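A sketch of wrapping the in-memory model into that command:

# Wrap the in-memory model in a TMSL createOrReplace command;
# the database attribute takes the Model.bim content as-is
$tmslCommand = @{
    createOrReplace = @{
        object   = @{ database = $model.name }
        database = $model
    }
} | ConvertTo-Json -Depth 100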

Deployment of TMSL commands is handled through the Invoke-ASCmd cmdlet — allowing us to execute any XMLA (including TMSL), MDX, or DMX statement against an AS Model of our choosing.
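A hedged sketch of that call (the server URI is a placeholder, and the app:<ApplicationID>@<TenantID> user name is the format AAS expects for Service Principal logins):

# Requires the SqlServer module, which provides Invoke-ASCmd
Import-Module SqlServer

# Build a credential for the Service Principal (IDs and secret are placeholders)
$securePassword = ConvertTo-SecureString $ServicePrincipalSecret -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential(
    "app:$ApplicationId@$TenantId", $securePassword)

# Execute the createOrReplace against our AAS instance
Invoke-ASCmd -Server "asazure://<region>.asazure.windows.net/<instance>" `
    -Query $tmslCommand -Credential $credential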

We have made a couple of key assumptions with the code above, namely:

  • The Az-Context is already set. Because this is being deployed in Azure Pipelines, we are using Service Connections to ensure that our Azure PowerShell tasks are already instantiated in the correct context, saving us from having to specify this in our code. If you don’t have this set up, you need to ensure that there is a call to Connect-AzAccount so that you are authenticated.
  • Service Principal is the chosen authentication method, since it works non-interactively within Azure Pipelines. If you wanted to test this code locally and didn’t want to use a Service Principal, you could use your own e-mail, assuming you have permission to set yourself as an admin on the AAS instance. This will initiate the interactive sign-in flow.

N.B. When I was testing this code I ran into errors using PowerShell 6, as the SqlServer module was not correctly importing the Invoke-ASCmd cmdlet. Changing to PowerShell 5 solved this.

After executing this code we have successfully deployed our model!

While we have shown that this PowerShell deployment does work in principle, it’s still obvious that a lot of configuration and tweaking would be needed to support running this across multiple models — that’s not to say we couldn’t do it all in PowerShell, but supporting all the different environments and edge cases could get a little tricky!

Not only that, but what if we wanted to support different types of outputs depending on whether it is a build or a deploy? In our current example we go ahead and deploy the updated Model.bim, but what if we wanted to export the .bim, or the TMSL script we had made instead? Pondering all this led me to my next discovery…

Second Attempt: Introducing Tabular Editor!

Tabular Editor has to be one of my favourite tools for interacting with Tabular Models. Unlike editing within Visual Studio, you can make changes directly against the Model.bim file on your machine without needing to deploy it to a Workspace Server. Not only that, but you can connect to deployed models, interrogate them, make changes, and deploy your changes all from within the GUI.

Given how great it is, it’s probably not much of a surprise to find out it also has a very robust and powerful command-line executable, which is perfect for our requirements! Let’s take a closer look…

The big takeaway from our first attempt was that in order to deploy to many different models using CI/CD, we would have to make considerations around our environmental configuration. With this in mind, I have split my PowerShell into two — a script that contains the flow for deploying to AAS, and a function which returns configuration information for each model (its data source, its project name, and its target server/database).

First up is our function — we pass in an environmental parameter that will return a list of models and their associated data source dependencies back to the primary script. Should we need to include any further configuration details down the line, this function is simple to extend.
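A sketch of what that function might look like; every name and server below is a placeholder:

# Returns one object per model for the given environment, carrying the
# configuration the deployment script needs (names are hypothetical)
function Get-ModelConfiguration {
    param(
        [Parameter(Mandatory)]
        [ValidateSet('Dev', 'Test', 'Prod')]
        [string]$Environment
    )

    switch ($Environment) {
        'Dev' {
            @(
                [pscustomobject]@{
                    ProjectModelName            = 'SalesModel'
                    TargetASServer              = 'asazure://<region>.asazure.windows.net/aas-dev'
                    TargetASDatabaseName        = 'SalesModel'
                    TargetSQLDataSourceServer   = 'sql-dev.database.windows.net'
                    TargetSQLDataSourceDatabase = 'SalesDW'
                }
            )
        }
        # 'Test' and 'Prod' entries follow the same shape
    }
}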

The PowerShell script itself is designed to support two different scenarios — instances where developers would want to build the deployment script on their machine, and for the CI/CD process to handle the full deployment of changes. The call to Tabular Editor itself is handled through the use of multiple command-line parameters and switches, but what are they actually doing? The arguments between the “Build” and “Deploy” parameter sets are mostly the same, so I’ll run through them below and highlight where they differ.
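Before that walkthrough, here is a skeleton of how the two parameter sets and the per-model loop might hang together (a sketch; parameter names are placeholders):

# Hypothetical skeleton of the deployment script
[CmdletBinding(DefaultParameterSetName = 'Deploy')]
param(
    [Parameter(Mandatory)][string]$Environment,

    # Build: write the TMSL out to disk instead of deploying
    [Parameter(ParameterSetName = 'Build')][switch]$BuildOnly,

    # Deploy: credentials piped in from Azure Pipelines / Key Vault
    [Parameter(ParameterSetName = 'Deploy')][string]$ASServicePrincipal,
    [Parameter(ParameterSetName = 'Deploy')][string]$ServicePrincipalApplicationSecret,
    [string]$TargetSQLDataSourceUsername,
    [string]$TargetSQLDataSourcePassword
)

# One Tabular Editor call per configured model
Get-ModelConfiguration -Environment $Environment | ForEach-Object {
    # TabularEditor.exe arguments assembled here, as walked through below
}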

& "$PSScriptRoot\..\..\Dependencies\Tabular Editor\TabularEditor.exe" "$PSScriptRoot\..\..\..\AzureAnalysisServices\$($_.ProjectModelName)\Model.bim"

We call TabularEditor.exe and pass in the path to our Model.bim. If for whatever reason the Model.bim is named differently across your projects, you would need to add this to the configuration function.

-S "$PSScriptRoot\..\..\Dependencies\ConnectionStringCleanup.cs" `        

-S specifies a script that TabularEditor can use to parameterize deployments. In our case, we point it to a C# script within our Source Control. That script looks as follows:
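A minimal sketch of such a script, assuming a legacy provider data source (Tabular Editor exposes the loaded model to scripts as Model):

// Sketch of ConnectionStringCleanup.cs; types and members are from
// Tabular Editor's scripting model, which resolves usings for us
foreach (var dataSource in Model.DataSources.OfType<ProviderDataSource>())
{
    // Swap the environment-specific connection string for a fixed
    // placeholder that the -C parameter replaces at deploy time
    dataSource.ConnectionString = "Placeholder";
}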

Note that this is the whole script — it doesn’t require a namespace/class definition to function, as this is all resolved by Tabular Editor. What this script is doing is replacing the data source connection string found in the Model.bim with a standardized placeholder, which we subsequently swap out in our -C parameter call. Let’s jump ahead and take a look at that:

-C "Placeholder" "Provider=SQLOLEDB.1;Data Source=$($_.TargetSQLDataSourceServer);Persist Security Info=True;User ID=$($TargetSQLDataSourceUsername);Password=$($TargetSQLDataSourcePassword);Initial Catalog=$($_.TargetSQLDataSourceDatabase)"

As you can see, the SQL DB server, database, username, and password are all dynamically filled in at run-time. Within our project the username/password of the SQL DB instances are environment dependent, hence the values are not specified in the configuration function but instead passed into the script from Key Vault through Azure Pipelines.

-D "Provider=MSOLAP;Data Source=$($_.TargetASServer);User ID=$($ASServicePrincipal);Password=$($ServicePrincipalApplicationSecret);Persist Security Info=True;Impersonation Level=Impersonate" "$($_.TargetASDatabaseName)"

Going back a line in the command, we establish a similar dynamic connection string for the AAS instance. Once again our Service Principals are environment dependent, so we have these parameters piped in from Azure Pipelines.

-O

A nice and easy one, -O allows for existing models to be overwritten. This will also result in our TMSL output being a CreateOrReplace if we use the “Build” parameter set.

-V

-V is an interesting switch, as it specifies to Tabular Editor that it should output logging commands that can be interpreted by VSTS (now Azure DevOps) to generate more verbose logs. You can read more about these logging commands here.

-X "$PSScriptRoot\..\..\Deployment\AnalysisServices\bin\$($_.TargetASDatabaseName).tmsl"

So far we have addressed all the parameters common to the “Build” and “Deploy” processes, but the build specifies an additional -X parameter. This tells Tabular Editor not to deploy to the server listed in the -D parameter; instead, the underlying XMLA (or in our case, TMSL) that would have been deployed is written out to the given file path.

There are plenty more parameters and switches beyond the ones I have used here (outputting just the updated .bim, removing roles from deployment, etc.), and the full list can be reviewed in the Tabular Editor Wiki found here.

How does this look in Azure Pipelines?

From a usage perspective we can run both of our approaches through a simple Azure PowerShell task on the Release pipeline. Note that while we only require the Model.bim in order to deploy, we should still have the models built using an MSBuild task in our Build pipeline to ensure the integrity of the model.
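As a sketch, the task’s call into our script might look like the following, with the $(...) macros resolved from the linked variable group (the script path and variable names are placeholders):

# Inside the Azure PowerShell release task; $(...) values come from the
# linked variable group, with secrets resolved from Key Vault at release time
.\Scripts\Deploy-TabularModels.ps1 `
    -Environment "$(Environment)" `
    -ASServicePrincipal "app:$(ApplicationId)@$(TenantId)" `
    -ServicePrincipalApplicationSecret "$(ServicePrincipalSecret)" `
    -TargetSQLDataSourceUsername "$(SqlUsername)" `
    -TargetSQLDataSourcePassword "$(SqlPassword)"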

Parameters are passed into the task via the pipeline’s linked variable group.

Our variables are passed in from Variable Groups tied to our particular environment’s release pipeline. For sensitive configurations such as the Service Principal and SQL authentication details, we don’t write the value directly against the variable but rather reference an associated Key Vault secret. You can read more about how to set this up here.

We link a variable group (in this case “Dev”) to our release pipeline so that it can import pre-defined variables.

Wrap Up

Hopefully this write-up has given you an insight into how we can manage automated deployments of Tabular Models for CI/CD. We can either do this entirely through PowerShell scripting, or leverage the power of third-party tools such as Tabular Editor and manage the passing in of configurations ourselves.

An area I didn’t delve into (but which is nonetheless an incredibly powerful way of managing all aspects of Tabular Model administration, let alone deployment) is using C# to interact with and deploy models via the AMO/TOM libraries. There is plenty of documentation from Microsoft that you can explore right here!

Good luck on your automated adventure!
