How to Automate Deployment of Esri ArcGIS Enterprise to Azure Kubernetes Service

Laura Jaques
Published in OS TechBlog · Sep 27, 2021

Esri launched ArcGIS Enterprise on Kubernetes in May of 2021, and we’ve been trialling automated deployment to Azure Kubernetes Service (AKS) using a YAML pipeline in Azure DevOps.

There is a bit of setup to do in advance, and we did encounter a few speed bumps, so we wanted to share our experience to help you set up your own automated deployment.

To follow these instructions, you need access to Esri’s Kubernetes deployment and support scripts. To get hold of these, talk to your Esri representative.

The scripts promise a ‘streamlined deployment experience’ and, for the most part, that promise has been fulfilled. They do a lot of the heavy lifting, pulling the container images from a private Docker Hub registry and deploying them to the cluster.

The logs are also super detailed, keeping you updated at every step of the process, and, if there’s a problem, everything rolls back automatically, leaving you with a clean cluster for a retry.

Step 1 — Prepare your build agent

Esri’s deploy shell script begins with a validation step that checks your system has everything it needs.

This includes support scripts and YAML files from Esri (we added these to our repo), kubectl, openssl, and the Docker cli.

Openssl is pre-installed on Microsoft-hosted build agents, but you'll need to install kubectl and Docker yourself. There are pipeline tasks available for both: KubectlInstaller and DockerInstaller.
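As a sketch, the two installer steps might sit near the top of the pipeline like this (task names and version numbers are the ones we believe are current; check them against the Azure DevOps task reference for your organisation):

```yaml
steps:
  # Install kubectl and the Docker CLI on the hosted agent;
  # openssl is already present on Microsoft-hosted images.
  - task: KubectlInstaller@0
    inputs:
      kubectlVersion: 'latest'
  - task: DockerInstaller@0
    inputs:
      dockerVersion: '17.09.0-ce'
```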

We ran into some problems running the shell script on a Windows agent (bad substitution errors when it tried to read the values from the properties files), so we chose ubuntu-latest as our VM image.

We needed to give the agent permission to execute the support scripts in the folder structure provided by Esri, which we did using chmod in a bash task.
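A minimal inline Bash task for that permission step could look like the following (the scripts path is illustrative; point it at wherever Esri's folder structure lives in your repo):

```yaml
steps:
  # Make Esri's deploy and support scripts executable on the agent
  - task: Bash@3
    inputs:
      targetType: 'inline'
      script: |
        chmod -R +x $(Build.SourcesDirectory)/esri-scripts
```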

Step 2 — Prepare your config

When you run Esri’s deployment script locally and without parameters, it uses command prompts to collect the necessary config details (like your Docker credentials and the paths to your TLS certificate files). It then saves everything you provide to a properties file in your current working directory.

When you run deploy.sh again in the future, you can just supply the properties file, bypassing the command prompts.

You can also edit the properties file directly (there’s a template called deploy.properties in the same directory as the script).

We kept most of the defaults and set ours up as follows:

INGRESS_TYPE="LoadBalancer"
LOAD_BALANCER_TYPE="azure-external"
LOAD_BALANCER_IP=""
K8S_NAMESPACE="arcgis"
CONTAINER_REGISTRY_USERNAME="__DockerUsername__"
CONTAINER_REGISTRY_PASSWORD="__DockerPassword__"
ARCGIS_ENTERPRISE_FQDN="yourrecord.yourdnszone.com"
INGRESS_SERVER_TLS_PFX_FILE="__PfxFilePath__"
INGRESS_SERVER_TLS_PFX_PSSWD="__PfxPassword__"

As you can see, several of the properties are tokens, surrounded by double underscores. These are secret values, so we stored them in an Azure Key Vault.

We retrieved the values in our pipeline using an Azure Key Vault task and replaced the tokens using Colin's ALM Corner Replace Tokens task. This task scans the target file you specify for tokens and swaps each one for the pipeline variable of the same name.

For example, if VariableName is available as a pipeline variable, the task will substitute it for __VariableName__ in your properties file.
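To make the convention concrete, here's a hypothetical stand-in for the token-replace step using plain sed (the username value is invented; in the pipeline it would come from Key Vault):

```shell
# Stand-in for the Replace Tokens task: swap a __Token__ in the
# properties file for the value of the matching pipeline variable.
printf 'CONTAINER_REGISTRY_USERNAME="__DockerUsername__"\n' > deploy.properties
DockerUsername='svc-deploy'        # illustrative value from Key Vault
sed -i "s/__DockerUsername__/${DockerUsername}/" deploy.properties
cat deploy.properties              # CONTAINER_REGISTRY_USERNAME="svc-deploy"
```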

It’s important to note here that the shell script cannot consume TLS certificate files directly. It wants file paths. So you need to save the file(s) to the agent.

We followed the instructions provided by Microsoft to save an encrypted version of our PFX certificate into the working directory of the agent.

Then, we made both the file path and our encryption password available as output variables so that the token task could put them into the properties file.

Write-Host("##vso[task.setvariable variable=PfxFilePath;]$pfxFilePath")

Step 3 — Deploy your cluster

Once you’ve done your setup, you’re ready to start your deployment.

Esri provides recommended compute requirements for each of its three architecture profiles (standard availability, enhanced availability, and development). We chose development: two nodes, each with eight vCPUs and 32GB RAM (at the time of writing, the best match for us in Azure was Standard D8s v3).

We used an ARM template to deploy our AKS cluster; Microsoft's quickstart templates include a nice example.

Step 4 — Configure your cluster

There are three bits of cluster setup to do before you run the shell script. These are all detailed in the guidance from Esri, but it took us a while to find them.

  1. Create an arcgis namespace for the script to deploy into. You can do this with a kubectl task and the following command:
create namespace arcgis

2. Grant service accounts in the kube-system namespace admin permissions (the minimum permission needed to deploy ArcGIS Enterprise). Add another kubectl task with the command:

create clusterrolebinding add-on-admin --clusterrole=admin --serviceaccount=kube-system:default

3. Create a storage class to allow dynamic provisioning of storage (the default name for this class is arcgis-storage-default). Use the example storage class yaml file from Esri (see below), and create the class with a kubectl task that executes the command:

apply -f storageclass.yaml

Here’s the default storage class config:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: arcgis-storage-default
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
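The three setup commands above can be expressed as pipeline steps. Here's a sketch using the built-in Kubernetes task (the service connection name is a placeholder, and input names may vary with your task version):

```yaml
steps:
  - task: Kubernetes@1
    displayName: 'Create arcgis namespace'
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceEndpoint: 'my-aks-connection'   # placeholder
      command: 'create'
      arguments: 'namespace arcgis'
  - task: Kubernetes@1
    displayName: 'Grant admin to kube-system service accounts'
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceEndpoint: 'my-aks-connection'
      command: 'create'
      arguments: 'clusterrolebinding add-on-admin --clusterrole=admin --serviceaccount=kube-system:default'
  - task: Kubernetes@1
    displayName: 'Create default storage class'
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceEndpoint: 'my-aks-connection'
      command: 'apply'
      arguments: '-f storageclass.yaml'
```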

Step 5 — Deploy ArcGIS Enterprise

It’s almost time to run the deploy.sh script. First, you’ll need to sign your build agent into your cluster using an Azure CLI task and the following command:

az aks get-credentials --resource-group $(resourceGroupName) --name $(kubernetesClusterName)

Then, use a Shell Script task to run the deploy.sh script, passing your tokenised deploy.properties file as a parameter.

./deploy.sh -f deploy.properties
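Put together, the sign-in and deploy steps might look like this in the pipeline (the service connection and scripts path are placeholders):

```yaml
steps:
  - task: AzureCLI@2
    displayName: 'Sign agent into AKS cluster'
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az aks get-credentials --resource-group $(resourceGroupName) --name $(kubernetesClusterName)
  - task: Bash@3
    displayName: 'Run Esri deploy script'
    inputs:
      targetType: 'inline'
      script: |
        cd $(Build.SourcesDirectory)/esri-scripts
        ./deploy.sh -f deploy.properties
```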

Step 6 — Access ArcGIS Enterprise

After a successful deployment, there’s one final step before you can access ArcGIS Enterprise on your Kubernetes cluster.

Remember the FQDN you provided in your deploy.properties file? You need to create or update a DNS record linking it to your cluster.

We used the external load balancer option for our deployment, which exposes an external IP address on an NGINX Ingress Controller.

The following command in an Azure CLI task can retrieve that IP address:

$externalIp = (kubectl get service 'arcgis-ingress-nginx' --namespace 'arcgis' --output jsonpath='{.status.loadBalancer.ingress[0].ip}')

And this one will make a new DNS record using Azure’s Public DNS service (there are lots of other ways to do this).

az network dns record-set a add-record --resource-group $(resourceGroupName) --zone-name $(dnsZoneName) --record-set-name $(recordSetName) --ipv4-address $(externalIp)
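The IP lookup and the record update can be combined into a single Azure CLI task, roughly like this (the service connection name is a placeholder):

```yaml
steps:
  - task: AzureCLI@2
    displayName: 'Point DNS record at the ingress IP'
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Read the external IP exposed by the NGINX Ingress Controller...
        externalIp=$(kubectl get service arcgis-ingress-nginx \
          --namespace arcgis \
          --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
        # ...and attach it to the DNS record
        az network dns record-set a add-record \
          --resource-group $(resourceGroupName) \
          --zone-name $(dnsZoneName) \
          --record-set-name $(recordSetName) \
          --ipv4-address "$externalIp"
```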

Step 7 — Configure ArcGIS Enterprise

Now, you can visit your deployed ArcGIS Enterprise instance using the links output by the deploy script. You can either run through the configuration wizard there, or you can automate the setup with another shell script from Esri.

The configure.sh script works in a very similar way to the deploy.sh script, with a configure.properties file.

We tokenised the secret values in our file and swapped them out in the pipeline as before.

SYSTEM_ARCH_PROFILE="development"
K8S_NAMESPACE="arcgis"
ARCGIS_ENTERPRISE_FQDN="yourrecord.yourdnszone.com"
LICENSE_FILE_PORTAL="__PortalLicenseFilePath__"
LICENSE_FILE_SERVER="__ServerLicenseFilePath__"
ADMIN_USERNAME="__AdminUsername__"
ADMIN_PASSWORD="__AdminPassword__"
ADMIN_EMAIL="__AdminEmail__"
ADMIN_FIRST_NAME="__AdminFirstName__"
ADMIN_LAST_NAME="__AdminLastName__"
SECURITY_QUESTION_INDEX=1
SECURITY_QUESTION_ANSWER="__SecurityAnswer__"

The licenses for ArcGIS Enterprise are multiline strings, so we uploaded them as secrets to an Azure Key Vault using the following command (from a local machine):

az keyvault secret set --name $(licenseName) --vault-name $(keyvaultName) --file $(licenseFilePath)

Then, we downloaded the licenses in our pipeline and saved them to the working directory on the build agent.

You can do that using an Azure CLI task:

az keyvault secret download --name $(licenseName) --vault-name $(keyvaultName) --file $(licenseFilePath)
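In the pipeline, both license downloads can sit in one Azure CLI task ahead of the configure step (the service connection, secret names, and file names are all placeholders):

```yaml
steps:
  - task: AzureCLI@2
    displayName: 'Download ArcGIS licenses from Key Vault'
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Save the multiline license secrets into the agent's working directory
        az keyvault secret download --name portal-license --vault-name $(keyvaultName) \
          --file $(System.DefaultWorkingDirectory)/portal-license.json
        az keyvault secret download --name server-license --vault-name $(keyvaultName) \
          --file $(System.DefaultWorkingDirectory)/server-license.json
```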

The final step is to run the configure.sh script, passing your tokenised configure.properties file as a parameter.

If you scroll down to the bottom of the config instructions from Esri, you’ll notice that the configure script requires user input.

Once you’ve run the script, you’ll be presented with a summary of configuration details and asked if you wish to continue.

This means a shell script task in a pipeline will time out waiting for a yes or no from the build agent once the script has checked the config properties, unless you add -s to tell it to run silently.

./configure.sh -s -f configure.properties

And that’s it!

All being well, the entire automated deployment and configuration process only takes about 40 minutes. If you run into any problems, let us know and we’ll be happy to share more about what we’ve learned.
