Automating Istio Installation on AKS with Terraform and Securing Traffic with Azure Front Door

Saverio Proto · Microsoft Azure · Mar 23, 2023

I recently published on GitHub some samples showing how to install Istio on AKS and integrate the installation with other Azure products.

If you are new to Istio, you can try my istio-on-aks tutorial. If instead you are an Istio expert but new to Azure, you can try the Azure Sample for a multi-cluster installation on AKS. Going through these samples should accelerate your Istio experience on AKS.

In this article I will discuss my latest example, istio-on-aks-with-front-door, which uses the Azure Front Door CDN to expose the Istio ingress gateway. This way you can benefit from the CDN and WAF features of Front Door to secure the traffic for your service mesh.

The architecture looks like this:

Architecture diagram (credit to Paolo Salvatori for the diagram)

The AKS deployment and the Istio installation are automated with Terraform.

The Terraform code is organised in 2 distinct projects, in the folders aks-tf and istio-tf. This means you have to perform 2 terraform apply operations, as explained in the Terraform documentation of the Kubernetes provider in the section “Stacking with managed Kubernetes cluster resources”.
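As a minimal sketch of the stacking pattern (the cluster and resource group names here are placeholders, not the repository's actual values), the second project can read the cluster credentials through a data source and feed them to the Helm provider:

  # istio-tf: the cluster already exists, created by the aks-tf project.
  data "azurerm_kubernetes_cluster" "aks" {
    name                = "istio-aks"
    resource_group_name = "istio-aks-rg"
  }

  provider "helm" {
    kubernetes {
      host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
      client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
      client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
      cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
    }
  }

Because the credentials come from a data source rather than a resource in the same state, the second apply can only run once the first project has finished.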

The AKS cluster is provisioned with 2 additional nodepools: a user nodepool for the workloads, and an ingress nodepool dedicated to the istio-ingress gateway deployment. The AKS baseline architecture suggests deploying ingress resources while controlling on which nodepool those pods run, to avoid noisy neighbour problems. A dedicated ingress nodepool also makes it possible to have an autoscaling logic (or autoscaling disabled) that is different from the autoscaling logic of the workloads running on the cluster.
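A hypothetical sketch of the dedicated ingress nodepool in the aks-tf project (the pool name, VM size and taint are assumptions, not necessarily the sample's values):

  resource "azurerm_kubernetes_cluster_node_pool" "ingress" {
    name                  = "ingress"
    kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
    vm_size               = "Standard_D2s_v5"
    node_count            = 2

    # Label and taint the nodes so only the istio-ingress pods land here.
    node_labels = { "nodepool" = "ingress" }
    node_taints = ["nodepool=ingress:NoSchedule"]
  }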

The Istio installation is automated with the Helm Terraform provider, leveraging the official Istio charts. I have tuned the configuration of the charts as follows:

I schedule the istiod deployment to the system nodepool. Because the Helm chart does not support this configuration, I rely on the postrender call to run kustomize and patch the output YAML. The same approach is used to schedule the istio-ingress deployment to the ingress nodepool.
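A sketch of the postrender approach, assuming a small wrapper script (the script name is an assumption) that pipes the rendered chart output through kustomize build to inject the nodeSelector patch:

  resource "helm_release" "istiod" {
    name       = "istiod"
    repository = "https://istio-release.storage.googleapis.com/charts"
    chart      = "istiod"
    namespace  = "istio-system"

    # Helm hands the rendered manifests to this binary on stdin and
    # installs whatever it writes to stdout.
    postrender {
      binary_path = "${path.module}/istiod-kustomize.sh"
    }
  }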

The Helm chart's default installation of the Istio ingress gateway creates a Kubernetes Service of type LoadBalancer, exposing the gateway with a public IP. Because I don’t want the istio-ingress gateway to be accessible from the Internet, I pass the annotation "service.beta.kubernetes.io/azure-load-balancer-internal=true" to create an internal load balancer instead.
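As a hedged sketch (release and namespace names are assumptions), the official gateway chart accepts Service annotations through its values, so the internal load balancer flag can be passed like this:

  resource "helm_release" "istio_ingress" {
    name       = "istio-ingress"
    repository = "https://istio-release.storage.googleapis.com/charts"
    chart      = "gateway"
    namespace  = "istio-ingress"

    # Annotations set here end up on the LoadBalancer Service the chart creates.
    values = [yamlencode({
      service = {
        annotations = {
          "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
        }
      }
    })]
  }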

The Azure cloud provider controller for Kubernetes natively supports Private Link Service. This means that when I create a Kubernetes Service of type LoadBalancer, if I add the necessary annotations to the Service, the cloud provider controller in the cluster will create the private link service I need. In the istio.tf file you can see the list of annotations used in the sample.
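For illustration, the annotation keys below come from the cloud-provider-azure documentation, but the PLS name and subnet are assumptions, not necessarily the sample's actual values; these would be merged into the service.annotations values shown above:

  locals {
    # Service annotations that make the Azure cloud provider create both an
    # internal load balancer and a Private Link Service in front of it.
    ingress_service_annotations = {
      "service.beta.kubernetes.io/azure-load-balancer-internal"      = "true"
      "service.beta.kubernetes.io/azure-pls-create"                  = "true"
      "service.beta.kubernetes.io/azure-pls-name"                    = "istio-ingress-pls"
      "service.beta.kubernetes.io/azure-pls-ip-configuration-subnet" = "PLSSubnet"
    }
  }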

Because the private link service is created by the Azure cloud controller manager using the configuration in the Service annotations, Terraform is not aware of this resource. I defined an azurerm_private_link_service data source to be able to reference the private link service in Terraform.
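A sketch of that data source, assuming the PLS lands in the AKS node resource group and that its name matches the azure-pls-name annotation (both assumptions here):

  data "azurerm_private_link_service" "pls" {
    name                = "istio-ingress-pls"
    resource_group_name = data.azurerm_kubernetes_cluster.aks.node_resource_group
  }

Note that this data source can only be read after the Helm release has created the Service, so the cloud controller manager has had a chance to provision the PLS.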

When I create the Azure Front Door origin, I specify that I want a private endpoint by adding the private_link block in the configuration of the azurerm_cdn_frontdoor_origin resource. This block uses data from the azurerm_private_link_service data source. I have the feeling that not a lot of people are automating this configuration with Terraform, because the documentation of the azurerm_cdn_frontdoor_origin resource had an incomplete example that did not work; I have since fixed the documentation.
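A hedged sketch of what that origin can look like (the resource names, the origin group reference and the host_name choice are assumptions, not the sample's exact code):

  resource "azurerm_cdn_frontdoor_origin" "istio" {
    name                           = "istio-ingress-origin"
    cdn_frontdoor_origin_group_id  = azurerm_cdn_frontdoor_origin_group.default.id
    enabled                        = true
    host_name                      = data.azurerm_private_link_service.pls.alias
    http_port                      = 80
    https_port                     = 443
    # The provider requires certificate name checks when private_link is used.
    certificate_name_check_enabled = true

    private_link {
      request_message        = "Front Door private connectivity to istio-ingress"
      location               = data.azurerm_private_link_service.pls.location
      private_link_target_id = data.azurerm_private_link_service.pls.id
    }
  }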

The infrastructure is now deployed end to end, but we still need to create the Gateway and VirtualService resources for the Istio ingress gateway to accept traffic. Before Azure Front Door can actually deliver traffic to the Istio ingress gateway, we must make sure the health check probes are successful. Our Terraform deployed the origin group with the following health probe:

  health_probe {
    path                = "/probe"
    request_type        = "GET"
    protocol            = "Http"
    interval_in_seconds = 100
  }

Envoy exposes a health check endpoint on port 15021, so I create a VirtualService to forward the Azure Front Door health check probes to Envoy's port 15021.

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: healthcheck
  namespace: istio-ingress
spec:
  hosts:
  - "*"
  gateways:
  - istio-ingress/istio-ingressgateway
  http:
  - match:
    - uri:
        prefix: "/probe"
    rewrite:
      uri: "/healthz/ready"
    route:
    - destination:
        host: "istio-ingress.istio-ingress.svc.cluster.local"
        port:
          number: 15021

Following the README step by step, you can apply the Terraform code and create the required Istio Gateway and VirtualServices to test the sample.

This was a lot of information for a short article; you will probably also want to read the documentation I used to create all of this.

I hope this article and the GitHub repository will be useful for folks who want to automate their Istio installation on Azure. I appreciate feedback: please open GitHub issues or propose PRs to improve my work. Thank you!


Customer Experience Engineer @ Microsoft - Opinions and observations expressed in my blog posts are my own.