Cert-Manager and Istio: Choosing Ingress Options for the Istio-based service mesh add-on for AKS

Saverio Proto
Published in Microsoft Azure · 11 min read · Jan 23, 2024

How should I expose my Istio service mesh to handle north-south traffic? There isn’t a one-size-fits-all approach to this. In this article, I will summarize the available options.

The article assumes you have some experience with Istio and cert-manager. I won’t be covering the basics of getting started with Istio and cert-manager; instead, I’ll focus on how these two tools work together. Specifically, I’ll only be looking at HTTP01 challenges for cert-manager, leaving the DNS01 challenge provider for future exploration.

Generated with AI: Whiteboard with a complex networking schema including Kubernetes container clusters and traffic flows

Create an AKS cluster with Istio pre-installed

Here is how I set up an AKS cluster to experiment with the configurations in this article. We will install Istio using the AKS add-on. At the time of writing, the add-on installs Istio version 1.17.

#!/bin/bash
az group create --name azureservicemesh --location eastus

az aks create \
--location eastus \
--name azureservicemesh \
--resource-group azureservicemesh \
--network-plugin azure \
--kubernetes-version 1.28 \
--node-vm-size Standard_DS3_v2 \
--node-count 2 \
--auto-upgrade-channel rapid \
--node-os-upgrade-channel NodeImage \
--enable-asm

az aks get-credentials \
--resource-group azureservicemesh \
--name azureservicemesh \
--overwrite-existing
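
To confirm that the add-on is running before you continue, you can check its control plane pods. As a quick sanity check (the exact pod names depend on the installed revision), the add-on runs istiod in the aks-istio-system namespace:

kubectl get pods -n aks-istio-system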

Now, let’s deploy an application that we’ll make accessible through three distinct ingress options:

kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: echoserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      run: echoserver
  template:
    metadata:
      labels:
        run: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.10
        ports:
        - containerPort: 8080
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 6
          periodSeconds: 10
        resources:
          requests:
            memory: "40Mi"
            cpu: "20m"
EOF

The Ingress options

There are three APIs that help expose an HTTP endpoint externally:

  • Kubernetes Ingress API:
    The familiar Ingress resource from the networking.k8s.io/v1 API. Istio’s control plane can watch resources from this API and set up the corresponding ingress configuration. This Istio feature isn’t widely known but is explained here. Internally, it works by converting the Ingress spec into an Istio Gateway and a VirtualService.
  • Classic Istio Ingress Gateway:
    The term “classic” refers to using the Gateway resource from the networking.istio.io/v1beta1 API to configure istio-ingressgateway pods. The istio-ingressgateway Deployment is installed with a dedicated Helm chart and acts as shared infrastructure that Gateway and VirtualService resources configure.
  • Kubernetes Gateway API:
    This option uses the newest gateway.networking.k8s.io/v1 Kubernetes API. It not only describes Ingress configurations using new Gateway and HTTPRoute resources but also creates a dedicated Envoy gateway Pod for each Gateway resource. This is a significant difference from the Classic Istio Ingress Gateway, where istio-ingressgateway pods were shared infrastructure.

If you install both Istio and the Kubernetes Gateway API in the cluster, there’s a naming ambiguity with the term “Gateway.” The command kubectl get gateway might return either a gateways.gateway.networking.k8s.io or a gateways.networking.istio.io object. To steer clear of confusion, it’s advisable to use the short versions gw for the Istio Gateway and gtw for the Kubernetes gateway. You can confirm this information by using the commands kubectl api-resources and kubectl get crd.
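
As a quick illustration, these commands make the distinction visible (the comments are indicative, not literal output):

kubectl api-resources | grep -i gateway   # lists both API groups and their short names
kubectl get gw    # gateways.networking.istio.io (Istio Gateway)
kubectl get gtw   # gateways.gateway.networking.k8s.io (Kubernetes Gateway API)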

The Kubernetes Ingress API

This scenario is deprecated, and I strongly advise against its use. However, if you have compelling reasons to utilize the networking.k8s.io/v1 Ingress resource, please continue reading this section.

To begin, it is essential to create an IngressClass:

kubectl apply -f - <<EOF
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
EOF

The Istio control plane monitors ingresses associated with this specific IngressClass. This implementation works by interpreting the Ingress object, transforming it into the required Gateway and VirtualServices objects. These objects are then linked with a selector to the Classic Istio Ingress Gateway deployment. The MeshConfig parameters ingressService, ingressClass, ingressControllerMode, and ingressSelector allow for some customization of this behavior.
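
As an illustration only, this is roughly how those MeshConfig parameters look. In a self-managed installation they live under meshConfig in your IstioOperator or Helm values; the AKS add-on exposes MeshConfig through its own shared ConfigMap, so treat this as a sketch rather than add-on-specific guidance:

meshConfig:
  ingressControllerMode: STRICT          # only process Ingresses whose class matches ingressClass
  ingressClass: istio                    # IngressClass name watched by istiod
  ingressService: istio-ingressgateway   # Service whose address is reported in the Ingress status
  ingressSelector: ingressgateway        # label selector of the gateway workload handling the traffic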

This implementation has several issues and limitations:

  • The istio-ingressgateway deployment needs to be in the fixed istio-system namespace, as mentioned in this code comment and in GitHub issue 46971. As a result, when using the Istio AKS add-on, you cannot use the Microsoft-managed Istio ingress gateways, since those live in the aks-istio-ingress namespace.
    *** Update from September 2024: the product team of the Istio AKS add-on patched the code to remove the istio-system namespace limitation. PR 4511 shows an example supported by the product team. ***
  • If the Ingress has a TLS configuration, the kubernetes secret holding the certificate must be in the istio-system namespace.

Given the limitations mentioned earlier, to test this scenario you need to install the istio/gateway Helm chart in the istio-system namespace. It’s important to set the revision asm-1-17 so that the Gateway Pods are correctly injected by the existing Istio control plane. I also include the Service annotation service.beta.kubernetes.io/azure-dns-label-name, which provides an automatic DNS name for testing TLS certificates. Please change the string my-istio-ingress to something unique for your deployment.

kubectl create namespace istio-system
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-ingressgateway istio/gateway \
--set revision=asm-1-17 \
--set service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=my-istio-ingress \
-n istio-system --wait

Installing this Helm chart differs from the documentation’s approach, which sets up a Microsoft-managed external Istio ingress gateway with the command az aks mesh enable-ingress-gateway. That gateway is created in the aks-istio-ingress namespace, which does not satisfy our requirement of using the istio-system namespace.

You can create a simple ingress to test the functionality for HTTP traffic on port 80.

kubectl apply -f - <<EOF
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
spec:
  ingressClassName: istio
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver
            port:
              number: 8080
EOF

You can test this Ingress with curl at this address:

curl -v http://my-istio-ingress.eastus.cloudapp.azure.com

There’s a Kubernetes service that exposes the istio-ingressgateway Pods, and it comes with an external IP address. The annotation service.beta.kubernetes.io/azure-dns-label-name, which we applied to the Service through the Helm chart parameter, configures the DNS record using the Service external IP address.
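
You can verify the Service and its external IP address directly; the Service name comes from the Helm release name used above:

kubectl get service istio-ingressgateway -n istio-system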

Now let’s add the TLS certificates. I will install cert-manager.

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade cert-manager jetstack/cert-manager \
--install \
--create-namespace \
--wait \
--namespace cert-manager \
--set installCRDs=true

The following YAML is what you need to create an Ingress with a TLS endpoint. Add a valid email address to the ClusterIssuer. We are using the HTTP01 challenge, and the ClusterIssuer is configured to use the istio IngressClass to create the temporary Ingress needed to expose the ACME challenge routes.

We need to create the Certificate object manually, because we need to force it into the istio-system namespace. This way the Kubernetes secret holding the TLS certificate will also be created in the istio-system namespace.

At the end, we have our Ingress definition:

# IMPORTANT: Replace example.com with a valid email
kubectl apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: example@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: istio
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: echoserver-tls-secret
  namespace: istio-system
spec:
  dnsNames:
  - my-istio-ingress.eastus.cloudapp.azure.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-prod
  secretName: echoserver-tls-secret
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
spec:
  ingressClassName: istio
  tls:
  - hosts:
    - my-istio-ingress.eastus.cloudapp.azure.com
    secretName: echoserver-tls-secret
  rules:
  - host: my-istio-ingress.eastus.cloudapp.azure.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver
            port:
              number: 8080
EOF

If you are familiar with the cert-manager ingress annotations you will notice that the following annotation is missing:

  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod

This is because we need to create the Certificate resource manually to force the istio-system namespace, so that the Kubernetes secret will also be created in the istio-system namespace and will be available to the istio-ingressgateway Pod. When using the cert-manager.io/cluster-issuer annotation, the Certificate object would be created automatically, but in the same namespace as the Ingress object.

You can test that everything is working by checking the Certificate and the Secret, and by hitting the HTTPS endpoint:

kubectl get certificate -n istio-system # READY should be True
kubectl get secret -n istio-system echoserver-tls-secret
curl -v https://my-istio-ingress.eastus.cloudapp.azure.com
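
If the Certificate does not become Ready, standard cert-manager troubleshooting applies: follow the issuance chain down to the ACME challenge (resource names are generated by cert-manager, so yours will differ):

kubectl describe certificate echoserver-tls-secret -n istio-system
kubectl get certificaterequests,orders,challenges -A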

Please clean up these resources before moving to the next step of the tutorial:

kubectl delete ingress echoserver
kubectl delete certificate echoserver-tls-secret -n istio-system
kubectl delete secret echoserver-tls-secret -n istio-system

Classic Istio Ingress Gateway

This is the most stable Ingress option in Istio, but it lacks proper cert-manager automation.

*** Update from September 2024: the AKS Istio add-on product team supports cert-manager automation with the Classic Istio Ingress Gateway by leveraging the Kubernetes Ingress API to issue the certificates, as proposed in this blog post. Refer to the example merged in PR 4511 for the official guidance. ***

When using the Istio-based service mesh add-on for AKS, if you want the istio-ingressgateway Deployment to be managed by Microsoft, you need to enable it. The following command will deploy the Classic Istio Ingress Gateway in the aks-istio-ingress namespace:

# IMPORTANT: Do not execute this command.
# It is included here for explanatory purposes only.
# This feature is not utilized in the tutorial.
az aks mesh enable-ingress-gateway \
--resource-group azureservicemesh \
--name azureservicemesh \
--ingress-gateway-type external

To make our echoserver workload accessible on port 80, simply create a Gateway and a VirtualService, as I explained in my earlier article.

# IMPORTANT: Do not create these resources.
# They are included here for explanatory purposes only.
# This feature is not utilized in the tutorial.
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway
  namespace: aks-istio-ingress
spec:
  selector:
    istio: aks-istio-ingressgateway-external
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: echoserver
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - aks-istio-ingress/istio-ingressgateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: "echoserver.default.svc.cluster.local"
        port:
          number: 8080

At this point we are blocked: there is no way to install TLS certificates with cert-manager, because cert-manager lacks an ACMEChallengeSolverHTTP01 that is compatible with classic Istio Ingress Gateways. The HTTP-01 challenge requires cert-manager to publish a token at the URL http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>. cert-manager automatically creates a temporary Ingress to publish the token URL, but the current implementation supports only the Kubernetes Ingress API and the Kubernetes Gateway API.

I found a solution to this issue that uses a mix of Kubernetes Ingress and Classic Istio Ingress Gateways.

  1. Install the IngressClass. The Kubernetes Ingress API will be used by cert-manager to create temporary Ingress resources to publish the ACME challenge tokens.
  2. Create the Certificate resource in the istio-system namespace
  3. Create the Gateway resource in the istio-system namespace referencing the Kubernetes secret generated by the Certificate resource.

This solution works as long as both the Kubernetes Ingress API and the Classic Istio Ingress Gateway API are available, and it requires the custom istio-ingressgateway installation in the istio-system namespace from the previous section.

Here is the complete yaml file:

# IMPORTANT: Replace example.com with a valid email
kubectl apply -f - <<EOF
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: example@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: istio
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: echoserver-tls-secret
  namespace: istio-system
spec:
  dnsNames:
  - my-istio-ingress.eastus.cloudapp.azure.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-prod
  secretName: echoserver-tls-secret
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: echoserver-tls
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: echoserver-tls-secret
    hosts:
    - 'my-istio-ingress.eastus.cloudapp.azure.com'
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: echoserver-tls
  namespace: default
spec:
  hosts:
  - "my-istio-ingress.eastus.cloudapp.azure.com"
  gateways:
  - istio-system/echoserver-tls
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: "echoserver.default.svc.cluster.local"
        port:
          number: 8080
EOF

Again, let’s test if everything was created correctly:

kubectl get certificate -n istio-system # READY should be True
kubectl get secret -n istio-system echoserver-tls-secret
curl -v https://my-istio-ingress.eastus.cloudapp.azure.com

Keep in mind that we’ve set up a Gateway exclusively for port 443. Therefore, only the HTTPS protocol is supported, and any attempts to connect to port 80 will result in a connection timeout.
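
If you also wanted port 80 to respond, a common Istio pattern is to add a second server entry on the same Gateway that redirects plain HTTP to HTTPS. The following is only a sketch of what the Gateway would look like with such a redirect server added; it is not applied in this tutorial:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: echoserver-tls
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  # HTTPS server from the tutorial
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: echoserver-tls-secret
    hosts:
    - 'my-istio-ingress.eastus.cloudapp.azure.com'
  # Additional HTTP server that only redirects to HTTPS (sketch)
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - 'my-istio-ingress.eastus.cloudapp.azure.com'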

Please clean up these resources before moving to the next step of the tutorial:

kubectl delete gateway echoserver-tls -n istio-system
kubectl delete virtualservice echoserver-tls
kubectl delete certificate echoserver-tls-secret -n istio-system
kubectl delete secret echoserver-tls-secret -n istio-system
kubectl delete ingressclass istio

Kubernetes Gateway API

If you’re new to the Gateway API, I recommend starting with the introductory documentation. Also, take a look at my previous Istio article where I explain how to use it on AKS.

The Kubernetes Gateway API is not installed by default in AKS. However, the installation process is straightforward: it only involves adding the Custom Resource Definitions:

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
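
You can verify that the CRDs are in place before continuing:

kubectl api-resources --api-group=gateway.networking.k8s.io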

You also need to install cert-manager with the Experimental Gateway API support feature gate enabled:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade cert-manager jetstack/cert-manager \
--install \
--create-namespace \
--wait \
--namespace cert-manager \
--set installCRDs=true \
--set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}"

Now, let’s generate the required resources. The Kubernetes Gateway resource supports the cert-manager.io/issuer annotation, eliminating the need to create a Certificate resource manually. The Issuer resource includes a gatewayHTTPRoute solver, configured to generate an HTTPRoute in the referenced Gateway; this exposes the ACME HTTP01 challenge token.

# IMPORTANT: Replace email@example.com with a valid email
kubectl apply -f - <<EOF
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: echoserver
  labels:
    istio.io/rev: asm-1-17
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: "my-istio-gateway"
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz/ready"
    service.beta.kubernetes.io/port_80_health-probe_protocol: http
    service.beta.kubernetes.io/port_80_health-probe_port: "15021"
    service.beta.kubernetes.io/port_443_health-probe_protocol: http
    service.beta.kubernetes.io/port_443_health-probe_port: "15021"
    cert-manager.io/issuer: letsencrypt
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    hostname: my-istio-gateway.eastus.cloudapp.azure.com
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
  - hostname: my-istio-gateway.eastus.cloudapp.azure.com
    name: https
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: All
    tls:
      mode: Terminate
      certificateRefs:
      - name: echoserver-tls
        kind: Secret
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: email@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: issuer-account-key
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: echoserver
            kind: Gateway
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echoserver
spec:
  parentRefs:
  - name: echoserver
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: echoserver
      port: 8080
EOF
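
While the certificate is being issued, you can watch cert-manager's intermediate objects. A temporary HTTPRoute is attached to the Gateway to serve the ACME token; its name is generated by cert-manager, so yours will differ:

kubectl get challenges,orders,certificaterequests
kubectl get httproute   # a temporary cm-acme-http-solver-* route may appear during the challenge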

This Gateway resource will create an echoserver-istio Kubernetes Deployment and a Kubernetes Service of type LoadBalancer. The Gateway’s labels and annotations are propagated to the Service object, allowing you to configure Azure Service annotations for customized health checks as needed.
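
For example, you can inspect the generated Service to confirm that the annotations were propagated (assuming, as with the Deployment, that Istio names it echoserver-istio):

kubectl get service echoserver-istio -o jsonpath='{.metadata.annotations}'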

You can check both the HTTP and HTTPS endpoints using curl. Make sure to use the hostname, because the Gateway binds only to that specific hostname.

# IMPORTANT: replace my-istio-gateway with your annotation azure-dns-label-name
kubectl get deployment
kubectl get service
kubectl get certificate echoserver-tls # READY should be True
kubectl get secret echoserver-tls
curl -v http://my-istio-gateway.eastus.cloudapp.azure.com
curl -v https://my-istio-gateway.eastus.cloudapp.azure.com

Conclusion

In summary, if you want to make your Istio service mesh accessible to outside traffic, you have several options. After working through them, it is clear to me that the Kubernetes Gateway API is the best way to set up incoming traffic.

Although there isn’t a single solution that works for everyone, the Kubernetes Gateway API is my favourite option: it makes managing traffic easier and works well with Istio. It is also expected to become the standard way to handle incoming traffic in Istio in the future.

Saverio Proto
Microsoft Azure

Customer Experience Engineer @ Microsoft - Opinions and observations expressed in this blog post are my own.