Multi-Cluster/Shared Application Gateway Ingress Controller for Azure Kubernetes Service

Alfredy Toro
Globant
Mar 27, 2023

We know that Azure’s Application Gateway Ingress Controller (AGIC) is the default ingress controller to use in AKS: it ships as a built-in add-on that lets us use Azure’s native Application Gateway L7 load balancer to expose our services, so we can keep using functionality within the Azure world and not resort to external solutions like Nginx or Traefik.

Besides implementing AGIC as an add-on, there is another method to install it using Helm charts. Both methods have some differences; the main one is that with Helm we have more control over how we implement our ingress controller, which lets our Application Gateway go from being a simple controller to one that can expose other types of existing backends, like web applications.
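For contrast, enabling AGIC as the managed add-on is a single CLI call. A minimal sketch, assuming an existing cluster and Application Gateway (the resource names here are placeholders):

$ # Look up the Application Gateway ID (illustrative names)
$ APPGW_ID=$(az network application-gateway show -g myResourceGroup -n myAppGateway --query id -o tsv)
$ # Enable the AGIC add-on on the existing AKS cluster
$ az aks enable-addons --resource-group myResourceGroup --name myCluster --addons ingress-appgw --appgw-id $APPGW_ID

The rest of this article follows the Helm route instead.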

Prerequisites

For you to follow these steps successfully, you will need the following (all of them are used throughout the article):

- An active Azure subscription
- The Azure CLI (az)
- kubectl
- Helm 3

Multi-cluster/Shared Application Gateway Installation

We will use the Azure CLI (az) to create all the resources needed to deploy the Multi-cluster/Shared App Gateway. If you already have an AKS cluster with an Application Gateway available, you can skip this section and go directly to Managed Identities.

AKS Cluster Deployment

First, we need to create the Kubernetes cluster. Define the shell variables needed to create all resources:

$ RESOURCE_GROUP="aks-agichelm-demo"
$ LOCATION="eastus"
$ AKS_NAME="aks-agichelm-demo"
$ VNET_NAME="aks-vnet"
$ AKS_SUBNET="aks-subnet"

Create the Resource group:

$ az group create --name $RESOURCE_GROUP --location $LOCATION
$ RESOURCE_GROUP_ID=$(az group show --name $RESOURCE_GROUP --query id -o tsv)

Create the virtual network and the subnet for AKS:

$ az network vnet create \
  --resource-group $RESOURCE_GROUP \
  --name $VNET_NAME \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name $AKS_SUBNET \
  --subnet-prefix 10.10.0.0/16

Get the subnet ID:

$ SUBNET_ID=$(az network vnet subnet show -g $RESOURCE_GROUP -n $AKS_SUBNET --vnet-name $VNET_NAME --query id -o tsv)

Create the user-assigned managed identity for AKS:

$ az identity create --name $AKS_NAME-identity --resource-group $RESOURCE_GROUP

Get the managed identity’s resource ID and client ID:

$ IDENTITY_ID=$(az identity show --name $AKS_NAME-identity --resource-group $RESOURCE_GROUP --query id -o tsv)
$ IDENTITY_CLIENT_ID=$(az identity show --name $AKS_NAME-identity --resource-group $RESOURCE_GROUP --query clientId -o tsv)

Create the AKS cluster:

$ az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $AKS_NAME \
  --node-vm-size Standard_B4ms \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --docker-bridge-address 172.17.0.1/16 \
  --dns-service-ip 10.2.0.10 \
  --service-cidr 10.2.0.0/24 \
  --enable-managed-identity \
  --assign-identity $IDENTITY_ID

Managed Identities

We assign roles to the cluster’s internal kubelet managed identity so that the later installation of Azure AD Pod Identity succeeds. Now we have to retrieve the identities and create the role assignments:

Get the client ID of the AKS cluster’s kubelet managed identity:

$ KUBELET_CLIENT_ID=$(az aks show -g $RESOURCE_GROUP -n $AKS_NAME --query identityProfile.kubeletidentity.clientId -o tsv)

Get the Node Resource Group name:

$ NODE_RESOURCE_GROUP=$(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query nodeResourceGroup -o tsv)
$ NODE_RESOURCE_GROUP_ID=$(az group show --name $NODE_RESOURCE_GROUP --query id -o tsv)

Perform role assignments:

$ az role assignment create --role "Managed Identity Operator" --assignee $KUBELET_CLIENT_ID --scope $NODE_RESOURCE_GROUP_ID
$ az role assignment create --role "Virtual Machine Contributor" --assignee $KUBELET_CLIENT_ID --scope $NODE_RESOURCE_GROUP_ID
# User-assigned identities that are not within the node resource group
$ az role assignment create --role "Managed Identity Operator" --assignee $KUBELET_CLIENT_ID --scope $RESOURCE_GROUP_ID
$ az role assignment list --assignee $KUBELET_CLIENT_ID --all -o table

Application Gateway Deployment

We will create the Application Gateway and the other resources involved, such as the subnet and the public IP address. The following commands create these resources:

Define variables:

$ APPGW_SUBNET="appgw-subnet"
$ APP_GW_NAME="app-gw"

Create the subnet for the Application Gateway:

$ az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $APPGW_SUBNET \
  --address-prefixes 10.20.0.0/16

Create the public IP address:

$ az network public-ip create \
  --resource-group $RESOURCE_GROUP \
  --name $APP_GW_NAME-public-ip \
  --allocation-method Static \
  --sku Standard

Create the Application Gateway:

$ az network application-gateway create \
  --resource-group $RESOURCE_GROUP \
  --name $APP_GW_NAME \
  --location $LOCATION \
  --vnet-name $VNET_NAME \
  --subnet $APPGW_SUBNET \
  --public-ip-address $APP_GW_NAME-public-ip \
  --sku WAF_v2 \
  --capacity 1

Azure AD Pod Identity and AGIC with Helm

Now we can interact directly with the cluster to install components, such as Azure AD Pod Identity and AGIC, using Helm. The following steps show how to install and configure Azure AD Pod Identity and customize the Helm chart:

Obtain AKS credentials:

$ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME

Azure AD Pod Identity is mandatory in order to install and configure AGIC:

$ helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
$ helm install aad-pod-identity aad-pod-identity/aad-pod-identity -n kube-system

Check the Azure AD Pod Identity pods:

$ kubectl get pods -n kube-system -l app.kubernetes.io/name=aad-pod-identity

Next, create an identity for the AGIC controller:

$ az identity create -g $RESOURCE_GROUP -n $APP_GW_NAME-identity

Get the client ID, the identity resource ID, and the Application Gateway ID:

$ APP_GW_IDENTITY_CLIENT_ID=$(az identity show -g $RESOURCE_GROUP -n $APP_GW_NAME-identity -o tsv --query "clientId")
$ APP_GW_IDENTITY_RESOURCE_ID=$(az identity show -g $RESOURCE_GROUP -n $APP_GW_NAME-identity -o tsv --query "id")
$ APP_GW_ID=$(az network application-gateway show -g $RESOURCE_GROUP -n $APP_GW_NAME --query id -o tsv)

Assign the Contributor role to the identity over the Application Gateway, and the Reader role over the resource group:

$ az role assignment create \
  --role "Contributor" \
  --assignee $APP_GW_IDENTITY_CLIENT_ID \
  --scope $APP_GW_ID
$ az role assignment create \
  --role "Reader" \
  --assignee $APP_GW_IDENTITY_CLIENT_ID \
  --scope $RESOURCE_GROUP_ID

In Azure Cloud Shell, set up the AGIC Helm repository:

$ helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
$ helm repo update

Now we have to create the helm-config.yaml file (you can use the vim or nano editor):

Helm config file

In that file, we are creating a specific ingress class called azure-application-gateway-dev for only two specific namespaces. We also have two authentication options; we will use the managed identity we created in the previous steps (APP_GW_IDENTITY_CLIENT_ID, APP_GW_IDENTITY_RESOURCE_ID).
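As a reference, here is a minimal sketch of what such a helm-config.yaml could look like, based on the values documented for the AGIC chart (the subscription ID is a placeholder; verify the field names against your chart version):

# helm-config.yaml (sketch; verify against your AGIC chart version)
verbosityLevel: 3

appgw:
  subscriptionId: <your-subscription-id>
  resourceGroup: aks-agichelm-demo
  name: app-gw
  shared: true    # needed later for the multi-cluster/shared scenario

kubernetes:
  # Only watch the two development namespaces
  watchNamespace: dev-1,dev-2
  # Custom ingress class matched by this AGIC instance
  ingressClass: azure-application-gateway-dev

armAuth:
  # The alternative authentication option is servicePrincipal
  type: aadPodIdentity
  identityResourceID: <APP_GW_IDENTITY_RESOURCE_ID>
  identityClientID: <APP_GW_IDENTITY_CLIENT_ID>

rbac:
  enabled: true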

Before running the Helm chart installation, the dev-1 and dev-2 namespaces must already exist:

$ kubectl create namespace dev-1
$ kubectl create namespace dev-2

Install the Helm chart:

$ helm install agic-dev -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure -n kube-system

Check that the ingress controller pod is in a “Running” state with this command:

$ kubectl get pods -n kube-system -l release=agic-dev
NAME                                      READY   STATUS    RESTARTS   AGE
agic-dev-ingress-azure-55f75fd8d7-l5r8j   1/1     Running   0          167d

Now AGIC is ready to receive Ingress configurations in the two previous namespaces. Apply your deployments and services using an Ingress template similar to this one:

Ingress manifest file
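A minimal sketch of such an Ingress, assuming a hypothetical demo-app Service listening on port 80 in the dev-1 namespace (the hostname and names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  namespace: dev-1
  annotations:
    # Must match the ingress class defined in helm-config.yaml
    kubernetes.io/ingress.class: azure-application-gateway-dev
spec:
  rules:
    - host: demo-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80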

Multi-cluster / Shared App Gateway

By default, AGIC assumes full ownership and control of the Application Gateway it is linked to. But we can change this behavior and use the same Application Gateway to expose an application hosted on a virtual machine or in App Service together with an AKS cluster.

This can be achieved by configuring an AzureIngressProhibitedTarget object: we declare the hostnames of the other backends, and all the settings related to those hostnames (backend pools, listeners, HTTP settings) are kept alongside the services exposed by AKS.

AzureIngressProhibitedTarget manifest file
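A minimal sketch of such an object, protecting a hypothetical legacy.example.com backend from being pruned by AGIC (the name and hostname are illustrative):

apiVersion: appgw.ingress.k8s.io/v1
kind: AzureIngressProhibitedTarget
metadata:
  name: legacy-webapp
spec:
  hostname: legacy.example.com

Note that the AzureIngressProhibitedTarget CRD is installed by the AGIC chart when appgw.shared is set to true in helm-config.yaml.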

By default, AGIC will remove, within seconds, the configuration of any hostname that is not declared in one of these objects.

Conclusions

In this article, we deployed a Kubernetes cluster with all the necessary resources and, using a Helm chart, implemented a Multi-Cluster/Shared Application Gateway Ingress Controller for different endpoints and namespaces.

With this implementation of the Application Gateway, we have more control over our ingress controller, allowing multiple Ingress resources across different namespaces using native, Microsoft-supported solutions. This kind of implementation helps reduce costs and keep development environments under control.

This scenario is also valid when we must migrate applications to containers that need to run together but with different endpoints and certificates, configuring them through key/value annotations on the Ingress resources.
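For example, a few AGIC annotations commonly used for this purpose (the certificate name is illustrative; check the AGIC documentation for your version for the full list):

metadata:
  annotations:
    # Redirect HTTP traffic to HTTPS
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    # Use HTTPS between the Application Gateway and the backend
    appgw.ingress.kubernetes.io/backend-protocol: "https"
    # Use a certificate already installed on the Application Gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: "my-certificate"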
