Domain name-based routing on AKS (Azure Kubernetes Service) using ingress, cert-manager and External DNS

DevOps Guru
8 min read · Jun 8, 2023


1. Install Azure CLI. For more info, click here. Check the version of the Azure CLI.
az --version

2. Log on to Azure.

az login

3. AKS Cluster: Create an AKS cluster and connect to it. For the detailed command, click here.

Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for and manage the nodes attached to the AKS cluster.

Create cluster using Azure Portal:

Resource-group: aks-rg

Cluster name: aks-cluster

scale method: manual

node count: 1

Networking config: Azure CNI

Network policy: Azure

Create cluster using Azure CLI:

# create resource group with name: aks-rg
az group create --location eastus --name aks-rg

# create AKS cluster with name: aks-cluster
az aks create --name aks-cluster \
--resource-group aks-rg \
--node-count 1 \
--network-plugin azure \
--enable-managed-identity \
--generate-ssh-keys

# connect to cluster
az aks get-credentials --name aks-cluster --resource-group aks-rg
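After merging the credentials, a quick sanity check confirms kubectl is talking to the new cluster (a sketch; the node name depends on your node pool):

```shell
# List the cluster's nodes; with --node-count 1 you should see a single Ready node
kubectl get nodes -o wide
```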

4. Static IP: A Standard SKU public IP is recommended for production workloads. Create a static public IP and associate it with the ingress controller during installation.

# Get nodeResourceGroup of the AKS cluster 
az aks show --resource-group aks-rg --name aks-cluster --query nodeResourceGroup -o tsv

az network public-ip create \
--resource-group <nodeResourceGroup> \
--name myPubIP \
--sku Standard \
--query publicIp.ipAddress -o tsv
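If you need the address again later (for example in step 5.4), it can be queried at any time (a sketch, assuming the name myPubIP used above):

```shell
# Print just the allocated IPv4 address of the static public IP
az network public-ip show \
  --resource-group <nodeResourceGroup> \
  --name myPubIP \
  --query ipAddress -o tsv
```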

5. Ingress Controller: Install the ingress controller using Helm.

5.1) Install Helm. For details, click here.

brew install helm                # for macOS
choco install kubernetes-helm    # for Windows

#check version of helm
helm version

5.2) Create a separate namespace for the ingress resources.

kubectl create ns ingress

#get namespaces
kubectl get ns

5.3) Add helm repo. For more info, click here.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

5.4) Install the chart ingress-nginx and assign the static public IP to it. You can override the default values using --set flags or a values.yml file.

Using --set flags:

helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.externalTrafficPolicy=Local \
--set controller.service.loadBalancerIP="REPLACE_WITH_PUBLIC_IP_FROM_STEP4"

Using a values.yml file:

Specify the values you want to override in the values.yml file and run the helm install command as below.

#values.yml file
controller:
  replicaCount: 2
  nodeSelector:
    kubernetes.io/os: linux
  service:
    externalTrafficPolicy: Local
    loadBalancerIP: "REPLACE_WITH_PUBLIC_IP_FROM_STEP4"
defaultBackend:
  nodeSelector:
    kubernetes.io/os: linux

helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress -f values.yml

5.5) Get the pods, services and deployments in the ingress namespace.

kubectl get po,svc,deploy -n ingress

5.6) Get the public IP and open it in a browser. Output: 404 Not Found from nginx. This is expected, since no ingress rules have been created yet.
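The same check can be scripted with curl instead of a browser (a sketch; replace the placeholder with the public IP from step 4):

```shell
# Expect "HTTP/1.1 404 Not Found" served by the nginx ingress controller,
# since no ingress rules exist yet
curl -i http://REPLACE_WITH_PUBLIC_IP_FROM_STEP4/
```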

6) DNS: Purchase a domain and create a DNS zone.

6.1) Purchase a domain from one of the domain registrars.

  • GoDaddy
  • Namecheap
  • AWS Route 53
  • Google domains
  • Gandi
  • Wix
  • Cloudflare
  • …….

6.2) Create a DNS zone in the Azure portal in a separate resource group: dns-RG. This provides four Azure name servers.


Name servers after creating the DNS zone in Azure:

Name server 1: ns1-10.azure-dns.com.
Name server 2: ns2-11.azure-dns.net.
Name server 3: ns3-12.azure-dns.org.
Name server 4: ns4-13.azure-dns.info.

6.3) Update the name servers at your domain provider (e.g. AWS Route 53, Namecheap). Log in to the domain provider's page, choose your domain and hit Manage. Go to Nameservers and replace them with the Azure name servers.

6.4) Verify nameserver update.

nslookup -type=NS <yourDomainName>

7) ExternalDNS: To create/update record sets in Azure DNS from AKS, ExternalDNS needs permissions on Azure DNS. The permissions are granted via a user-assigned managed identity.

7.1) Create a user-assigned managed identity in the Azure portal, in the same resource group where your AKS cluster resides (i.e. aks-rg).

7.2) Assign a role to the managed identity.

  • Open the MSI: managed-identity-externaldns-access-to-dnszones
  • Click Azure role assignments -> Add role assignment

Make a note of the client ID; it is used later in the azure.json file.

7.3) Associate the MSI with the AKS cluster VMSS.

  • Go to All Services -> Virtual Machine Scale Sets (VMSS) -> open the aks-cluster-related VMSS (aks-agentpool-12345687-vmss)
  • Go to Settings -> Identity -> User assigned -> Add -> managed-identity-externaldns-access-to-dnszones
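The portal steps in 7.1-7.3 can also be sketched with the Azure CLI (assumptions: the identity name used above, the DNS zone in dns-RG, and the "DNS Zone Contributor" built-in role, which is what ExternalDNS's Azure documentation uses):

```shell
# 7.1 Create the user-assigned managed identity in aks-rg
az identity create --resource-group aks-rg \
  --name managed-identity-externaldns-access-to-dnszones

# Capture the identity's client ID (needed in azure.json) and principal ID
CLIENT_ID=$(az identity show -g aks-rg \
  -n managed-identity-externaldns-access-to-dnszones --query clientId -o tsv)
PRINCIPAL_ID=$(az identity show -g aks-rg \
  -n managed-identity-externaldns-access-to-dnszones --query principalId -o tsv)

# 7.2 Grant the identity "DNS Zone Contributor" on the DNS resource group
DNS_RG_ID=$(az group show --name dns-RG --query id -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" \
  --role "DNS Zone Contributor" --scope "$DNS_RG_ID"

# 7.3 Attach the identity to the cluster's VMSS
NODE_RG=$(az aks show -g aks-rg -n aks-cluster --query nodeResourceGroup -o tsv)
VMSS_NAME=$(az vmss list -g "$NODE_RG" --query "[0].name" -o tsv)
az vmss identity assign -g "$NODE_RG" -n "$VMSS_NAME" \
  --identities "$(az identity show -g aks-rg \
    -n managed-identity-externaldns-access-to-dnszones --query id -o tsv)"
```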

7.4) ExternalDNS expects to find the MSI credentials in a JSON file called azure.json, saved as a Kubernetes secret. Create the azure.json file and update the client ID from step 7.2.

{
  "tenantId": "<your tenant ID>",
  "subscriptionId": "<your subscription ID>",
  "resourceGroup": "dns-RG",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "<client ID from step 7.2>"
}

7.5) Create K8S secret using azure.json file.

$ kubectl create secret generic azure-config-file --from-file=azure.json

7.6) The ExternalDNS pod looks for the above secret mounted in the /etc/kubernetes folder, so the pod spec mounts the secret at mountPath /etc/kubernetes via volumeMounts and references it under volumes. Review the ExternalDNS manifest below and apply it.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.11.0
          args:
            - --source=service
            - --source=ingress
            # - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
            - --provider=azure
            # - --azure-resource-group=externaldns # (optional) use the DNS zones from the specified resource group
          volumeMounts:
            - name: azure-config-file
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: azure-config-file
          secret:
            secretName: azure-config-file # name of the secret created from azure.json in step 7.5

$ kubectl apply -f externaldns.yml

o/p:
serviceaccount/external-dns created
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created

$ kubectl get pod
o/p:
externalDNSpod is running....

7.7) Verify externalDNS pod logs.

kubectl logs -f <externalDNSpodname>

8) Ingress-TLS (cert-manager): cert-manager runs within your Kubernetes cluster as a series of deployment resources. It utilizes CustomResourceDefinitions (CRDs) to configure Certificate Authorities and request certificates. cert-manager requires a number of CRD resources to be installed into your cluster as part of the installation. This can be done either manually, using kubectl, or via the installCRDs option when installing the Helm chart. cert-manager can be installed using regular YAML manifests or using Helm. For more info about cert-manager, click here.

8.1) Install cert-manager using the Helm repo jetstack, chart cert-manager. For more info, click here.

8.1.1) Label the ingress namespace to disable resource validation.

kubectl label namespace ingress cert-manager.io/disable-validation=true

8.1.2) Add the jetstack Helm repository.

helm repo add jetstack https://charts.jetstack.io
helm repo update

8.1.3) Install the cert-manager Helm chart.

helm install \
cert-manager jetstack/cert-manager \
--namespace ingress \
--version v1.12.1 \
--set installCRDs=true

8.2) Verify Cert Manager pods/service.

kubectl get pods,svc --namespace ingress

cert-manager is deployed successfully. To begin issuing certificates, you need to set up a ClusterIssuer or Issuer resource. ClusterIssuer and Issuer are K8S resources that represent certificate authorities (CAs) able to generate signed certificates by honoring certificate signing requests. An Issuer is a namespaced resource; a ClusterIssuer can issue certificates across all namespaces.

8.3) Create clusterissuer (K8S manifest).

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email used for ACME registration
    email: <youremail>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx

8.4) Deploy the ClusterIssuer.

kubectl apply -f clusterissuer.yml

o/p:
clusterissuer.cert-manager.io/letsencrypt created

8.5) List Cluster Issuer

kubectl get clusterissuer

8.6) Describe Cluster Issuer

kubectl describe clusterissuer letsencrypt
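Rather than reading the full describe output, readiness can be checked directly with kubectl's jsonpath output (a sketch):

```shell
# Prints "True" once the ACME account has been registered successfully
kubectl get clusterissuer letsencrypt \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```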

9) Review your application manifests. Use the publicly available Docker images: anildevops21/app1:1.0 and anildevops21/app2:1.0.

Yaml files:

app1-deploy.yml and app1-clusterIP.yml

app2-deploy.yml and app2-clusterIP.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
  labels:
    app: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: anildevops21/app1:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1-clusterip-service
  labels:
    app: app1
spec:
  type: ClusterIP
  selector:
    app: app1
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-deployment
  labels:
    app: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: app2
          image: anildevops21/app2:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app2-clusterip-service
  labels:
    app: app2
spec:
  type: ClusterIP
  selector:
    app: app2
  ports:
    - port: 80
      targetPort: 80

10) Review your ingress manifest.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  rules:
    - host: app1.anildevops.homes
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-clusterip-service
                port:
                  number: 80
    - host: app2.anildevops.homes
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-clusterip-service
                port:
                  number: 80
  tls:
    - hosts:
        - app1.anildevops.homes
      secretName: sapp1-secret
    - hosts:
        - app2.anildevops.homes
      secretName: sapp2-secret

11) Apply all five manifests above.

$ kubectl apply -f app1-deploy.yml
....

o/p:
service/app1-clusterip-service created
service/app2-clusterip-service created
deployment.apps/app1-deployment created
deployment.apps/app2-deployment created
ingress.networking.k8s.io/ingress-ssl created

12) Watch for pod/certificate creation. The certificate must reach READY: True (not False).

kubectl get pod

# Verify cert-manager pod logs
kubectl get pods -n ingress
kubectl logs -f <certmanagerpodname> -n ingress
# Verify SSL certificates (READY should be True)
kubectl get certificate
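Once the certificate shows READY: True, you can optionally inspect the certificate nginx actually serves with openssl (a sketch; assumes the DNS record already resolves to the ingress IP):

```shell
# Print the issuer and validity window of the certificate served for app1;
# the issuer should be Let's Encrypt
echo | openssl s_client -connect app1.anildevops.homes:443 \
  -servername app1.anildevops.homes 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```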

13) Access your application using your domain name with https://.

https://app1.anildevops.homes

app 1 content

https://app2.anildevops.homes

app2 content
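The browser checks above can also be scripted (a sketch; expect 200 for both hosts once DNS has propagated and the certificates are issued):

```shell
# Print the HTTP status code for each application endpoint
for host in app1.anildevops.homes app2.anildevops.homes; do
  curl -s -o /dev/null -w "%{http_code} https://$host\n" "https://$host"
done
```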

HAPPY LEARNING!!!
