Using the nginx-ingress controller to restrict access by IP (IP whitelisting) for a service deployed to a Kubernetes (AKS) cluster

Maninderjit (Mani) Bindra
May 19, 2018


While working on a project earlier this week, we were given the following requirements:

  1. Create a managed Kubernetes (AKS) cluster on Azure, in an existing Azure subnet, using an ARM template / az commands
  2. For a service deployed to the AKS cluster, restrict access to certain client source IPs (IP whitelisting)
  3. For a service deployed to the AKS cluster, auto-provision Let's Encrypt TLS certificates

This post details point 2 above. I will create separate posts for points 1 and 3 in the near future.

The rest of this post assumes that the AKS Kubernetes cluster is available and that you have Helm installed and initialized against the cluster (on Helm 2 this means helm init has been executed; Helm 3 needs no init step).
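If you want to sanity check the setup first (this assumes kubectl is already pointed at the AKS cluster), the following quick checks should both succeed:

$ kubectl get nodes
$ helm version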

Create or update the nginx-ingress controller

The first thing we do is install the nginx-ingress controller using Helm. The GitHub page for the ingress-nginx Helm chart is at https://github.com/kubernetes/ingress-nginx. The install commands are:

Helm Repo Initialization

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update

Ingress controller install command

$ helm install ingress-nginx ingress-nginx/ingress-nginx  --set controller.service.externalTrafficPolicy=Local

The default value of controller.service.externalTrafficPolicy in the ingress-nginx Helm chart is ‘Cluster’; we need to change this value to ‘Local’. With the default ‘Cluster’ value, the ingress controller does not see the actual source IP of the client request, only an internal cluster IP. After setting this value to ‘Local’, the ingress controller receives the unmodified source IP of the client request.
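If the controller is already installed with the default policy, a full reinstall is not required. A minimal sketch of the two usual ways to switch the policy, assuming the Helm release is named ingress-nginx as in the install command above (which means the controller service is typically named ingress-nginx-controller):

$ helm upgrade ingress-nginx ingress-nginx/ingress-nginx --set controller.service.externalTrafficPolicy=Local

or patch the controller service directly:

$ kubectl patch svc ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'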

Before we apply the ingress rule with source IP whitelisting for a service, let us create a sample web app deployment and service:

Create the hello world web server deployment and service to test the whitelisting

Create Deployment

$ kubectl create deployment web --image=tutum/hello-world

Expose the deployment as a service

$ kubectl expose deployment web --port=80

Check that the service and pod are up

$ kubectl get svc web
NAME      CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
web       10.3.30.224   <none>        80/TCP    5m

$ kubectl get pod -l app=web
NAME                   READY     STATUS    RESTARTS   AGE
web-5bff8ffd8c-twxwp   1/1       Running   0          17m
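Optionally, before bringing the ingress controller into the picture, you can sanity check the service with a port-forward (a quick local test, not strictly required):

$ kubectl port-forward svc/web 8080:80

and in a second terminal:

$ curl -I localhost:8080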

Apply the ingress IP Whitelisting rule for the service

The annotation we need to apply to the Kubernetes Ingress resource is nginx.ingress.kubernetes.io/whitelist-source-range; it is documented in the ingress-nginx annotations reference.
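The annotation value accepts a comma-separated list of CIDRs, so whitelisting multiple sources looks like this (the second range below is an illustrative placeholder):

nginx.ingress.kubernetes.io/whitelist-source-range: 49.36.X.X/32,203.0.113.0/24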

We now create the ingress rule

Ingress YAML file (saved as web-ingress-whitelist.yaml)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/whitelist-source-range: 49.36.X.X/32
    # depending on the ingress controller version, the annotation above may
    # need the nginx. prefix removed, i.e.
    # ingress.kubernetes.io/whitelist-source-range: 49.36.X.X/32
spec:
  rules:
  - host: web.manitestdomain.com
    http:
      paths:
      - path: /(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: web
            port:
              number: 80

Apply the ingress rule

$ kubectl apply -f web-ingress-whitelist.yaml
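To confirm the rule landed, you can inspect the ingress (exact output will vary by cluster and controller version):

$ kubectl get ingress web-ingress
$ kubectl describe ingress web-ingress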

That's it, the whitelisting is in place.

Testing and Debugging the whitelisting rules

Before the steps below were executed, DNS configuration was modified so that the domain web.manitestdomain.com pointed to the public IP of the nginx-ingress controller service.
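The public IP can be read from the controller's service. With the install command used earlier, the service is typically named ingress-nginx-controller; the EXTERNAL-IP column is the address the DNS record should point at:

$ kubectl get svc ingress-nginx-controller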

Request from a whitelisted IP

$ curl ipinfo.io/ip
49.36.X.X
$ curl -I web.manitestdomain.com
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Sat, 19 May 2018 06:07:01 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Powered-By: PHP/5.6.14

Request from a non-whitelisted IP

$ curl ipinfo.io/ip
223.Y.Y.Y
$ curl -I web.manitestdomain.com
HTTP/1.1 403 Forbidden
Server: nginx/1.13.5
Date: Sat, 19 May 2018 06:15:47 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive

Checking nginx-ingress controller logs

$ kubectl get pods
NAME                                                           READY   STATUS    RESTARTS   AGE
nginx-ingress-nginx-ingress-controller-586c47b885-rxm72       1/1     Running   0          1d
nginx-ingress-nginx-ingress-default-backend-65f4cd97fb-sbh7c   1/1     Running   0          1d
web-5bff8ffd8c-twxwp
$ kubectl logs nginx-ingress-nginx-ingress-controller-586c47b885-rxm72 -f
..
..
..
49.36.X.X - [49.36.X.X] - - [19/May/2018:06:11:21 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" 87 0.002 [default-web-80] 10.2.1.7:80 0 0.002 200
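To pick out blocked requests specifically, the same logs can be filtered for 403 responses (a simple sketch; the pod name will differ in your cluster):

$ kubectl logs nginx-ingress-nginx-ingress-controller-586c47b885-rxm72 | grep ' 403 '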

Thanks for reading this post. I hope you liked it. Please feel free to share your comments and views here or at @manisbindra.

It has been pointed out to me by @brunzefb in his tweet that there may be an issue when using externalTrafficPolicy=Local in more recent versions of nginx-ingress along with an AWS ELB. At his request I am including a link to the relevant Stack Overflow post: https://stackoverflow.com/questions/66648243/deploying-ingress-nginx-controller-elb-in-eks-cluster-with-multiple-nodes


Maninderjit (Mani) Bindra

Gopher, Cloud, Containers, K8s, DevOps | LFCS | CKA | CKS | Principal Software Engineer @ Microsoft