NGINX Ingress Controller on Kubernetes

karrier.io
Sep 9, 2018 · 5 min read

In this tutorial we will walk through deploying the community edition nginx-ingress-controller to provide L7 HTTP load balancing for Kubernetes Ingress resources. For a little background, an Ingress in Kubernetes is essentially the declaration of an L7 HTTP load balancer and its associated routing rules, but before you can make use of one, you need an Ingress Controller. That’s where the nginx-ingress-controller comes in: it watches the Kubernetes API for Ingress resources and satisfies them with the battle-tested and feature-rich NGINX L7 load balancer.

For HTTP based services, an L7 load balancer can provide a number of benefits and features over their L4 counterparts, such as sharing the cost of a single load balancer across multiple services, path-based routing, TLS termination, and TLS passthrough.
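To make those features concrete, here is a sketch of an Ingress resource that a controller like this one would satisfy; the hostname, TLS Secret, and Service names are purely illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-com-tls   # TLS terminated at the load balancer
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /cart
        backend:
          serviceName: cart            # /cart routed to the "cart" Service
          servicePort: 80
      - path: /checkout
        backend:
          serviceName: checkout        # /checkout routed to the "checkout" Service
          servicePort: 80
```

One load balancer, one IP, and one certificate serve both backend Services, which is the cost-sharing and path-based routing advantage described above.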

Prerequisites

To follow along with this tutorial you can use any conformant Kubernetes cluster but we will use Karrier, which is our own hosted solution. With Karrier you get immediate access to pre-built and fully managed Kubernetes clusters around the globe. Visit karrier.io to learn more.

1. Create the Kubernetes manifest

First define a few ConfigMaps that our Ingress Controller requires to hold its configuration.

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  labels:
    app: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  labels:
    app: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  labels:
    app: ingress-nginx
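These ConfigMaps start out empty. As an aside, the tcp-services and udp-services ConfigMaps are how you later expose raw TCP or UDP services through the controller: each data entry maps an external port to a Service, in the form "namespace/service:port". A sketch, with a hypothetical minio Service:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
data:
  "9000": default/minio:9000   # listen on TCP 9000, proxy to Service "minio" in namespace "default"
```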

Next define a ServiceAccount, Role, and RoleBinding to provide the ingress controller with the minimum permissions needed to perform its function.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  labels:
    app: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  labels:
    app: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - ingress-controller-leader-nginx
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-binding
  labels:
    app: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount

Now define a Deployment for your default backend. When the ingress controller is unsure of the intended destination of a given request, it will route the request to this backend. It’s important to note here that the default backend can be any application, so long as it serves a 404 page at ‘/’ and a 200 response at ‘/healthz’.
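That contract is simple enough to sketch. The following minimal Python server (an illustration of the contract, not the actual code of the defaultbackend image used below) satisfies both requirements:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DefaultBackend(BaseHTTPRequestHandler):
    """Minimal default backend satisfying the ingress controller's contract:
    HTTP 200 at /healthz, HTTP 404 for everything else."""

    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)          # health checks succeed
            body = b"ok"
        else:
            self.send_response(404)          # every other path gets the 404 page
            body = b"default backend - 404"
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep probe traffic out of the logs

def serve(port=8080):
    """Listen on the same port the Deployment's probe below targets."""
    HTTPServer(("", port), DefaultBackend).serve_forever()
```

Any container honoring this contract could stand in as the default backend; the manifest below uses Google’s prebuilt defaultbackend image instead.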

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi

You will also need to define a Service to front the default backend.

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

Now define a Deployment to manage the ingress controller. There are a few important things to call attention to here.

  1. The “serviceAccountName” in the template’s pod spec must match the ServiceAccount we defined above. If they don’t match, the ingress controller won’t be granted the permissions it needs to operate.
  2. The “livenessProbe” and “readinessProbe” are both configured to watch the ingress controller’s ‘/healthz’ endpoint. This ensures only healthy ingress controller pods are in circulation at any given time, and any unhealthy ones are restarted.
  3. Setting the “--watch-namespace=$(POD_NAMESPACE)” flag via the container’s “command” field configures the ingress controller to only watch for Ingress resources in its own namespace.
  4. To make this manifest as reusable as possible, the namespace name is dynamically populated via the Kubernetes Downward API. If you’re interested in learning more about this feature, see the official Kubernetes documentation.
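The Downward API injection mentioned in point 4 follows this pattern, which you will see in the container spec of the manifest:

```yaml
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace  # resolved by Kubernetes when the pod is created
```

Kubernetes then expands $(POD_NAMESPACE) references in the container’s “command” field to the injected value.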
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    app: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: karrier/nginx-ingress-controller:0.19.0
        command:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --enable-ssl-passthrough
        - --force-namespace-isolation=true
        - --http-port=8080
        - --https-port=8443
        - --ssl-passthrough-proxy-port=8442
        - --watch-namespace=$(POD_NAMESPACE)
        securityContext:
          runAsUser: 33
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1

Almost there. Assuming your Kubernetes cluster is enforcing NetworkPolicies, you will need to define two: one to allow all traffic into your ingress controller, and another for your default backend.

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-nginx-network-policy
  labels:
    app: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app: ingress-nginx
  ingress:
  - {}

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-http-backend-network-policy
spec:
  podSelector:
    matchLabels:
      app: default-http-backend
  ingress:
  - {}

Finally, let’s define a Service to front this Deployment. Setting the “type” value to “LoadBalancer” in the Service spec ensures Kubernetes provisions a load balancer with a public IP.

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

2. Submit the manifest to Kubernetes

With the manifest written and saved to your local directory as “nginx-ingress-controller.yaml”, run the following command to submit it to your Kubernetes cluster.

kubectl apply -f nginx-ingress-controller.yaml

3. Check that the ingress controller is running

Run the following command and wait for the Pods to enter the running state.

kubectl get pods -l app=ingress-nginx -w 

Check the ingress controller’s logs for any errors.

kubectl logs -l app=ingress-nginx

Last but not least, note down the external IP of the load balancer fronting your ingress controllers. Use this IP as the value of any DNS records pointed at your cluster. Note that we query the “ingress-nginx” Service we just defined, not the Deployment.

kubectl get service ingress-nginx

That’s it

You should now have an ingress controller in place. From this point on, any Ingress resources submitted into your Kubernetes namespace will be satisfied by this controller in the form of an NGINX L7 HTTP load balancer. In future tutorials we will make use of Kubernetes Ingress resources, address security in our Minio tutorial with automatic TLS certificates from Let’s Encrypt, and play with automating external DNS records in our ExternalDNS tutorial.


Originally published at karrier.io.


