How to Redirect Unreachable Traffic to Custom Error Pages in Kubernetes Using Nginx

Mehmet Kanus
Published in Hedgus · 6 min read · Jul 25, 2024

In the fast-paced digital world, providing a seamless user experience is essential, even when your website faces downtime. Default error pages can be confusing and off-putting to users. Instead, redirecting traffic to a custom error page or a backend service can improve user satisfaction and convey important information clearly. This article focuses on setting up custom error pages in Kubernetes using Nginx Ingress Controller, ensuring that users see your specified error page or are redirected to a backend service when your application becomes unreachable.

Why Use Custom Error Pages or Backend Services?

When users encounter an unreachable application, default error pages can lead to frustration and confusion. Custom error pages and backend services provide tailored messages that can guide users, offer alternative actions, or simply inform them about the issue in a user-friendly manner. This enhances the overall user experience, maintains brand consistency, and reduces user frustration during downtimes.

Purpose of This Guide

This guide aims to provide a step-by-step approach to configuring custom error pages and backend services in Kubernetes with Nginx Ingress Controller. By following these instructions, you will learn how to set up your Kubernetes cluster to automatically redirect users to a specified error page or a backend service when your primary application is down. This setup is crucial for maintaining a professional and user-friendly interface, even in the face of application failures.

Prerequisites

  • Kubernetes Cluster: Ensure you have a Kubernetes cluster set up.
  • Nginx Ingress Controller: Nginx should be installed and configured as your ingress controller.
  • Cert-Manager: Ensure cert-manager is installed and configured to manage TLS certificates automatically.
  • Primary Application: Your main application deployed in the cluster.
  • Backup Service: A secondary service to handle requests when the primary application is unreachable.

Step-1: First, ensure your Kubernetes cluster is up and running. If you haven’t set it up yet, you can do so using a cloud provider of your choice (e.g., Azure AKS, AWS EKS, GKE). I will be setting up the Kubernetes cluster on Azure AKS.

az group create --name rg-mkanus --location westus

az aks create --resource-group rg-mkanus --name mkanus --location westus --kubernetes-version 1.30.0 --tier standard --enable-cluster-autoscaler --min-count 2 --max-count 3 --max-pods 110 --node-vm-size Standard_D8ds_v5 --network-plugin azure --network-policy azure --load-balancer-sku standard --generate-ssh-key
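
Once the cluster is provisioned, fetch the kubeconfig so kubectl targets the new cluster and confirm the nodes are ready (resource group and cluster names match the commands above):

az aks get-credentials --resource-group rg-mkanus --name mkanus
kubectl get nodes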

Step-2: Nginx Ingress Controller: Nginx should be installed and configured as your ingress controller. This involves setting up Nginx to manage and route external traffic to the appropriate services within your Kubernetes cluster, ensuring secure and efficient traffic handling.

helm repo add nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx nginx/ingress-nginx --create-namespace --namespace nginx --set controller.service.externalTrafficPolicy="Local",controller.allowSnippetAnnotations=true
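
Once the chart is installed, it is worth confirming that the controller pod is running and noting the external IP assigned to its LoadBalancer service (with this Helm release the service is typically named ingress-nginx-controller):

kubectl get pods -n nginx
kubectl get svc -n nginx ingress-nginx-controller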

Step-3: Cert-Manager: Ensure cert-manager is installed and configured to manage TLS certificates automatically. This tool helps automate the issuance and renewal of TLS certificates, ensuring your applications maintain secure HTTPS connections without manual intervention.

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager --create-namespace --namespace cert-manager --set crds.enabled=true,crds.keep=false --version v1.15.1
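
Before creating the issuer, you can verify that the cert-manager pods came up cleanly:

kubectl get pods -n cert-manager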

# kubectl apply -f clusterissuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: mehmetkanus17@gmail.com
    server: 'https://acme-v02.api.letsencrypt.org/directory'
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
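
After applying the manifest, a quick check should show the issuer in the Ready state:

kubectl get clusterissuer letsencrypt-prod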

Step-4: Primary Application: Your main application deployed in the cluster. This is the core service that your users interact with, hosted within the Kubernetes environment to leverage its scalability and orchestration capabilities.

# kubectl apply -f product.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: product
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: shopping
  name: shopping
  namespace: product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shopping
  template:
    metadata:
      labels:
        app: shopping
    spec:
      containers:
      - image: ghcr.io/mehmetkanus17/shopping:latest
        name: shopping
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: shopping
  name: shopping-svc
  namespace: product
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: shopping
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: product
  name: shopping-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - shop.mehmetkanus.com
    secretName: tls-shopping
  rules:
  - host: shop.mehmetkanus.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shopping-svc
            port:
              number: 80
  • We have deployed the product application to Kubernetes. Next, create an A record with your DNS provider for the ingress host name (shop.mehmetkanus.com), pointing at the ingress controller's external IP, and check that the site is accessible. You can find the IP as shown below.
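
To find the address for the A record, read the external IP from the ingress controller's LoadBalancer service and confirm that the ingress has picked up the host:

kubectl get svc -n nginx ingress-nginx-controller
kubectl get ingress -n product shopping-ingress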

Step-5: Backup Service: A Backup Service acts as a secondary support system that takes over handling requests if the primary application becomes unavailable. This ensures continuous availability and reliability of services by redirecting traffic to the backup system during downtime or failures of the main application. Let’s deploy the backend service and application now.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: backend
  name: backend
  namespace: product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - image: ghcr.io/mehmetkanus17/backend:latest
        name: backend
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: backend
  name: backend-svc
  namespace: product
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: backend
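
With both workloads applied, a quick look at the product namespace should show the shopping and backend pods and their services:

kubectl get pods,svc -n product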

Step-6: To ensure access to the backend service if the main application is unreachable, we need to make two updates. First, we’ll modify the NGINX ingress controller deployment to specify the default backend service. Then, we’ll update the ingress YAML file to handle custom HTTP error responses by redirecting them to the backend service.

  1. Edit the NGINX Ingress Controller Deployment:
# Update the deployment args to point the default backend at the product/backend-svc service.
# kubectl edit -n nginx deploy ingress-nginx-controller

spec:
  containers:
  - args:
    ...
    - --default-backend-service=product/backend-svc # namespace/svc_name

2. Update the Ingress YAML File: Add the following defaultBackend block to the ingress.yaml file and reapply it.

# Add the defaultBackend block to specify the backend service
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: backend-svc
      port:
        number: 80
  • Additionally, add the custom HTTP error handling annotation to the ingress.yaml file:
annotations:
  nginx.ingress.kubernetes.io/custom-http-errors: "400,401,402,403,404,405,406,415,500,501,502,503,504,505"
  • These configurations ensure that requests are redirected to the backend service whenever the main application is unreachable or returns one of the listed HTTP errors. Reapply and verify the ingress as shown below.
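
After editing, reapply the ingress manifest (ingress.yaml here, as referenced above; adjust the filename if your ingress lives in a different file) and confirm that the annotation and default backend are in place:

kubectl apply -f ingress.yaml
kubectl describe ingress -n product shopping-ingress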

Last-Step: For this scenario, let’s assume we cannot access our main application. To simulate this, either remove the main application service or set the main application’s deployment replicas to 0. After making these changes, attempt to access our site again and observe what happens.
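
One simple way to simulate the outage is to scale the main deployment down to zero (names taken from the product manifest above). Requests to shop.mehmetkanus.com should then be served by the backend service instead of a default error page:

kubectl scale deployment shopping -n product --replicas=0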

Thank you for adding my article to your reading list! If you enjoyed it and found it helpful, please consider following me and giving the article a clap. Your support means a lot and helps me continue creating content that you love.

Thanks again, and happy reading!
