Ingress on Two-Tier Applications (WordPress-MySQL and Gogs-Postgres), Plus Ingress on a Canary Deployment!

Anirudhadak
7 min read · Jul 5, 2024


What is Ingress?

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Ingress is an API object that manages external access to services within a cluster, typically HTTP and HTTPS routes. It provides a way to define rules for routing traffic to different services based on the hostname, URL path, or other HTTP attributes. Ingress controllers are responsible for fulfilling the Ingress rules by configuring the underlying network load balancers or proxies.

Key Concepts of Ingress

  1. Ingress Resource: This is the YAML configuration file where you define the routing rules. It specifies how to route external HTTP/S traffic to services within your Kubernetes cluster.
  2. Ingress Controller: A daemon responsible for implementing the Ingress rules. Examples include NGINX, Traefik, and HAProxy. The Ingress controller watches for changes to Ingress resources and configures the underlying proxy to ensure traffic is routed accordingly.

Why Use Ingress?

  • Consolidated Entry Point: Provides a single entry point for all HTTP/S traffic, making it easier to manage and secure.
  • Advanced Routing: Supports complex routing rules based on hostnames, paths, headers, etc.
  • Load Balancing: Distributes traffic across multiple backend services.

Let’s See the Practical Demonstration

Prerequisites:

A Kubernetes cluster lab such as Killercoda: https://killercoda.com/playgrounds/scenario/kubernetes

Ingress controller manifest: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml

Let’s Start with the Two-Tier Applications

Step 1: First, install the Ingress controller using the ingress-nginx manifest URL above. The controller is what implements the Ingress rules defined in the Ingress YAML file.

Check the ingress-nginx namespace and confirm that the pods and services were created successfully in that namespace.
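The namespace check can be done with kubectl (this assumes the manifest above created the default ingress-nginx namespace):

```shell
# List the controller pods and services created by the manifest
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx

# Optionally wait until the controller pod is Ready before creating Ingress resources
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```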

Step 2: Create the backend database Deployment and its Service, which the frontend application will connect to.

Step 3: Create the frontend application Deployment and Service, then create the Ingress resource.
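The Service manifests are not shown in the article's screenshots; a minimal sketch for the WordPress-MySQL pair might look like the following (names match the Deployments below, ports assumed from the images' defaults — the Gogs/Postgres Services would be analogous on ports 3000 and 5432):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  selector:
    app: mydb          # matches the mydb Deployment's pod label
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: wp
spec:
  type: NodePort       # assumption: exposed for direct testing
  selector:
    app: wp
  ports:
  - port: 80
    targetPort: 80
```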

Step 4: Get the Ingress controller's Service from the ingress-nginx namespace, note the host IP and NodePort, and curl <host-ip>:<nodeport>/app1 (or /app2, /app3). Since these paths are not defined in the Ingress rules, the controller returns a 404 Not Found error.

Step 5: Curling <host-ip>:<nodeport>/wp throws no error; WordPress is successfully accessible. On the /gogs path the response is '<a href="/install">Found</a>', so the Gogs service is also reachable.

Step 6: In the same way, curl the Service IP and port for Gogs and WordPress directly; the output matches what the Ingress load balancer returns.
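The curl checks in Steps 4-6 look roughly like this (IPs and NodePorts are placeholders — substitute the values from kubectl get svc):

```shell
# Find the Ingress controller's NodePort
kubectl get svc -n ingress-nginx ingress-nginx-controller

# Undefined path -> 404 from the controller's default backend
curl http://<host-ip>:<nodeport>/app1

# Paths defined in the Ingress rules
curl http://<host-ip>:<nodeport>/wp     # WordPress front page
curl http://<host-ip>:<nodeport>/gogs   # returns <a href="/install">Found</a>

# Bypass the Ingress and hit the Services directly
curl http://<wp-cluster-ip>:80
curl http://<gogs-cluster-ip>:3000
```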

The Ingress YAML file below defines the routing rules for both two-tier applications, covering the frontend and backend of each.

YAML Files:

MySQL YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - image: docker.io/mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: centos
        - name: MYSQL_DATABASE
          value: simplilearn

WordPress YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wp
  name: wp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wp
  template:
    metadata:
      labels:
        app: wp
    spec:
      containers:
      - image: docker.io/wordpress
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mydb
        - name: WORDPRESS_DB_PASSWORD
          value: centos
        - name: WORDPRESS_DB_USER
          value: root
        - name: WORDPRESS_DB_NAME
          value: simplilearn

Postgres YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: centos/postgresql-96-centos7
        name: postgresql-96-centos7
        env:
        - name: POSTGRESQL_ADMIN_PASSWORD
          value: password
        - name: POSTGRESQL_USER
          value: user1
        - name: POSTGRESQL_PASSWORD
          value: userpass
        - name: POSTGRESQL_DATABASE
          value: database1
        ports:
        - containerPort: 5432   # PostgreSQL's default port (3306 is MySQL's)

Gogs YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gogs
  name: gogs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gogs
  template:
    metadata:
      labels:
        app: gogs
    spec:
      containers:
      - image: docker.io/gogs/gogs
        name: gogs
        env:
        - name: DB_TYPE
          value: postgres
        - name: HOST
          value: postgres
        - name: NAME
          value: database1
        - name: USER
          value: user1
        - name: PASSWD
          value: userpass

Ingress YAML

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mydep-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /mydb
        pathType: Prefix
        backend:
          service:
            name: mydb
            port:
              number: 3306
      - path: /wp
        pathType: Prefix
        backend:
          service:
            name: wp
            port:
              number: 80
      - path: /postgres
        pathType: Prefix
        backend:
          service:
            name: postgres
            port:
              number: 5432
      - path: /gogs
        pathType: Prefix
        backend:
          service:
            name: gogs
            port:
              number: 3000
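Applying and verifying the rules above might look like this (assuming the manifest is saved as ingress.yaml):

```shell
kubectl apply -f ingress.yaml
kubectl get ingress mydep-ingress
kubectl describe ingress mydep-ingress   # shows the backend chosen for each path
```

Note that the /mydb and /postgres paths point at database ports that speak their own TCP protocols rather than HTTP, so those rules are mainly illustrative; the /wp and /gogs paths are the ones that serve real HTTP traffic.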

Canary Deployment

A canary deployment is a strategy used in software development and release management to roll out new features or versions of an application gradually to a small subset of users before deploying it to the entire user base. This method allows developers to test the new version in a production environment and monitor its performance, functionality, and user feedback without impacting all users. If any issues are detected, they can be addressed before the full rollout.

Key Benefits of Canary Deployment

  1. Risk Mitigation: By exposing only a small portion of users to the new version, the potential impact of bugs or performance issues is minimized.
  2. Incremental Testing: Real-world usage provides more accurate testing and feedback compared to staging or QA environments.
  3. User Feedback: Early feedback from a subset of users can help refine and improve the release.
  4. Easy Rollback: If issues are detected, rolling back the changes is simpler and less disruptive.

How Canary Deployment Works

  1. Deploy a New Version: Deploy the new version of your application alongside the stable version.
  2. Route a Small Percentage of Traffic: Gradually route a small percentage of user traffic to the new version.
  3. Monitor Performance and Metrics: Continuously monitor the performance, error rates, and user feedback of the new version.
  4. Gradual Rollout: If the new version performs well, incrementally increase the traffic directed to it until all traffic is using the new version.
  5. Rollback if Necessary: If any significant issues are detected, quickly roll back to the stable version.

Let’s See the Canary Deployment in Practice

Set up the ingress-nginx controller on the Kubernetes cluster and check that all of its resources are created successfully.

Create two Deployments whose pods listen on the same port (80), along with a single Service whose target port is 80. To assign one Service to two Deployments, give both Deployments' pod templates the same label and use that label in the Service's selector.
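The shared-label pattern described above can be sketched as follows (deployment names, the httpd image, and the track label are illustrative assumptions, not taken from the article's screenshots):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      track: stable
  template:
    metadata:
      labels:
        app: web         # shared label: matched by the Service
        track: stable
    spec:
      containers:
      - name: httpd
        image: docker.io/httpd
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web         # same shared label
        track: canary
    spec:
      containers:
      - name: httpd
        image: docker.io/httpd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web             # selects pods from BOTH Deployments
  ports:
  - port: 80
    targetPort: 80
```

With this approach the traffic split is roughly proportional to the replica counts (here 3:1, about 75/25), since the Service load-balances across all matching pods.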

Create an index.html file with some distinguishing content, scale the Deployment that acts as the canary, and copy the index.html file into the canary pods using the kubectl cp command.
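The copy step might look like this (the pod name is a placeholder from kubectl get pods, and the web-root path assumes an Apache httpd image):

```shell
# Create a distinguishing page for the canary pods
echo "<h1>canary version</h1>" > index.html

# Copy it into a canary pod's web root; repeat for each canary pod
kubectl cp index.html <canary-pod-name>:/usr/local/apache2/htdocs/index.html
```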

Check the output by curling the Service IP and port in a for loop; out of 10 requests, observe which Deployment's pods respond.
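A quick way to see the traffic split (the Service ClusterIP is a placeholder):

```shell
# Send 10 requests; the responses alternate between stable and canary pods
for i in $(seq 1 10); do
  curl -s http://<service-cluster-ip>:80/
done
```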

Now create an Ingress for this canary setup. Since a single Service fronts both Deployments, define one rule with one path pointing to that Service.

Now curl the host IP with the Ingress controller's NodePort but without a path; since no rule matches, it throws a 404 Not Found error.

Provide the path defined in the Ingress rule and confirm that load balancing through the Ingress is happening.

Check the Ingress load-balancing output with a curl command in a for loop.

These are the canary deployment YAML files: the Service manifest together with the Ingress rule manifest.

We can see the canary load balancing in action via the Service NodePort.

Refresh, and the output changes between the two versions!

Conclusion

Implementing canary deployments using Ingress in Kubernetes is a highly effective method for managing and controlling the rollout of new application versions. By leveraging the capabilities of Ingress controllers and advanced traffic routing mechanisms, organizations can deploy updates in a safe, incremental manner, significantly reducing the risk of widespread disruptions.

Keep Learning, Keep Growing, and Keep Exploring!

#DevOps #AWS #Kubernetes #K8s #Wordpress #Mysql #Gogs #Postgres #LoadBalancing #Ingress #nginx #Canary #deployment

That’s all for today! Thank you for your valuable time!


Anirudhadak

DevOps Enthusiast | AWS | Docker | Kubernetes | OpenShift | Linux | Git | Jenkins | CI/CD |