End-to-End Kubernetes Deployment: EC2 Instances, Docker, Kubernetes Cluster, Metrics-Server, Helm, and Auto-Scaling with Apache Web Server.

Introduction:

This project covers various essential components, starting with setting up the infrastructure using EC2 instances on AWS, containerizing the application with Docker, and creating a Kubernetes cluster for efficient orchestration.

In addition, the deployment includes the installation of the Metrics Server, enabling real-time monitoring of resource usage and performance metrics within the Kubernetes cluster. Helm, a powerful package manager for Kubernetes, is utilized to streamline the deployment process by managing charts and configurations.

Furthermore, the project showcases the implementation of auto-scaling, allowing the Kubernetes cluster to dynamically adjust the number of Apache Web Server replicas based on incoming traffic and resource demands, ensuring optimal performance and resource utilization. This guide provides a detailed, step-by-step approach to deploying and managing a Kubernetes-based application, offering insights into each stage of the process and demonstrating how the use of these technologies enables seamless and scalable application deployment.

Connect With Me:

GitHub : https://github.com/lakshmiyagnanandareddy
Linked-in : www.linkedin.com/in/lakshmi-yagnananda-reddy-mudireddy
Medium: https://medium.com/@mlynreddy
G-Mail : mlynreddy@gmail.com

Launching EC2 instance:

I’m launching an EC2 instance on AWS with the name ‘kubernetes’.

Security Group allowing inbound traffic for SSH, HTTP, HTTPS, and all traffic:

This is the summary of the launched EC2 instance:

Connect to the EC2 instance with PuTTY using the username and private key pair:

# sudo su - — used to switch to the root user.
We need root privileges to run Docker commands.

Docker: Docker is an open-source containerization tool.

# yum install docker -y — This command is used to install Docker.

# systemctl enable docker --now — This command is used to start the Docker service on Linux.

Enable: used to start the service automatically at boot time.

WinSCP:

WinSCP is an open-source tool used to securely transfer files between local and remote computers.

Here I’m transferring the Dockerfile, service.yaml, webdeployment.yaml, webserverAutoScaling.yaml, webserverResourceQuota.yaml, and the helm directory (which contains Chart.yaml, values.yaml, and the templates directory) to the EC2 instance path /home/ec2-user.

This is the Dockerfile of my image:
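Reconstructed from the instructions explained below, it looks like this:

FROM debian
RUN apt-get -y update && apt-get -y install apache2 net-tools
RUN mkdir -p /var/www/html
RUN echo "<h1>Lakshmi reddy's webserver </h1>" > /var/www/html/index.html
RUN echo '<pre style="background-color: lightblue;">' >> /var/www/html/index.html
RUN echo "`ifconfig`</pre>" >> /var/www/html/index.html
RUN echo "ServerName `hostname -I`" >> /etc/apache2/apache2.conf
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]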

Here I’m:

FROM debian — using Debian as the base OS.
RUN apt-get -y update && apt-get -y install apache2 net-tools — updating the repository and installing the Apache web server (apache2) and net-tools.
RUN mkdir -p /var/www/html — creating the html directory inside /var/www.
RUN echo "<h1>Lakshmi reddy's webserver </h1>" > /var/www/html/index.html — adding a heading tag with “Lakshmi reddy’s webserver” to the index.html file.
RUN echo '<pre style="background-color: lightblue;">' >> /var/www/html/index.html
RUN echo "`ifconfig`</pre>" >> /var/www/html/index.html — adding the ifconfig output with a lightblue background to index.html.
RUN echo "ServerName `hostname -I`" >> /etc/apache2/apache2.conf — adding a ServerName entry to apache2.conf, which is necessary to start the Apache server.
EXPOSE 80 — exposing the Apache web server port, 80.
CMD ["apachectl", "-D", "FOREGROUND"] — starting the apache2 web server in the foreground.

I’m creating an image from the Dockerfile.
# docker build -t nandu9948/apachewebserver . — This command is used to create an image from the Dockerfile.

“-t” — stands for tag; it is used to specify the name of the image you are going to create.
“nandu9948/apachewebserver” — is the image name.
“.” — indicates the path of the Dockerfile (the current directory).

Dockerhub :

Docker Hub is a cloud-based registry service provided by Docker that allows you to store, manage, and distribute Docker images.

Now, we have to log in to Docker Hub to push this image:
# docker login — used to log in to Docker Hub.
Username: __________ — enter your Docker Hub username.
Password: __________ — enter your Docker Hub password.

I’m pushing the nandu9948/apacheserver image to Docker Hub:
# docker push nandu9948/apacheserver — used to push the image to Docker Hub.

SETUP K8S Master Node:

By default, the Kubernetes repository is not present in the Linux OS,
so I’m adding the Kubernetes repository:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

After adding the Kubernetes repository:
# sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes — used to install kubelet, kubeadm, and kubectl.

To start kubelet service:

# systemctl enable kubelet --now

# kubeadm config images pull — used to pull the required container images for setting up a Kubernetes cluster, such as the API server, scheduler, and etcd.

# yum install iproute-tc -y — used to install iproute-tc.
# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem — used to initialize the cluster with the pod network range 10.244.0.0/16, ignoring the CPU and memory preflight checks.

After kubeadm init completes, we have to run the following commands so that kubectl can manage this cluster:
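These are the standard post-init commands that kubeadm prints at the end of a successful init; copy them from your own output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config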

We need a CNI plugin to maintain the overlay network.
I’m using Weave Net as the CNI plugin.
To install Weave Net:
# kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

SETUP K8S Slave Node:

# yum install docker -y
# systemctl enable docker --now

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# systemctl enable kubelet --now
# yum install iproute-tc -y

We have to run this command on the master node:

# kubeadm token create --print-join-command — used to get the join command for adding a slave node to the master.

We have to run this join command on the slave node:
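The join command takes roughly this form; the token and hash are generated by your own cluster, so use the exact line printed by the command above:

kubeadm join <master-private-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>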

Run the same commands on slave node 2 as on slave node 1 to set it up.

# kubectl get nodes — used to see the list of k8s nodes.
Here you can verify that the slave node IP (172-31-35-164) appears in the list of Kubernetes nodes.

I’m moving the helmwebserver directory from /home/ec2-user to /root:

We need the metrics-server to get the CPU usage of the pods.

To install metrics-server:
# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Troubleshooting:

An error may occur: “Readiness probe failed: HTTP probe failed with stat…”.

Solution:

# kubectl edit deploy metrics-server -n kube-system
Add “--kubelet-insecure-tls” to the container arguments so that the metrics-server pods can run.
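For reference, the flag goes under the metrics-server container's args in the Deployment spec; a minimal sketch (all the other flags already present in components.yaml stay as they are):

spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --kubelet-insecure-tls
        # ...plus the args that components.yaml already defines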

Install helm:

# wget https://get.helm.sh/helm-v3.12.2-linux-amd64.tar.gz — to download the Helm binary archive.
# tar -zxvf helm-v3.12.2-linux-amd64.tar.gz — to extract the downloaded archive.
# cp linux-amd64/helm /usr/bin — to run the Helm command from any terminal session without specifying the full path to the binary.
# helm version — to check whether Helm is installed successfully.

This is the Helm chart:
I’m sharing this Helm chart on GitHub: https://github.com/lakshmiyagnanandareddy/apachewebserver

ResourceQuota:

A ResourceQuota is used to set and enforce limits on resource usage (such as CPU, memory, and storage) within a specific namespace, enabling administrators to control the allocation and consumption of resources by pods and containers.
# vi webserverResourceQuota.yaml

In the above YAML I have set limits (such as CPU, memory, and pod count) for the default namespace.
Note: we usually provide these values in values.yaml so that the configuration can be edited in one place for all the YAML files.
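A minimal sketch of such a quota (the quota name and the numeric limits are illustrative assumptions; the real values live in the author's webserverResourceQuota.yaml):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: webserver-quota
  namespace: default
spec:
  hard:
    pods: "10"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi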

Kubernetes Service:

A Kubernetes Service is an abstraction that provides a stable IP address and load balancing for a set of pods, allowing external and internal clients to access them without having to know their individual IP addresses.

# vi service.yaml

In the above yaml file, I have exposed a Kubernetes deployment named ‘web-deployment’ using a Service of type ‘NodePort’. The Service is using a selector with labels to identify the pods belonging to the deployment. The Service is exposed on NodePort: 30500 and maps to port: 80 of the pods, allowing external clients to access the web application running in the pods.
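A minimal sketch of that service.yaml (the Service name is an assumption; the selector label matches the app: webserver label defined in the deployment below, and the ports match the values above):

apiVersion: v1
kind: Service
metadata:
  name: webserver-service
spec:
  type: NodePort
  selector:
    app: webserver
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30500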

Horizontal Pod Autoscaler:

HPA automatically scales the number of pod replicas based on CPU utilization.

In the scaleTargetRef section of the Horizontal Pod Autoscaler (HPA) configuration, we specify the kind and name of the Deployment to be scaled. The HPA is configured with minReplicas: 2 and maxReplicas: 5, defining the minimum and maximum number of pod replicas allowed. The HPA calculates CPU utilization to dynamically adjust the number of replicas to meet the demand for the application.
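A sketch of webserverAutoScaling.yaml under these settings, written with the autoscaling/v2 API and the 60% CPU target shown later in the kubectl get hpa output (the HPA name is an assumption; the author's file may use the older autoscaling/v1 targetCPUUtilizationPercentage form instead):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webserver-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60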

Deployment:

A Deployment is a resource object that manages the creation and scaling of pods; it ensures that the desired number of replicas is running at all times.
# vi webdeployment.yaml

In this deployment I’m defining the label app: webserver, the image nandu9948/apacheserver, and containerPort: 80.

Readinessprobe:

A readinessProbe checks whether the pod is ready to serve traffic by performing periodic HTTP checks on the specified path and port. If the checks succeed, the pod remains in the Ready state; otherwise it is marked NotReady and removed from the Service endpoints until it passes again.

In the readinessProbe, I have given the path as “/”, the port as 80, initialDelaySeconds as 60, timeoutSeconds as 600, periodSeconds as 10, and failureThreshold as 1.

initialDelaySeconds: how long to wait after the container has started before running the first probe.
timeoutSeconds: the maximum time in seconds a probe can take before it is counted as a failure.
periodSeconds: how often (in seconds) the probe is run.
failureThreshold: the number of consecutive failures after which the pod is marked NotReady.

Livenessprobe:

A livenessProbe checks whether the container is still healthy by performing periodic HTTP checks on the specified path and port after the pod has started. If the checks fail, Kubernetes considers the container unhealthy and restarts it to maintain the desired state.

In the livenessProbe, I have given the path as “/”, the port as 80, initialDelaySeconds as 50, timeoutSeconds as 500, periodSeconds as 30, and failureThreshold as 1.

Requests:

Requests specify the minimum amount of resources (such as CPU and memory) required to run the pod.
In requests, I’ve given memory as 10Mi and CPU as 50m.

Limits:

Limits specify the maximum amount of resources (such as CPU and memory) that a container is allowed to consume, preventing it from exhausting node resources and affecting other pods running on the same node.

I’ve set the limits as memory: 15Mi and CPU: 90m.
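Putting the pieces above together, a sketch of webdeployment.yaml (the container name and replica count are assumptions; replicas is set to match the HPA minimum, and the probe and resource values are the ones listed above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2              # assumed to match the HPA minimum
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nandu9948/apacheserver
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 60
          timeoutSeconds: 600
          periodSeconds: 10
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 50
          timeoutSeconds: 500
          periodSeconds: 30
          failureThreshold: 1
        resources:
          requests:
            memory: 10Mi
            cpu: 50m
          limits:
            memory: 15Mi
            cpu: 90m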

# vi Chart.yaml

This YAML file contains the chart’s apiVersion, name, description, and version.
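A minimal sketch of such a Chart.yaml (the name, description, and version strings are illustrative assumptions):

apiVersion: v2
name: helmwebserver
description: A Helm chart for the Apache web server deployment
version: 0.1.0
appVersion: "1.0"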

# vi values.yaml

values.yaml — Helm’s best feature: it lets you make changes to all the YAML files from one place (one file).
In this file I’ve defined the configuration values that the chart templates reference.
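A sketch of what such a values.yaml can hold (these keys and values are illustrative assumptions, not the author's exact file):

image:
  repository: nandu9948/apacheserver
  tag: latest
replicaCount: 2
service:
  type: NodePort
  port: 80
  nodePort: 30500
resources:
  requests:
    memory: 10Mi
    cpu: 50m
  limits:
    memory: 15Mi
    cpu: 90m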

# helm install webserver helmwebserver/ — I’m deploying all my Service, Deployment, ResourceQuota, and HPA YAML files using Helm.

Successfully deployed:

Below, we can see the Apache web server running, reachable on the node IP at port 30500.

# helm upgrade webserver helmwebserver/ — used to upgrade the release after editing the chart files.

# kubectl get hpa — used to see the list of Horizontal Pod Autoscalers.

In the TARGETS column we can see the 60% target I defined; if average CPU usage exceeds 60%, the HPA launches additional replicas (up to the maximum of 5).

Conclusion:

Thank you for joining me on this exciting exploration of Kubernetes deployment. I hope this blog post has provided you with valuable insights and inspiration for your own projects. If you have any questions or feedback, feel free to reach out. Happy deploying!

About the Author:

Name: Lakshmi Yagnananda Reddy Mudireddy
Job Role: DevOps Engineer
G-Mail : mlynreddy@gmail.com
GitHub : https://github.com/lakshmiyagnanandareddy
Linked-in : www.linkedin.com/in/lakshmi-yagnananda-reddy-mudireddy
Medium: https://medium.com/@mlynreddy
