⎈ A Hands-On Guide to Kubernetes QoS Classes 🛠️
⇢ Understanding Quality of Service Classes in Kubernetes: A Practical Example
In Kubernetes, managing resources efficiently is crucial for optimizing the performance and stability of applications. One key aspect of resource management is Quality of Service (QoS), which helps prioritize resource allocation among Pods running on a node. In this article, we’ll delve into the concept of QoS in Kubernetes, exploring its importance and how it’s implemented.
Prerequisites
- Kubernetes Cluster
- Kubectl configured
What is Quality of Service (QoS) in Kubernetes?
Quality of Service (QoS) in Kubernetes is a mechanism for prioritizing resource allocation among Pods based on their resource requirements and usage.
Kubernetes defines three QoS classes for Pods:
- Guaranteed
- Burstable
- BestEffort
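Once Pods are running, the class Kubernetes assigned to each one is recorded in the Pod status. One quick way to inspect it (a sketch using `kubectl`'s built-in `custom-columns` output, assuming a working cluster context) is:

```shell
# Print every Pod in the current namespace alongside its assigned QoS class,
# which the kubelet records under .status.qosClass
kubectl get pods -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass
```

We'll use this same field later to verify the Pods we deploy in the practical example.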
1. Guaranteed Pods:
Guaranteed Pods have both resource requests and limits specified, with requests equal to limits. These Pods are assured the resources they request and are the last to be evicted when a node runs short of resources. They provide a predictable environment for applications that need a specific amount of resources to function properly.
For a Pod to be given a QoS class of Guaranteed:
- Every Container in the Pod must have a memory limit and a memory request.
- For every Container in the Pod, the memory limit must equal the memory request.
- Every Container in the Pod must have a CPU limit and a CPU request.
- For every Container in the Pod, the CPU limit must equal the CPU request.
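One convenience worth knowing: if a Container specifies only limits, Kubernetes defaults its requests to the same values, so a fragment like the one below (a partial container spec, not a complete manifest) still satisfies the Guaranteed criteria:

```yaml
# Only limits are set; Kubernetes copies them into requests automatically,
# so requests == limits and the Pod is classed as Guaranteed.
resources:
  limits:
    memory: "128Mi"
    cpu: "100m"
```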
Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "100m"
```
2. Burstable Pods:
Burstable Pods have at least one resource request or limit specified, but do not meet the criteria for Guaranteed. They can consume resources beyond their requests for short bursts, up to their limits (when set). These Pods may be evicted if the node experiences resource contention, though only after BestEffort Pods.
A Pod is given a QoS class of Burstable if:
- The Pod does not meet the criteria for QoS class Guaranteed.
- At least one Container in the Pod has a memory or CPU request or limit.
Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
3. BestEffort Pods:
BestEffort Pods do not specify any resource requests or limits. They are the first to be evicted when the node runs out of resources. These Pods are suitable for non-critical workloads or tasks that can adapt to varying resource availability.
For a Pod to be given a QoS class of BestEffort:
- The Containers in the Pod must not have any memory or CPU limits or requests.
Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
```
Practical Example
Let’s deploy Pods of all three QoS classes.
Step 1: Deploy Guaranteed Pod
Create a file named guaranteed-pod.yaml with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "100m"
```
Apply the Pod manifest:
```shell
$ kubectl apply -f guaranteed-pod.yaml
```
Step 2: Deploy Burstable Pod
Create a file named burstable-pod.yaml with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
```
Apply the Pod manifest:
```shell
$ kubectl apply -f burstable-pod.yaml
```
Step 3: Deploy BestEffort Pod
Create a file named besteffort-pod.yaml with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
```
Apply the Pod manifest:
```shell
$ kubectl apply -f besteffort-pod.yaml
```
Step 4: Increase Resources
Let’s observe the eviction order of Pods when a worker node is under resource pressure. We’ll start by watching the Pods to monitor their eviction sequence, then run a stress-ng process on the worker node to create that pressure (either from inside a Pod or directly on the node over SSH). Since I’m working with a single-node cluster, all of the load lands on the only available node.
Here’s what we’ll do:
1. Watch the pods to monitor the eviction order:
```shell
$ kubectl get pods -w
NAME             READY   STATUS    RESTARTS   AGE
besteffort-pod   1/1     Running   0          49s
burstable-pod    1/1     Running   0          49s
guaranteed-pod   1/1     Running   0          48s
```
2. In a new terminal, SSH to the worker node and run the commands below to create resource pressure:
```shell
$ apt-get update && apt-get install -y stress-ng
$ stress-ng --cpu 1
```
Or you can use the manifest below to create a stress-ng Pod:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: high-cpu-utilization-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cpu-utilization-app
  template:
    metadata:
      labels:
        app: cpu-utilization-app
    spec:
      containers:
      - name: cpu-utilization-container
        image: ubuntu
        command: ["/bin/sh", "-c", "apt-get update && apt-get install -y stress-ng && while true; do stress-ng --cpu 1; done"]
        resources:
          limits:
            cpu: "2"
          requests:
            cpu: "1"
```
As we observe the pod eviction process, we’ll notice that Kubernetes first terminates Pods with the BestEffort QoS class, followed by Burstable Pods. Only as a last resort does it evict Guaranteed Pods. This sequence ensures that Pods with higher resource guarantees are preserved as much as possible during resource contention.
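The eviction decisions themselves are recorded as Kubernetes events, so you don't have to rely on the watch output alone. A quick way to review them (a sketch assuming at least one eviction has already occurred in the current namespace) is:

```shell
# List eviction events, oldest first, so the order BestEffort -> Burstable
# -> Guaranteed is visible in the timestamps
kubectl get events --field-selector reason=Evicted --sort-by=.lastTimestamp
```

`kubectl describe pod <name>` on an evicted Pod shows the same reason and message in more detail.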
Cleanup Steps
Run the below commands to cleanup the above setup:
```shell
$ kubectl delete -f guaranteed-pod.yaml
$ kubectl delete -f burstable-pod.yaml
$ kubectl delete -f besteffort-pod.yaml
```
Source Code
You’re invited to explore our GitHub repository, which houses a comprehensive collection of source code for Kubernetes.
We also welcome your feedback and suggestions! If you encounter any issues or have ideas for improvements, please open an issue on our GitHub repository. 🚀
Connect With Me
If you found this blog insightful and are eager to delve deeper into topics like AWS, cloud strategies, Kubernetes, or anything related, I’m excited to connect with you on LinkedIn. Let’s spark meaningful conversations, share insights, and explore the vast realm of cloud computing together.
Feel free to reach out, share your thoughts, or ask any questions. I look forward to connecting and growing together in this dynamic field!
Happy deploying! 🚀
Happy Kubernetings! ⎈
