“If you stretch beyond your capacity without putting your wisdom into practice, you are bound to break.”
The same applies to our Kubernetes worker nodes. If you try to schedule a heavy workload onto a worker node with low capacity, there is a high probability of that node filling up fast and creating inefficiency in the cluster. So, as a Kubernetes administrator or developer, what should you do?
To understand this scenario, let me paint a visual picture of a hypothetical situation below:
In fig 1.0, there are three worker nodes: NODE 1, NODE 2, and NODE 3.
- NODE 1 is a high-end machine with high storage and memory capacity.
- NODE 2 is a medium-end machine with a medium level of storage and memory capacity.
- NODE 3 is a low-end machine with a low level of storage and memory capacity.
Also, there are three pods:
- POD 1 with a very heavy workload
- POD 2 with a medium-level workload
- POD 3 with the lightest workload
Now, by default, Kubernetes may end up in the situation depicted below in fig 2.0.
As shown in fig 2.0, POD 1 with the heaviest workload may end up being scheduled on NODE 3, which has the lowest capacity, while the pods with smaller workloads may end up on NODE 1 or NODE 2. As an administrator, this is something you may not want to happen.
To handle this kind of uncertainty, Kubernetes offers two concepts:
- Node Selector
- Node Affinity
Let’s get into the details of both solutions.
What Is Node Selector & How Does It Work?
As explained above, you may want a pod with a particular type of workload to be scheduled only onto a particular type of node.
Generally, such constraints are not required, as the scheduler is intelligent enough to automatically make a reasonable placement and avoid putting a pod on a node with insufficient free resources. But there are some instances where you may want more control over which node a pod lands on, for example:
- You may want to ensure that a given pod ends up on a machine with an SSD attached to it.
- Or you may want to co-locate pods from two different services that communicate a lot in the same availability zone.
The node selector is one such mechanism or constraint that we can apply to our pod to ensure it is placed onto a particular type of node. One has to specify the nodeSelector field, which comes under the Pod’s spec definition. This field makes use of key-value pairs. For a pod to be eligible to run on a specific node, the intended node must carry those key-value pairs as labels.
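For instance, for the SSD scenario above, the relevant portion of a Pod spec might look like the sketch below (the pod name and the disktype: ssd label are hypothetical; the label is assumed to already be attached to the target node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod            # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx
  nodeSelector:
    disktype: ssd          # pod is only eligible for nodes labeled disktype=ssd
```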
How to apply nodeSelector?
Step 1: First, label the node.
Get the name of the node. To label the node, first use the following command to list all the nodes you have in your K8s cluster:
$ kubectl get nodes
If you run the above command, you will see at least one node, which will be your master node, as shown below.
In my case, the highlighted area depicts our master node. We can look into the details of this node using:
$ kubectl describe nodes
You can see that minikube, our default master node, already has some pre-defined labels.
Attach Labels To The Node You Want:
Here we only have the master node minikube, so we will label it using the command below:
$ kubectl label node minikube size=large
Once you apply this command, the node with the name minikube will be labeled with the key-value pair size=large.
Let’s view the node labels to see if our label has been applied successfully, by typing the command:
$ kubectl describe nodes
If you carefully observe the above output, the highlighted area shows that our node minikube has been labeled with the key-value pair size=large.
Now that we have labeled our node, it’s time to create a pod that makes use of the nodeSelector field to specify this key-value pair.
Step 2: Create a Pod definition and make use of nodeSelector, as shown below in node-sel-demo.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    size: large
Step 3: Create this pod using the apply command, as shown below:
$ kubectl apply -f node-sel-demo.yaml
This will successfully create the pod, which will be scheduled to run on our minikube master node based on the nodeSelector key-value pair, as shown below.
Let’s verify whether our pod has been scheduled on our master node, using the command below:
$ kubectl get pods -o wide
It can be clearly seen from the highlighted area that our pod nginx has been scheduled on the master node minikube.
Node Selector Limitation:
Now that we have learned how to label a specific node and then use that label to bind a particular pod to that node via nodeSelector (a key-value pair), it is imperative to understand that if you want to run your pod on a node based on conditions like the following:
- size: large or medium
- size: not small
then this kind of logical-expression-based selection cannot be achieved with nodeSelector. For this, one has to use
“ Node Affinity “
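As a quick preview, the two conditions above could be expressed with nodeAffinity roughly as in the sketch below, using the standard In and NotIn operators of the requiredDuringSchedulingIgnoredDuringExecution form (details follow in the next section):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: size
                operator: In       # size must be large or medium
                values:
                  - large
                  - medium
              # for "not small", you would instead use:
              # operator: NotIn
              # values: [small]
```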
We will look into the details of Node Affinity in our next section, where we will cover:
- What is Node Affinity?
- What is Affinity?
- How does Node Affinity work?
- An example Pod definition to understand how nodeAffinity is applied
Till then, keep reading, keep supporting, and if you all are loving my contribution, don’t forget to follow, clap, and share.
Bye. Bye.. and Take care…….