Taints and Tolerations in Kubernetes

Steven Hough
Published in Technology Hits · 4 min read · Jun 14, 2021

Node Affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite: they allow a node to repel a set of pods.

Taints and Tolerations

Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Tolerations are applied to pods and allow them to be scheduled onto nodes with matching taints. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints can be applied to a node; this marks that the node should not accept any pods that do not tolerate those taints.

Image by mosquito1 from Pixabay

Let’s understand with a mosquito & human analogy.

Assume a person doesn’t want a mosquito to bite him, so he applies mosquito repellent. Wherever he applies the repellent, that part of his body becomes “tainted”. The mosquito cannot tolerate this type of taint, so whenever it comes close to the tainted body part, it will fly away. So, the body here is “tainted” and the mosquito is “non-tolerant”.

However, if another type of bug that is tolerant of the applied taint (repellent) comes along, it will be able to sit on his body. In this case, the body is “tainted” and the other bug is “tolerant”. With this analogy, we can understand that some nodes are tainted to repel non-tolerant pods: those pods will not be scheduled onto such nodes, while other pods that tolerate the tainted worker node will be able to run on it.

In the above scenario, the node is the person on which the bug wants to sit, and it is what gets tainted. The pod is the bug/mosquito itself, and it is what is made tolerant.

Let’s understand with the figure given below:

The following figure shows how taints and tolerations work. Assume we have dedicated resources on node one for a particular application, so we would like only pods that belong to this application to be placed on node one. First, we prevent all pods from being placed on the node by applying a taint to it; let’s call it red.

By default, pods have no tolerations, which means that unless specified otherwise, none of the pods can tolerate any taint. So, in this case, none of the pods can be placed on node one, as none of them can tolerate the taint red. This prevents unwanted pods from being placed on the node.

Next, we have to enable certain pods to be placed on this node. For this, we must specify which pods are tolerant of this particular taint. In this situation, we add a toleration to the pod, making it tolerant of red. Now, when the scheduler tries to place this pod, node one can accept it, since the pod tolerates the taint red. With all the taints and tolerations in place, this is how the pods would be scheduled. Therefore, it is important to remember that taints are set on nodes and tolerations are set on pods.
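The taint on node one can also be expressed declaratively in the node object itself. Below is a minimal sketch; the node name node01 and the key-value pair app=red are placeholder assumptions for illustration:

```yaml
# Node manifest fragment. The taints list below is equivalent to running
# `kubectl taint nodes node01 app=red:NoSchedule`.
# "node01" and "app=red" are hypothetical names for this example.
apiVersion: v1
kind: Node
metadata:
  name: node01
spec:
  taints:
  - key: "app"
    value: "red"
    effect: "NoSchedule"
```

Whether you taint nodes imperatively with kubectl or declaratively in a manifest, the scheduler reads the same `.spec.taints` field when deciding placement.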

Adding a taint using the kubectl taint command has the following syntax:

kubectl taint nodes nodename key=value:taint-effect

If we break down the above syntax:

  • taint: the subcommand that applies taints to nodes
  • nodes: the resource type being tainted (the set of worker nodes)
  • nodename: the name of the specific worker node on which the taint is to be applied
  • key=value: a key-value pair that identifies which kind of pods this node should accept
  • taint-effect: defines how pods that are not tolerant of the taint will be treated

The effects are as follows:

  1. NoSchedule — pods will not be scheduled on the node
  2. PreferNoSchedule — the system will try to avoid placing a pod on the node, but it is not guaranteed
  3. NoExecute — new pods will not be scheduled on the node, and any existing pods that do not tolerate the taint will be evicted

Such pods may have been scheduled on the node before the taint was applied.
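For the NoExecute effect, a toleration can additionally bound how long an already-running pod stays on the tainted node via the optional tolerationSeconds field. A minimal pod-spec sketch, where the key-value pair app=red is a placeholder assumption:

```yaml
# Pod spec fragment: this pod tolerates the NoExecute taint app=red,
# but only for 3600 seconds after the taint is added; after that it is
# evicted. Omitting tolerationSeconds means the pod stays bound forever.
tolerations:
- key: "app"
  operator: "Equal"
  value: "red"
  effect: "NoExecute"
  tolerationSeconds: 3600
```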

kubectl taint nodes node1 app=red:NoSchedule

According to the above command, node1 is tainted with the key-value pair app=red and the taint effect NoSchedule. This means no pod will be scheduled on node1 unless it has a matching toleration. Together, these features give software engineers a vast amount of flexibility in scheduling workloads on a Kubernetes cluster. By tainting a node and adding a toleration and node selector, users can target their workloads to specific hardware, keep certain workloads from interfering with each other, or serve any number of other use cases.
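A toleration matching that taint is declared in the pod’s spec. Below is a minimal sketch; the pod name and the nginx image are assumptions chosen for illustration:

```yaml
# Pod with a toleration matching the taint app=red:NoSchedule, so the
# scheduler is allowed to place it on node1; pods without this
# toleration are repelled by the taint.
apiVersion: v1
kind: Pod
metadata:
  name: red-app-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "red"
    effect: "NoSchedule"
```

Note that the toleration only permits scheduling on the tainted node; to guarantee the pod lands there, combine it with a node selector or node affinity.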

Steven Hough

Software Engineer and Blogger: Code Creator and Word Weaver