High Performance With Pod Affinity

airasia super app
AirAsia MOVE Tech Blog
5 min read · Apr 29, 2021

By Nalina Madhav C and Sundar Sudandiraraj

As an Online Travel Agent, airasia, the ASEAN super app, collaborates with strategic partners to offer flight itineraries to over 3,000 destinations across 700 airlines.

As product engineers managing inventory from multiple partners, our goal is to co-locate related workloads, improving the performance of our microservices and avoiding poor resource utilization.

Pod affinity is a set of rules that tells the Kubernetes scheduler to place two or more pods with matching labels onto a suitable node: if a node already runs a pod with the matching label, the scheduler can schedule a new pod carrying that label onto the same node.
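As a minimal sketch of the idea (pod names, labels and images here are hypothetical, not from our services), one pod can declare affinity to any pod carrying a given label:

```yaml
# Hypothetical example: "web" carries the label; "cache" asks to run beside it.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: cache
spec:
  affinity:
    podAffinity:
      # Hard rule: schedule "cache" only on a node already running a pod
      # labelled app=web (same node because of the hostname topologyKey).
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
  containers:
    - name: cache
      image: redis
```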

Affinity tuning is useful in scenarios such as the following:

  1. Microservice A consumes more resources (CPU or memory) while microservice B consumes less. Kubernetes lets us define constraints that schedule microservice A onto a node with high resource availability and microservice B onto a node with less.
  2. Two or more microservices may perform best when they run close together. Kubernetes lets us define constraints that schedule two or more microservices onto the same host, zone, etc.
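For the first scenario, the usual mechanism is resource requests, which the scheduler uses to filter out nodes without enough free capacity. A minimal fragment (the values shown are illustrative assumptions, not our production settings):

```yaml
# Hypothetical container spec fragment: the scheduler only places this pod
# on nodes with at least this much allocatable CPU and memory remaining.
resources:
  requests:
    cpu: "500m"     # half a CPU core reserved at scheduling time
    memory: "512Mi"
  limits:
    cpu: "1"        # runtime cap, not used for scheduling decisions
    memory: "1Gi"
```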

There are plenty of other scenarios where affinity rules come in handy.

Two ways to define a rule

  1. requiredDuringSchedulingIgnoredDuringExecution: a hard rule, where the scheduler assigns the pod to a node only if the constraints are met.
  2. preferredDuringSchedulingIgnoredDuringExecution: a soft rule, where if the scheduler cannot satisfy the constraints, the pod is still assigned to a node that does not match them.

Objective

To compare and analyse the performance and resource utilization after applying pod affinity to three microservices across the following topology domains:

  1. Same node
  2. Same zone

In the Flights OTA, the search-aggregator, flights-service and kiwi-repository microservices communicate with each other. Hence, pod affinity rules were applied to these three microservices to improve performance and reduce response latency.

Co-locating microservices within the same host

Add the following to the microservices that need to be co-located.

STEP 1: Add a new label in kustomization.yaml

commonLabels:
  app.kubernetes.io/part-of: "searchxp-pack"

STEP 2: Add Affinity rule in deployment.yaml

affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/part-of
                operator: In
                values:
                  - searchxp-pack
          topologyKey: kubernetes.io/hostname

Note

  1. The key and values within "matchExpressions" should match what's defined in kustomization.yaml.
  2. topologyKey should not be empty.
  3. To co-locate another set of microservices, define a different label to be used by that set.
  4. Line 3 of the snippet will vary depending on the type of rule you wish to apply: hard rule (requiredDuringSchedulingIgnoredDuringExecution) or soft rule (preferredDuringSchedulingIgnoredDuringExecution).
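For reference, the hard-rule form of the same constraint drops the weight and nests the affinity term directly under the rule (a sketch reusing the label above; the scheduler will leave the pod Pending if no matching node exists):

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      # Hard rule: no weight field; the term is a direct list item.
      - labelSelector:
          matchExpressions:
            - key: app.kubernetes.io/part-of
              operator: In
              values:
                - searchxp-pack
        topologyKey: kubernetes.io/hostname
```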

STEP 3: Verify the assignment of pods to the nodes

SSH into the bastion host and run the command below:

kubectl get pods -o wide --selector=app=<service-name> -n <namespace>

Illustration 1.1

From the above image, we can infer that kiwi-repository and flights-service exist on the same nodes, while search-aggregator could not be assigned to the node where the other two services run. The search-aggregator pods were still scheduled to different nodes because of the soft rule (preferredDuringSchedulingIgnoredDuringExecution).

Co-locating microservices within the same zone

STEP 1: Add a new label in kustomization.yaml

commonLabels:
  app.kubernetes.io/part-of: "searchxp-pack"

STEP 2: Add Affinity rule in deployment.yaml

affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/part-of
                operator: In
                values:
                  - searchxp-pack
          topologyKey: topology.kubernetes.io/zone

Note:

  1. The key and values within "matchExpressions" should match what's defined in kustomization.yaml.
  2. topologyKey should not be empty.
  3. To co-locate another set of microservices, define a different label to be used by that set.
  4. Line 3 of the snippet will vary depending on the type of rule you wish to apply: hard rule (requiredDuringSchedulingIgnoredDuringExecution) or soft rule (preferredDuringSchedulingIgnoredDuringExecution).

STEP 3: Verify the assignment of pods to the nodes

SSH into the bastion host and run the command below:

kubectl get pods -o wide --selector=app=<service-name> -n <namespace>

Illustration 1.2

From the above image, we see that only the search-aggregator and kiwi-repository services were scheduled to the nodes of zone b, while all three services were scheduled to the nodes of zone a.

Conclusion

From Illustration 1.1 and Illustration 1.2, we can infer that when node topology (the hostname topologyKey) was applied, we were able to locate at least two different services within the same node, while when zone topology was applied, some nodes had redundant replicas of the same service co-located.
