Building Highly Available Applications on Kubernetes — A Basic Administrative Guide

Suhas Chikkanna
3 min read · May 7, 2018


When building your applications, you want them to be highly available. Kubernetes provides a set of features to keep your applications available across node failures, zone failures, and both voluntary and involuntary disruptions. Below we walk through the features that ensure high availability of your application, using the sample nginx Deployment below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80

Congratulations: if you have a workload (Deployment, StatefulSet, etc.) like the one above, you have already made your application highly available. How? Technically speaking, simply by setting the replica count to more than 1, i.e. replicas: 3, as in the Deployment above.

Readiness and Liveness Probes

The readiness probe checks whether the application (or process) in a container is ready to serve requests. Only while the probe succeeds are service requests sent to the container; if the readiness probe fails, traffic is withheld from it. Note that if a pod contains multiple containers, all of them must be marked ready before the pod as a whole is considered ready and eligible to receive requests.

The liveness probe is similar to the readiness probe, except that it checks whether the application (or process) in the container is still alive. If the liveness probe fails a configured number of consecutive times (the failureThreshold, which defaults to 3), the container is restarted. Below is an example of liveness and readiness probes that can be added under each container entry at spec.template.spec.containers in the YAML of a Deployment like the nginx one above.

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

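For reference, this is how the two probes sit inside the container definition of the nginx Deployment. Note that the /healthz path and port 8080 in the snippet above are illustrative placeholders: the stock nginx image exposes no such endpoint, so a plain nginx container would more realistically be probed on / at port 80, as sketched here:

```yaml
# Fragment of spec.template.spec.containers in the nginx Deployment.
# Probing / on port 80, since plain nginx serves its default page there.
containers:
- name: nginx
  image: nginx:1.13
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5
```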
Pod Disruption Budget (PDB)

Create a PodDisruptionBudget (PDB) for your application to keep it highly available across voluntary disruptions in Kubernetes. A PDB specifies the number of replicas that an application can tolerate losing, relative to how many it is intended to have. For example, the nginx Deployment intends to have 3 replicas, since we set spec.replicas: 3. If, alongside this Deployment, we create the PDB named nginx-pdb shown below, then a voluntary disruption (such as a node drain) may delete one pod at a time, but never two, in order to respect the PDB at any given point in time. An example PDB for the above Deployment is as follows:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx

Note: PDBs cannot prevent involuntary disruptions from occurring, but such disruptions do count against the budget.
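A PDB can equivalently be expressed with minAvailable instead of maxUnavailable. For our 3-replica Deployment, the following budget permits the same disruptions as the one above:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  # With replicas: 3, requiring 2 available pods is equivalent
  # to allowing at most 1 unavailable pod (maxUnavailable: 1).
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx
```

minAvailable also accepts a percentage (e.g. "50%"), which is convenient when the replica count is expected to change.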

Pod Affinity and Pod Anti-affinity

Another interesting way to make your application highly available across node, zone, and regional failures is the Pod Affinity and Pod Anti-affinity feature. Pod affinity ensures that a pod is scheduled together with other pods matching a given criterion, whereas pod anti-affinity ensures that a pod is not scheduled together with pods matching the criterion. For instance, suppose you want the pods of a Deployment like the nginx one above spread according to the rule "no two pods with the same labels (in our case app: nginx, as matched by the matchExpressions below) run on the same node". You would then add the following configuration under the spec.template.spec path of the Deployment YAML.

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx
          topologyKey: kubernetes.io/hostname

In the above configuration, the specified topologyKey is what spreads the pods across nodes. A list of common topologyKey values can be found in the Kubernetes documentation on well-known node labels. By choosing a zone or region label as the topologyKey, you can also spread the pods across zones and regions, making your application highly available across zone and regional failures.
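For example, spreading the same pods across zones instead of nodes only requires changing the topologyKey. The label below was the standard zone label on nodes at the time of writing; newer clusters use topology.kubernetes.io/zone instead:

```yaml
# Same anti-affinity rule, but spreading across zones rather than nodes.
# failure-domain.beta.kubernetes.io/zone was the standard zone label
# circa 2018; on current clusters use topology.kubernetes.io/zone.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: failure-domain.beta.kubernetes.io/zone
```

Because this uses preferredDuringSchedulingIgnoredDuringExecution, the scheduler treats the rule as a soft preference; the requiredDuringSchedulingIgnoredDuringExecution form makes it a hard constraint instead.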

Conclusion

In this blog, we discussed how to make an application highly available in Kubernetes using the features/abstractions it provides: liveness/readiness probes, the PodDisruptionBudget (PDB), and Pod Affinity/Anti-affinity.


GKE (Kubernetes) | Kafka | Docker | GCP | Cloud | Senior DevOps Engineer at Shield, India.