How to deploy a highly available application on Kubernetes
Kubernetes is one of the most widely used container orchestration systems today. Major cloud providers (AWS, Azure, GCP, DigitalOcean) have adopted it and built managed services around it, so it is no longer news to hear Kubernetes, or K8s, mentioned as the way to manage and scale container-based applications.
But using Kubernetes well goes beyond setting up a cluster and deploying pods to it. The resilience and high availability Kubernetes can offer do not come from a single feature; they come from combining several processes and configurations, from deploying an application without downtime to spreading pods so they are properly distributed across nodes. These are the configurations and techniques we will cover in this article:
- Pod Replicas
- PodAntiAffinity
- Deployment Strategy
- Graceful Termination
- Probes
- Resource Allocation
- Scaling
- PodDisruptionBudget
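To give a sense of how these pieces fit together before we go through them one by one, here is a minimal sketch of a Deployment manifest that touches several of them. All names, images, and endpoints (`my-app`, `my-registry/my-app:1.0`, `/healthz`) are placeholders, not part of any real application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                            # placeholder name
spec:
  replicas: 3                             # Pod Replicas: run multiple copies
  strategy:
    type: RollingUpdate                   # Deployment Strategy: replace pods gradually
    rollingUpdate:
      maxUnavailable: 0                   # never drop below the desired replica count
      maxSurge: 1                         # add at most one extra pod during a rollout
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 30   # Graceful Termination: time to finish in-flight work
      affinity:
        podAntiAffinity:                  # PodAntiAffinity: keep replicas on different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: my-registry/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:                 # Probes: only route traffic to ready pods
            httpGet:
              path: /healthz              # placeholder health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:                      # Resource Allocation: help the scheduler place pods
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
```

Scaling (via a HorizontalPodAutoscaler) and PodDisruptionBudget are separate API objects rather than Deployment fields, which is why they do not appear in this manifest; we will look at each of them in their own sections.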
The first technique we will look at is:

