Setting up a Highly Available Kubernetes Cluster

Kubernetes Advocate
AVM Consulting Blog
5 min read · Jul 12, 2020
HA K8s Cluster

This article describes how to run a single cluster in multiple zones on the AWS platform.

Introduction

Kubernetes 1.2 adds support for running a single cluster in multiple failure zones (AWS calls them “availability zones”). Multizone support is deliberately limited: a single Kubernetes cluster can run in multiple zones, but only within the same region (and cloud provider).

Functionality

When nodes are started, the kubelet automatically adds labels to them with zone information.

Kubernetes automatically spreads the pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures).

With multiple-zone clusters, this spreading behavior is extended across zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority.

When persistent volumes are created, the PersistentVolumeLabel admission controller automatically adds zone labels to them.

The scheduler (via the VolumeZonePredicate predicate) will then ensure that pods that claim a given volume are only placed into the same zone as that volume, as volumes cannot be attached across zones.

Volume limitations

The following limitations are addressed with topology-aware volume binding.

  • StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with pod affinity or anti-affinity policies.
  • If the name of the StatefulSet contains dashes (“-”), volume zone spreading may not provide a uniform distribution of storage across zones.
  • When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass needs to be configured for a specific single zone, or the PVs need to be statically provisioned in a specific zone (see the sketch after this list).
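
For illustration, a single-zone StorageClass for AWS EBS might look like the following. This is a minimal sketch; the class name and zone are assumptions, not part of the original walkthrough:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-us-west-2a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-west-2a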

Walkthrough

We’re now going to walk through setting up and using a multi-zone cluster on both GCE and AWS. To do so, you bring up a full cluster (specifying MULTIZONE=true), and then you add nodes in additional zones by running kube-up again (specifying KUBE_USE_EXISTING_MASTER=true).

Bringing up your cluster

Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage multiple zones, creating nodes in us-west-2a.

AWS:

curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash

This step brings up a cluster as normal, still running in a single zone (but MULTIZONE=true has enabled multi-zone capabilities).

Nodes are labeled

View the nodes; you can see that they are labeled with zone information. They are all in us-central1-a (GCE) or us-west-2a (AWS) so far. The labels are failure-domain.beta.kubernetes.io/region for the region, and failure-domain.beta.kubernetes.io/zone for the zone:

kubectl get nodes --show-labels

The output is similar to this:
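
(Illustrative only; node names, ages, and instance types here are hypothetical, and on an AWS cluster the region/zone labels would read us-west-2 and us-west-2a.)

NAME                     STATUS                     AGE   VERSION          LABELS
kubernetes-master        Ready,SchedulingDisabled   6m    v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
kubernetes-minion-9vlv   Ready                      6m    v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-87j9   Ready                      6m    v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9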

Add more nodes in a second zone

Let’s add another set of nodes to the existing master, running in a different zone (us-central1-b or us-west-2b).

We run kube-up again, but by specifying KUBE_USE_EXISTING_MASTER=true, kube-up will not create a new master; it will reuse the existing one.

On AWS we also need to specify the network CIDR for the additional subnet, along with the master internal IP address:

KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh

View the nodes again; 3 more nodes should have launched and be labeled with us-west-2b (or us-central1-b on GCE):

kubectl get nodes --show-labels

The output is similar to this:
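
(Illustrative only; names and ages are hypothetical. The point is that the newly added nodes carry the second zone's label:)

NAME                     STATUS   AGE   VERSION          LABELS
kubernetes-minion-9vlv   Ready    16m   v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-281d   Ready    2m    v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d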

Volume affinity

Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity):
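
For example, a claim along these lines triggers dynamic provisioning of an EBS volume (or GCE PD) in one of the cluster's zones. This is a minimal sketch: the name claim1 and the 5Gi size are illustrative, and a default StorageClass is assumed to be configured:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF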

Now let’s validate that Kubernetes automatically labeled the PV with the zone and region it was created in.

kubectl get pv --show-labels

The output is similar to this:
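
(Illustrative only; the PV name is auto-generated and will differ. Note the region and zone labels added by the admission controller:)

NAME           CAPACITY   ACCESSMODES   STATUS   CLAIM            REASON   AGE   LABELS
pv-gce-mj4gm   5Gi        RWO           Bound    default/claim1            46s   failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a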


So now we will create a pod that uses the persistent volume claim. Because GCE PDs / AWS EBS volumes cannot be attached across zones, this means that this pod can only be created in the same zone as the volume:
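
A minimal sketch of such a pod follows (the name mypod matches the describe command below; the claim name claim1 and the nginx image are assumptions carried over from the claim example above):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: claim1
EOF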

Note that the pod was automatically created in the same zone as the volume, as cross-zone attachments are not generally permitted by cloud providers:

kubectl describe pod mypod | grep Node

Node: kubernetes-minion-9vlv/10.240.0.5

And check node labels:

kubectl get node kubernetes-minion-9vlv --show-labels

NAME                     STATUS   AGE   VERSION          LABELS
kubernetes-minion-9vlv   Ready    22m   v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv

Pods are spread across zones

Pods in a replication controller or service are automatically spread across zones. First, let’s launch more nodes in a third zone:

AWS:

KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh

Verify that you now have nodes in 3 zones:

kubectl get nodes --show-labels

Create the guestbook-go example, which includes an RC of size 3, running a simple web app:

find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl apply -f {}

The pods should be spread across all 3 zones:

kubectl describe pod -l app=guestbook | grep Node
kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
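
Illustrative output of the describe command (node IPs beyond the one shown earlier are hypothetical); each pod has landed on a node in a different zone:

Node: kubernetes-minion-9vlv/10.240.0.5
Node: kubernetes-minion-281d/10.240.0.8
Node: kubernetes-minion-olsh/10.240.0.11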

Load-balancers span all zones in a cluster; the guestbook-go example includes an example load-balanced service:

kubectl describe service guestbook | grep LoadBalancer.Ingress

The output is similar to this:

LoadBalancer Ingress: 130.211.126.21

Set the above IP:

export IP=130.211.126.21

Explore with curl via IP:

curl -s http://${IP}:3000/env | grep HOSTNAME

The output is similar to this:

"HOSTNAME": "guestbook-44sep",

Query it again a number of times:

(for i in `seq 20`; do curl -s http://${IP}:3000/env | grep HOSTNAME; done)  | sort | uniq

The output is similar to this:

  "HOSTNAME": "guestbook-44sep",
"HOSTNAME": "guestbook-hum5n",
"HOSTNAME": "guestbook-ppm40",

The load balancer correctly targets all the pods, even though they are in multiple zones.

👋 Join us today !!

Follow us on LinkedIn, Twitter, Facebook, and Instagram

If this post was helpful, please click the clap 👏 button below a few times to show your support! ⬇


Vineet Sharma, Founder and CEO of Kubernetes Advocate. Tech author, cloud-native architect, and startup advisor. https://in.linkedin.com/in/vineet-sharma-0164