HA Kubernetes with Kubeadm

Nate Baker
Oct 4, 2017


*Updated for 1.11*

This post is geared towards users who are already using Kubeadm to deploy their Kubernetes clusters. If you’re not familiar with Kubeadm, check it out here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

If you’ve referenced this post in the past for 1.7–1.10, a lot has changed with 1.11 and 1.12.

An HA guide has been added to the official Kubernetes documentation: https://kubernetes.io/docs/setup/independent/high-availability/

Background

I’m creating this in hopes that it helps people with their Kubernetes journey. When first starting out, I followed the very popular “Kubernetes The Hard Way” guide: https://github.com/kelseyhightower/kubernetes-the-hard-way. It’s a great starting point for those who want to understand the ins and outs of how Kubernetes operates. While it’s good to know how all of the Kubernetes pieces connect, it can be time-consuming.

Kubeadm

Kubeadm is great for standing up a k8s cluster. However, it does not yet support multi-master deployments out of the box. Native support is being actively worked on, but for now you have to handle high availability yourself. That said, by using kubeadm phases, you can configure a highly available cluster in a few steps.

Creating a multi-master cluster

A few servers, a load balancer, and kubeadm are the only tools needed to set this up. The cluster is configured so that node (kubelet) and kubectl traffic goes through the load balancer. You could achieve this setup on a hosting provider, or on-prem by leveraging HAProxy; you just need something to load balance traffic across the master nodes. Set up each master node, on port 6443, as a backend for your load balancer of choice (see the HAProxy sketch after the address list below).

Let’s assume the following:

etcd 1 address        = 10.0.0.6
etcd 2 address        = 10.0.0.7
etcd 3 address        = 10.0.0.8
master 1 address      = 10.0.0.50
master 2 address      = 10.0.0.51
master 3 address      = 10.0.0.52
load balancer address = 10.0.0.200
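
If you go the HAProxy route, a minimal TCP passthrough config might look like the sketch below (the frontend/backend names and health-check settings are illustrative, not from a production setup):

# /etc/haproxy/haproxy.cfg (sketch)
frontend k8s-api
    bind 10.0.0.200:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 10.0.0.50:6443 check
    server master-02 10.0.0.51:6443 check
    server master-03 10.0.0.52:6443 check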

Step 1 — Deploy your servers

Before beginning, make sure your servers have been deployed with the necessary prerequisites: a container runtime (Docker), kubelet, kubeadm, and kubectl installed, and swap disabled.
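
For reference, on Ubuntu 16.04 the prerequisite install looked roughly like this at the time (a sketch based on the kubeadm install docs; adjust the repository and package names for your distro and version):

# run as root on every node
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y docker.io kubelet kubeadm kubectl
swapoff -a   # kubeadm preflight checks fail with swap enabled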

Step 2 — Set up etcd

Skip this step if an etcd cluster is already available.

There are a few different ways to deploy etcd. This article assumes you’ve followed the kubeadm documentation. If so, the etcd nodes will use the kubelet, Docker, and static pod manifests to form their quorum. If you already have a cluster, make the necessary adjustments to the config in the following steps.

The CA created for etcd will be used later when provisioning the master nodes.
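
Before moving on, it’s worth confirming the quorum from one of the etcd nodes. A quick check, assuming etcdctl with the v3 API and the certificate paths that kubeadm generates (adjust if your layout differs):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  endpoint health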

Step 3 — Kubeadm config for masters

The kubeadm config allows for customization of the certificates and manifests that are needed when deploying Kubernetes. It should include information about the API endpoint, the etcd endpoints/certs, the bootstrap token, and which hostnames/IP addresses should be included in the certificates.

Before you set up the config, generate a token.

kubeadm token generate

Example kubeadm config file:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
api:
  controlPlaneEndpoint: "10.0.0.200:6443"
etcd:
  external:
    endpoints:
    - https://10.0.0.6:2379
    - https://10.0.0.7:2379
    - https://10.0.0.8:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- kubernetes.default
- kubernetes.default.svc.cluster.local
- 10.0.0.200
- 10.0.0.50
- 10.0.0.51
- 10.0.0.52
- master-01-hostname
- master-02-hostname
- master-03-hostname
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: <your token from the step above>
  ttl: '0'
  usages:
  - signing
  - authentication

The etcd certs should have been generated during the etcd cluster setup. Be sure to include the additional SANs for your API server certificate; it needs to include all master addresses AND the load balancer address! Also, take note of the kubeadm apiVersion: the config file may need to change depending on the version of kubeadm and Kubernetes. Save this file as kubeadmcfg.yaml.

Step 4 — Distribute the etcd apiserver client certificates

Before distributing the kubeadm config to the master nodes, you must first distribute the etcd certificates. Without them, the kube-apiserver will not be able to communicate securely with etcd. Be sure the following files exist on the master nodes:

/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    └── ca.crt

They were generated during etcd bootstrapping via:

kubeadm alpha phase certs apiserver-etcd-client --config=/path/to/etcd/kubeadmcfg.yaml
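
One way to copy them over, assuming root SSH access and that the files live on the first etcd node (the hosts and paths here are just the ones from the example above):

# run from the node that holds the etcd CA and the apiserver-etcd-client cert/key
for host in 10.0.0.50 10.0.0.51 10.0.0.52; do
  ssh root@$host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/apiserver-etcd-client.crt root@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/apiserver-etcd-client.key root@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.crt root@$host:/etc/kubernetes/pki/etcd/
done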

Step 5 — Distribute the kubeadmcfg.yaml

Once the etcd certs are in place, the config should be distributed to each master. The file can be placed anywhere, but this article will assume it exists at /tmp/kubeadmcfg.yaml.
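
For example, assuming the same root SSH access as above:

for host in 10.0.0.50 10.0.0.51 10.0.0.52; do
  scp /tmp/kubeadmcfg.yaml root@$host:/tmp/kubeadmcfg.yaml
done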

Step 6 — Generate the master CA

Before running kubeadm init, we need to generate the master certificate authority. It will be used to create and sign the certs and keys for the master nodes. This only needs to be done once! The CA will be reused on the other master nodes.

On the first master node, run the following:

kubeadm alpha phase certs ca --config=/tmp/kubeadmcfg.yaml
kubeadm alpha phase certs front-proxy-ca --config=/tmp/kubeadmcfg.yaml
kubeadm alpha phase certs sa --config=/tmp/kubeadmcfg.yaml

The directory tree should look something like this:

/etc/kubernetes
├── pki
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   └── ca.crt
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

Step 7 — Distribute the master PKI

If the CA was created successfully, you should have a directory with all of the control plane certs and keys in /etc/kubernetes/pki. This directory will need to be distributed to the other two master instances. Sync the pki directory to /etc/kubernetes on the other masters.
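
A quick way to do that, again assuming root SSH access from the first master:

for host in 10.0.0.51 10.0.0.52; do
  rsync -av /etc/kubernetes/pki root@$host:/etc/kubernetes/
done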

Step 8 — Initialize the master control plane

When all PKI certs/keys are in place and etcd is running, the master nodes can be initialized. On each master, run the following:

kubeadm init --config /tmp/kubeadmcfg.yaml

On a successful initialization, you’ll have 3 master nodes that are ready to roll. The /etc/kubernetes directory should look like this:

/etc/kubernetes
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   └── ca.crt
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

Double check the manifests to be sure the kubeadm config was properly read. You should see your external etcd information in the kube-apiserver.yaml manifest.
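
A quick sanity check (the expected output below is illustrative, based on the etcd endpoints in this example):

grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml
# should show something like:
#   --etcd-servers=https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379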

Take note of the join command that is output during initialization. It’s needed when adding worker nodes.

kubeadm join --token <YOUR_KUBE_TOKEN> 10.0.0.50:6443 --discovery-token-ca-cert-hash sha256:89870e4215b92262c5093b3f4f6d57be8580c3442ed6c8b00b0b30822c41e5b3

Step 9 — Set up kubectl

If the above steps are complete, and your load balancer is set up to handle the masters, then you should have a highly available master setup! Alright!

Verify the cluster is working by setting up kubectl to communicate with the load balancer.

Set up kubectl as you normally would, by copying /etc/kubernetes/admin.conf to ~/.kube/config.

Edit ~/.kube/config and change the server address from 10.0.0.50 to the load balancer address, 10.0.0.200.
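
In shell form, that looks roughly like this (the sed assumes admin.conf points at 10.0.0.50, as in this example):

mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
# point kubectl at the load balancer instead of the first master
sed -i 's/10\.0\.0\.50/10.0.0.200/' ~/.kube/config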

Run kubectl get nodes and you should see output similar to:

NAME         STATUS    ROLES     AGE       VERSION
10.0.0.50    Ready     master    1h        v1.11.3
10.0.0.51    Ready     master    1h        v1.11.3
10.0.0.52    Ready     master    1h        v1.11.3

Don’t forget to apply a network overlay, otherwise the node status will display “NotReady”. (I use Weave.)
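
For reference, the Weave Net one-liner from its documentation at the time looked like this (check the current Weave docs before using it):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"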

Step 10 — Join nodes to the cluster

Now that you have a functional master cluster, join some worker nodes!

On any of your workers, run:

kubeadm join --token YOUR_CLUSTER_TOKEN 10.0.0.200:6443 --discovery-token-ca-cert-hash sha256:89870e4215b92262c5093b3f4f6d57be8580c3442ed6c8b00b0b30822c41e5b3

And that’s it! If everything was set up cleanly, you should now have a highly available cluster. Start joining workers, and take over the world!

Closing Thoughts

The kubeadm config file is still in alpha, so things will change as kubeadm matures. If you’re running on a cloud provider, like AWS, you can add the cloud-provider settings to the config file as well. Almost anything you can include in static manifests can be added to the kubeadm config. It’s possible to automate all of this; it just takes a little TLC.
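
For example, a hypothetical AWS addition to kubeadmcfg.yaml might look like the snippet below (a sketch only; running on AWS also requires correct node names, IAM roles, and resource tags, which are outside the scope of this post):

apiServerExtraArgs:
  apiserver-count: "3"
  cloud-provider: aws
controllerManagerExtraArgs:
  cloud-provider: aws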

Hopefully the process of distributing PKI and leveraging a load balancer becomes more streamlined.

This is just one way to achieve HA with kubeadm. I probably missed a few things, so please feel free to make adjustments and leave suggestions in the comments!

Thanks!

Nate
