This article walks you through setting up a highly available Kubernetes cluster on your own servers. It also covers the hardware requirements and prerequisites needed to run Kubernetes.
This is a generic installation that must be applied to every server you plan to use in the cluster.
One or more machines running one of:
- Ubuntu 16.04+
- Debian 9
- CentOS 7
- RHEL 7
- Fedora 25/26 (best-effort)
- HypriotOS v1.0.1+
- Container Linux (tested with 1800.6.0)
Minimal required memory & CPU (cores)
- The master node needs at least 2GB of memory; a worker node needs at least 1GB.
- The master node needs at least 1.5 CPU cores; a worker node needs at least 0.7 cores.
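If you want to check whether a machine meets these minimums before starting, a quick sketch (Linux-only, since it reads /proc):

```shell
# Print the number of CPU cores and the total memory in GB.
nproc
awk '/MemTotal/ {printf "%.1f GB\n", $2/1024/1024}' /proc/meminfo
```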
To manage your cluster you need to install kubeadm, kubelet and kubectl.
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
- kubectl: the command-line tool to talk to your cluster.
Before installing these packages, a few prerequisites must be completed. Let's go step by step until kubeadm, kubelet and kubectl are installed. These are the steps to follow:
- Configure IP Tables
- Disable SWAP
- Install Docker & configure
- Install Kubeadm-Kubelet & Kubectl
- Create Default Audit Policy
- Install NFS Client Drivers
Step 01: Configure IP Tables
Kubernetes recommends setting net.ipv4.ip_forward to 1, because otherwise traffic may be rerouted incorrectly by bypassing iptables.
$ sysctl -w net.ipv4.ip_forward=1
$ sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
$ sudo sysctl -p /etc/sysctl.conf
The first command applies the setting immediately; the sed and sysctl -p commands make it permanent across reboots.
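To confirm the setting took effect, you can read the value back straight from /proc:

```shell
# Prints 1 when IP forwarding is enabled, 0 when it is not.
cat /proc/sys/net/ipv4/ip_forward
```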
Step 02: SWAP OFF
Swap must be turned off on the server for the kubelet to work properly; the kubelet does not support running with swap enabled.
$ swapoff -a
$ sed -i '2s/^/#/' /etc/fstab
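Note that the sed above blindly comments out line 2 of /etc/fstab, which only works if the swap entry happens to sit on that line. A safer sketch matches the swap entry by content instead; here it is demonstrated on a scratch copy (the file contents are invented for the demo) so the real fstab is untouched:

```shell
# Demo: comment out the swap entry by matching the word "swap".
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```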
Step 03: Install Docker
First, you should update your package list on your OS. Here I’m using Ubuntu.
$ apt-get update
Next, install a few helper packages that make the following steps easier. You can look up what each package does if you are curious.
$ apt-get update && apt-get install apt-transport-https \
ca-certificates curl software-properties-common -y
Next, add Docker's GPG key: download the key and add it to APT's trusted keyring.
GPG helps the open-source world guarantee that software artifacts are genuine and come from who we think they come from.
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
The | is a pipe: it takes the output of one command and feeds it in as the input of the next.
apt-key add adds the downloaded key to the list of trusted package keys.
- You can verify the key by running apt-key list and validating its fingerprint.
Now we can install Docker. First add the stable repository, update the package index and install Docker Community Edition.
$ add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
$ apt-get update && apt-get install -y docker-ce
Then create a daemon.json file and add some configuration to it. We set the storage driver to overlay2, because overlay2 is the preferred storage driver for all currently supported Linux distributions and requires no extra configuration.
We also include log-related configuration and define the maximum size of the log file.
$ cat > /etc/docker/daemon.json <<EOF
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
(The 100m log size limit is just an example value; adjust it to your needs.)
After that, reload the systemd daemon, then restart Docker and enable it at boot.
$ mkdir -p /etc/systemd/system/docker.service.d
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker
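Before restarting Docker it is worth sanity-checking that daemon.json is valid JSON, since a syntax error will stop the daemon from starting. A sketch, run here against a throwaway copy in /tmp (assumes python3 is installed):

```shell
# Validate a daemon.json-style file; json.tool exits non-zero on bad JSON.
cat > /tmp/daemon.json.demo <<'EOF'
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json OK"
```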
Step 04: kubeadm, kubelet and kubectl
In this step we set up the Kubernetes packages.
First, add Google Cloud's GPG key. I already described above what GPG is and why it is used; we do the same here for Google Cloud's package repository.
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Add the source repository:
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Now we can install kubeadm, kubelet and kubectl. Run the commands below to install and configure them correctly.
$ apt-get update
$ apt install kubernetes-cni -y
$ apt-get install kubelet kubeadm kubectl -y
$ apt-mark hold kubelet kubeadm kubectl
$ systemctl daemon-reload
$ systemctl restart kubelet
kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you. If you do not, there is a risk of a version skew occurring that can lead to unexpected, buggy behaviour. However, one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server, but not vice versa. (source: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
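The skew rule can be sketched as a simple minor-version comparison (the version strings are stand-in values for illustration):

```shell
# The kubelet's minor version must not exceed the API server's.
kubelet_ver="1.7.0"; apiserver_ver="1.8.0"   # illustrative values
kubelet_minor=$(echo "$kubelet_ver" | cut -d. -f2)
api_minor=$(echo "$apiserver_ver" | cut -d. -f2)
if [ "$kubelet_minor" -le "$api_minor" ]; then
  echo "skew OK"        # prints: skew OK for the values above
else
  echo "unsupported: kubelet is newer than the API server"
fi
```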
Step 05: Create Default Audit Policy
Audit policy defines rules about what events should be recorded and what data they should include. The audit policy object structure is defined in the
audit.k8s.io API group. When an event is processed, it’s compared against the list of rules in order. The first matching rule sets the “audit level” of the event. The known audit levels are:
- None: don't log events that match this rule.
- Metadata: log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
- Request: log event metadata and request body but not response body. This does not apply to non-resource requests.
- RequestResponse: log event metadata, request and response bodies. This does not apply to non-resource requests.
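As an illustration (my own example, not from the Kubernetes docs), a policy can mix these levels; rules are matched top-down, so the first hit wins:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Secrets are sensitive: record only who touched them, not the payload.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Record full request and response bodies for pod changes.
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods"]
# Everything else: don't log.
- level: None
```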
See the Kubernetes auditing documentation for more details on creating a default audit policy.
$ mkdir -p /etc/kubernetes
$ cat > /etc/kubernetes/audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF
This minimal policy logs every request at the Metadata level.
Create a folder to save audit logs.
$ mkdir -p /var/log/kubernetes/audit
Step 06: Install NFS Client Drivers
An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be "handed off" between Pods. NFS can also be mounted by multiple writers simultaneously. (source: https://kubernetes.io/docs/concepts/storage/#nfs)
$ sudo apt-get update
$ sudo apt-get install nfs-common
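Once nfs-common is installed on the nodes, a Pod can mount an NFS share directly. A sketch (the server address 10.0.0.10 and export path /srv/share are placeholders, not values from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared-data
    nfs:
      server: 10.0.0.10   # placeholder NFS server address
      path: /srv/share    # placeholder export path
```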
I have also put together a shell script that installs everything we need to set up the Kubernetes cluster. Just download the file and run it on each server you are going to use for the cluster.
Thanks for reading!
The next article will show you how to set up a Kubernetes cluster easily.
If you liked it, feel free to clap for this article; that makes me happy. :D :D
Did you find this guide helpful? Make sure to subscribe to my newsletter so you don’t miss the next article with useful deployment tips!