Configure Kubernetes on premise

How to install and configure Kubernetes on your own servers and create your custom cluster

Ani Sinanaj
Jan 28

This is the simplest design you can run Kubernetes on (short of minikube). It is not meant for production use, as it does not offer high availability for the cluster. More on the reasons later.

Two words on Kubernetes

Just because it’s getting a lot of hype lately, it doesn’t mean that it is the right technology for your use case or that your company is ready to adopt it.

Kubernetes is not a way to lower costs in itself; in fact you’ll probably spend more on servers. What it does guarantee is high availability for your projects, meaning it will be hard for them to have downtime even if you deploy buggy software. It needs to be configured correctly though.

If you can, opt for a cloud-based K8s solution. They take care of a lot of complicated parts that you’d otherwise have to configure yourself (read: control plane high availability, networking, storage, node autoscaling).

Some situations when it might come in handy:

  • Cluster management

Deploying K8s on a single server may be too much. It is best when you have multiple servers to deploy it on.

The parts that make Kubernetes are:

  • kubeadm or the control plane, made of etcd, the controller manager, the scheduler and the API server

The design I’m explaining here requires about 3 servers: one for kubeadm, and two for kubelet.

Cloud vs On-Premise

All major players offer Kubernetes on their platforms. Microsoft has AKS (Azure Kubernetes Service), Amazon offers EKS (Elastic Kubernetes Service) and Google has GKE (Google Kubernetes Engine).

There are some other providers that offer it too, such as Digital Ocean, which introduced it recently.

Although I’m a fan of AWS, for a K8s cluster I’d suggest Google Cloud because it has the most complete implementation.

Generally, all these providers offer these features out of the box:

  • High availability on the Control Plane, which you don’t even pay for

You can have all of the above with an on-premise solution by combining virtualisation and somewhat more powerful servers while still having a

Design

Infrastructure architecture

Let’s stick with a simple design like this and analyse it. With my knowledge at the time, this is what I had come up with, keeping in mind that the worker nodes could be scaled if necessary.

So there’s a master node where we’ll basically install kubeadm out of the box, and there are 3 nodes dedicated to app deployment.

Master

Configuring the master node is very simple, especially considering that this part is well documented. You only need to install a few software packages on the host system such as Docker, Kubelet, Kubeadm, Kubectl and their dependencies.

To install Docker, these are the commands that need to be executed. Remember that the officially supported Docker Community Edition version for Kubernetes is 18.06. To find the exact version string among the available versions, run apt-cache madison docker-ce

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

apt-get update
apt-get install docker-ce=18.06.1~ce~3-0~debian

The following commands install the Kubernetes components.

apt-get update && apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

The above snippets should be executed on all the servers.

There are some other configuration requirements for Kubernetes.

# The swap should be turned off
swapoff -a

# Remove the line that contains swap from /etc/fstab
nano /etc/fstab

# Check that all servers have different MAC addresses
ip link

# Check that all servers have a different product UUID
sudo cat /sys/class/dmi/id/product_uuid

The only thing left to do is to bring up the master, which is done with the simple command kubeadm init. Don’t execute this yet though.

Networking

Kubernetes has some core components. These components are standard, but for the whole system to work some third-party components are needed. These pieces communicate with each other through interfaces. For example, the container engine used isn’t necessarily Docker; it can be any software implementing the Container Runtime Interface, or CRI.

Networking is a difficult concept in this context. Consider the diagram above again. We have 3 servers where eventually our applications will go.

Example of 2 containers on different servers

Let’s say I want to deploy a database (MySQL) and a php application on my cluster. Kubernetes puts them wherever it thinks they fit best. This means we can have all sorts of combinations, like having the database on Node 1 and the application on Node 2. Now my php container needs to communicate with the mysql container, but it isn’t on the same server. We need a way to make it seem to my containers that they’re on the same local network and can communicate with each other. For this purpose Kubernetes exposes the Container Network Interface (CNI).

Network namespaces

The CNI is supposed to create a virtual network on each server node and make it possible for each container to reach the others in the same IP space, such as 10.244.0.0/16. When a container tries to reach another container, it will ping an IP such as 10.244.0.1; the CNI will translate this IP to the node IP and then to the container address inside that node.

Available CNIs offer different features, but generally they add some overhead to networking. It is possible to define your own networking through iptables rules; this isn’t difficult and doesn’t add overhead, but it’s a manual job that is manageable only if you have a few servers.

The documentation is quite well written and there are many articles explaining this issue with clusters. Here is a well-written article that explains networking in more detail: https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727

This is a great article that classifies all the CNIs and analyses their features: https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-36475925a560

One of the simplest to use is Flannel; it offers basic connectivity features and doesn’t add much overhead.

Now that we have some knowledge of networking, we’re ready to execute kubeadm init, adding to it the Pod network space through the flag --pod-network-cidr=10.244.0.0/16. If you choose another CNI, you have to check its documentation for which network space to give it; for example, Calico uses 192.168.0.0/16.

# - Initialise the Kubernetes master with a
#   network space ready for Flannel
# - Add --dry-run and write the output to a file to get
#   an idea of what configuration the installation will use
kubeadm init --pod-network-cidr=10.244.0.0/16 --dry-run >> init.log

# Finally run the following to initialise the node
kubeadm init --pod-network-cidr=10.244.0.0/16

Worker Nodes

Technically the Kubernetes cluster is up, but it only has a master server. What’s missing are the worker nodes. The output of the kubeadm init command will show you the join command, which can be used on any server exposed on the internet as long as it has kubeadm, kubelet, kubectl and docker installed.

If for some reason you cleaned the command output and don’t have the join command, you don’t need to panic. It can be generated again.

kubeadm token create --print-join-command

# The output will be something like below
kubeadm join 10.10.10.1:6443 --token vpo60f.pk0x3fhrnzhvr2sy --discovery-token-ca-cert-hash sha256:8d55ca78560cb926d2953e14042dd3de40137234a4b51698894db0f43849aa97

It’s as easy as that: running the command on the other nodes will set them up. We can monitor the process like this.

watch -n 1 kubectl get nodes -o wide

The command below instead will show the state of the components.

watch -n 1 kubectl get pods -o wide --namespace kube-system

There you might notice a couple of pods named coredns that don’t seem to be working. That’s because we told Kubernetes to get the network space ready for Flannel by adding --pod-network-cidr=10.244.0.0/16 but didn’t actually “install” Flannel yet.

The installation of Flannel (or any other CNI) generally means deploying it to the cluster.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
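
Once Flannel is running, a quick way to see the flat Pod network described earlier is to check the Pod IPs and ping across nodes. This is only a sketch: the busybox test pod and the target IP are placeholders, so substitute an address you actually see in the output.

# Pod IPs should fall inside the 10.244.0.0/16 range once Flannel is up
kubectl get pods -o wide --all-namespaces

# Start a throwaway container and ping a Pod on another node
# (10.244.1.2 is only an example address, replace it with a real one)
kubectl run net-test --image=busybox --restart=Never -- sleep 3600
kubectl exec net-test -- ping -c 3 10.244.1.2
kubectl delete pod net-test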

Done… Not yet

So the cluster is up and running. Trying to deploy an application, like the one I mentioned before, wouldn’t work though. There’s another problem with having an application living on multiple servers, and that is storage.

Generally all applications work with files: some get generated, some get deleted, and a few need to be saved persistently, like the files a database generates.

Out of the box, Kubernetes gives you the possibility to save files in the Pod context, which means that other Pods can’t access them, but containers in the same Pod can. It also means that the files remain if the Pod restarts, but they will be deleted if the Pod itself is deleted. Thus we need a better way to save files. We have two requirements: the files need to be accessible by multiple Pods, possibly on different servers, and they need to be persistent.
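
That Pod-scoped behaviour corresponds to an emptyDir volume. As a rough sketch (the Pod name, image and paths are placeholders, not from the article), this shares a directory between two containers in the same Pod, but the data is gone as soon as the Pod is deleted.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /scratch/hello && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}            # lives exactly as long as the Pod does
EOF

# The second container sees the file written by the first one...
kubectl exec shared-scratch -c sidecar -- ls /scratch

# ...but deleting the Pod deletes the data with it
kubectl delete pod shared-scratch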

Design of a deployment

Not focusing on how good the application design is, let’s see what the problems of this deployment are.

So I told Kubernetes to deploy a php application that generates pdf files, stores the “generated” status in the database and then renders them. I also deployed a MySQL database. I said: I want 2 replicas of the php application and 1 of my database.

Now consider the php application generating a file, storing to the database the fact that it generated that particular file and then rendering it.

When I ask for that pdf file again, but this time I hit the other php container in the second Pod, the application will check the database, which will say that the file has already been generated; but when php looks on the filesystem, it won’t find anything.

This issue is not easy to explain, and you only realise its gravity when deploying an application and scaling it up.
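
A minimal way to see it for yourself, assuming a hypothetical Deployment called pdf-app with 2 replicas (the name, label and path are placeholders): write a file in one replica and look for it in the other.

# Grab the names of the two php replicas (label app=pdf-app is hypothetical)
POD1=$(kubectl get pods -l app=pdf-app -o jsonpath='{.items[0].metadata.name}')
POD2=$(kubectl get pods -l app=pdf-app -o jsonpath='{.items[1].metadata.name}')

# Generate a file in the first replica...
kubectl exec "$POD1" -- touch /var/www/html/generated/test.pdf

# ...and look for it in the second one: it isn't there, because each
# container writes to its own filesystem
kubectl exec "$POD2" -- ls /var/www/html/generated/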

To solve this, Kubernetes came up with something called CSI, the Container Storage Interface.

Storage

When an application defines storage in Kubernetes, it asks for a PVC, which stands for Persistent Volume Claim. It’s like the application telling the cluster: “I need this much persistent storage.” When this happens, Kubernetes will activate a storage provisioner and allocate that storage to the application. There may be different storage provisioners, and for each provisioner there will be a Storage Class.
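
As a sketch, a claim looks roughly like this; the name, size and access mode are placeholders, and to be bound it needs a Storage Class whose provisioner can actually serve the request (ReadWriteMany matches the requirement of several Pods on different nodes, but not every provisioner supports it).

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-files
spec:
  accessModes:
    - ReadWriteMany          # mountable by several Pods on different nodes
  resources:
    requests:
      storage: 5Gi
  # storageClassName: my-storage-class   # omit it to use the default class
EOF

# Check whether a provisioner has bound the claim to a volume
kubectl get pvc app-files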

This makes sense because there are different kinds of storage and thus different storage provisioners. Each one will create its own kind of storage.

There must always be a default Storage Class, so that application deployments don’t always have to specify one.
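
As a quick sketch (the class name is a placeholder), you can check which classes exist and mark one as the default with the standard annotation:

# The default class is flagged with "(default)" in this output
kubectl get storageclass

# Mark a class as the default one
kubectl patch storageclass my-storage-class \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'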

To allocate the storage, the provisioner has a few options such as being bound to a file server like Ceph, GlusterFS or others.

Ceph and GlusterFS though, are clusters of their own. They can be installed on the same servers where the Kubernetes cluster is running or on other servers completely. The second option is strongly advised.

I won’t go too deep into the storage servers in this article, but consider two things when it comes to storage.

  • Persistent means persistent. The software I mentioned above offers replication.

I’ll eventually write another article to explain better how to set up a storage cluster.

Production

This cluster is not yet production ready for three reasons.

  • Persistent Storage configuration

Special thanks to Adriano Pezzuto for his support.

Thanks for reading this far and stay tuned for more fun articles :)
