Installing Kubernetes from binaries: Pt.1 — Preparing your cluster


by Dmitry Krasnov

Hi!

We are starting a series of articles on how to install Kubernetes from binaries in your cluster. We are going to install the Kubernetes package from the Containerum repository. This is a production-ready Kubernetes package that has been rigorously tested by our engineers over a series of iterations. Now that it is stable we can finally share it with the community!

This guide is organized as a series of step-by-step instructions that will allow you to install Containerum Kubernetes and other components necessary for a high-performance cluster.

Note: there are surely other ways and other components you can use to set up a Kubernetes cluster, but this is the set of instructions we have followed ourselves and can vouch for with confidence: it works!

In short: if you follow this guide, you will get a production-ready cluster (we promise).

But before proceeding to installation, let’s discuss VM requirements and recommended cluster configuration.

Hardware recommendations

When it comes to selecting the number of nodes, consider the purpose of the cluster. While you can use a single node to set up a Kubernetes cluster in demo mode for yourself, we recommend at least 3 nodes; that is the configuration we will use in this guide. You can use virtual or physical machines running Linux on the x86_64 architecture. In this guide we will use CentOS.

To run the apiserver and etcd, a machine with 2 cores and 2GB RAM is enough for a small or medium cluster. Larger or more active clusters may require more cores. Worker nodes should have enough resources to host your applications and may have different configurations. Note: each node must have a unique hostname.

etcd hardware recommendations

You will need to launch one or more instances of etcd. It is strongly recommended to run an odd number of instances: a 3-instance cluster keeps quorum while 2 instances are alive, a 5-instance cluster while 3 are alive, a 7-instance cluster while 4 are alive, and so on.
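
In general, quorum is floor(n/2) + 1, which means a cluster of n members tolerates floor((n-1)/2) failures. This is why even-sized clusters buy you nothing: they tolerate no more failures than the next smaller odd-sized cluster, while adding one more machine that can fail.

Instances   Quorum   Failures tolerated
3           2        1
4           3        1
5           3        2
7           4        3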

CPU

etcd deployments are usually not CPU bound, so two to four cores should do. For heavily loaded deployments that serve many requests from memory, eight to sixteen cores are recommended.

RAM

For good performance etcd will usually need around 8GB of RAM. For heavy workloads, allocate 16GB to 64GB.

Disks

The slower the disk, the higher the latency, and latency is critical for etcd, since it writes metadata to its on-disk log every second. If these writes take too long, the stability of the whole cluster is threatened. To run an efficient etcd cluster you typically need from 50 sequential IOPS (e.g., a 7200 RPM disk) up to 500 sequential IOPS. Disk bandwidth matters less, but higher bandwidth lets a failed member recover faster. We recommend SSDs.
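
If you want to check whether a disk is fast enough before committing to it, one way is to measure fdatasync latency with the fio benchmarking tool (a sketch, assuming fio is available; on CentOS 7 it may require the EPEL repository). The block size and sync pattern below roughly mimic how etcd appends entries to its write-ahead log:

sudo yum install -y fio   # may require: sudo yum install -y epel-release

# Create a scratch directory on the disk you plan to use for etcd data
mkdir -p test-data

# Write ~22MB in 2300-byte chunks, calling fdatasync after every write
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 --name=etcd-bench

A common rule of thumb is that the 99th percentile of the reported fdatasync durations should stay below roughly 10ms.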

Cluster model

For high availability you will need to decide where to host your etcd cluster. A cluster should be composed of at least 3 members. It is recommended to stick to one of the following models:

  • Hosting etcd cluster on separate VMs.
  • Hosting etcd cluster on the master nodes.

While the first option provides better performance and hardware isolation, it requires additional expenses and maintenance.

Option 1: create 3 virtual machines that conform to the recommendations above. We will refer to them as etcd0, etcd1 and etcd2.

Option 2: just replace etcd0, etcd1 and etcd2 with master0, master1 and master2 in the next steps accordingly.

Configuration examples

Disable SELinux

To install Kubernetes on CentOS it is necessary to disable SELinux.

Open the /etc/selinux/config file and change the SELINUX= value to disabled:

SELINUX=disabled

then reboot:

$ sudo reboot
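
If you prefer to script this step, the same edit can be made with sed, and setenforce stops enforcement immediately so you can keep working until the reboot (the sed pattern assumes the stock CentOS config, where the line reads SELINUX=enforcing):

sudo setenforce 0   # stop enforcing right away; effective until reboot

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

getenforce          # prints Permissive now, Disabled after the reboot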

Set Variables

Throughout this guide we will be using different variables in configuration files. Please read through the list and refer to it whenever you are in doubt about what a variable means.

Also, don’t forget to configure the network and set up the Containerum repo in advance, as described below.

IP addresses

  • KUBERNETES_PUBLIC_IP is the IP address of the Kubernetes load balancer in the public network. In the case of a single node it is equal to the master node’s EXTERNAL_IP value.
  • EXTERNAL_IP is the IP address of an instance in the external network.
  • INTERNAL_IP is the IP address of an instance in the internal network.
  • MASTER_NODES_IP is the list of IP addresses of all master nodes. In the case of a single node it is equal to the master node’s EXTERNAL_IP value.
  • ETCD_NODE_IP is the IP address of an etcd node. With multiple etcd nodes they can be declared as ETCD_NODE_1_IP, ETCD_NODE_2_IP, etc.
  • POD_CIDR is the range of IP addresses allocated to pods.

Hostnames

  • HOSTNAME is the hostname of the node.
  • NODE_NAME is the name of the node. In most cases it is equal to HOSTNAME.
  • ETCD_NAME is the hostname of the instance, on which etcd has been installed.
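
For example, on a single-master demo cluster these variables could be exported in the shell before filling in the config files (hypothetical values; substitute your own addresses):

# Hypothetical values for a single-master demo setup; adjust to your network
export EXTERNAL_IP=192.168.0.4             # public address of this node
export INTERNAL_IP=10.0.10.1               # private address of this node
export KUBERNETES_PUBLIC_IP=$EXTERNAL_IP   # single node: same as EXTERNAL_IP
export MASTER_NODES_IP=$EXTERNAL_IP
export HOSTNAME=master
export NODE_NAME=$HOSTNAME
export ETCD_NAME=etcd0
export POD_CIDR=10.244.0.0/16              # a commonly used pod network range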

Network information

It is necessary to ensure that all cluster hosts can communicate by hostname. It will be sufficient to add the following entries to /etc/hosts on each node:

192.168.0.4 master
192.168.0.5 node-1
192.168.0.6 node-2

Set a separate hostname for each node. For the node with the master role, set:

hostnamectl set-hostname master

Do the same for the worker nodes node-1 and node-2.
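
Once the hosts file and hostnames are in place, a quick sanity check is to ping every node by name from each machine:

for h in master node-1 node-2; do ping -c 1 $h; done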

Configure the network interfaces for public and private networks:

  • public eth0:
BOOTPROTO=none
DEFROUTE=yes
DEVICE=eth0
GATEWAY=192.168.0.1
IPADDR=192.168.0.2
MTU=1500
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
  • private eth1:
BOOTPROTO=none
DEVICE=eth1
IPADDR=10.0.10.1
MTU=1500
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
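
On CentOS 7 these settings go into the interface files under /etc/sysconfig/network-scripts/ (this assumes your interfaces are really named eth0 and eth1; check with ip addr):

# Edit the files for the two interfaces
#   /etc/sysconfig/network-scripts/ifcfg-eth0   (public)
#   /etc/sysconfig/network-scripts/ifcfg-eth1   (private)

# Apply the new configuration
sudo systemctl restart network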

Add Containerum RPM repository

Put this in /etc/yum.repos.d/exonlab.repo:

[exonlab-kubernetes110-testing]
name=Exon lab kubernetes repo for CentOS
baseurl=http://repo.containerum.io/centos/7/x86_64/
skip_if_unavailable=False
gpgcheck=1
repo_gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ExonLab
enabled=1
enabled_metadata=1

GPG package signing key

Add our package signing key to your package manager. Run:

curl -O https://repo.containerum.io/RPM-GPG-KEY-ExonLab
sudo mv RPM-GPG-KEY-ExonLab /etc/pki/rpm-gpg/
sudo chown root:root /etc/pki/rpm-gpg/RPM-GPG-KEY-ExonLab

Key fingerprint: 2ED4 CBD2 309F 2C75 1642 CA7B 4E39 9E04 3CDA 4338
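
Before trusting the key, you can compare its fingerprint with the one above and then refresh the yum metadata (the --with-fingerprint form should work with the GnuPG shipped in CentOS 7):

# Print the fingerprint of the downloaded key; it must match the value above
gpg --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-ExonLab

# Rebuild the yum cache so the new repository and key take effect
sudo yum makecache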

That’s it for today! Now you are ready to install Kubernetes and all components necessary for a production-ready cluster. In the next article we will configure certificates for each node.

Leave your feedback and ask questions, we’ll be glad to help!

Also don’t forget to follow us on Twitter and join our Telegram chat to stay tuned! You might also want to check out our Containerum project on GitHub. We need your feedback to make it stronger: you can submit an issue, or just support the project by giving it a ⭐. Your support really matters to us!

Containerum is your source of expertise in Kubernetes.
