The Ultimate Guide to On-Prem Kubernetes

Pavel Glukhikh · Published in The Startup · Jul 18, 2019 · 17 min read

When I first dove into Kubernetes back in early 2018, I wondered why anyone would use such a complex (and expensive) service when there are many other ways to host applications, such as VMware products and Hyper-V. A short while ago, I was asked to develop a highly available setup across multiple cloud and on-prem environments, and that's when it hit me: if you want the benefits of Kubernetes in an on-prem datacenter environment, there are ways to get them while keeping a highly available environment, cloud-like automation, and best of all, full and absolute control over the entire stack, from the management infrastructure and hardware all the way up to the application and how traffic is routed to it.

There are many guides out there on how to set up Kubernetes: some for AWS, some for Azure, and a few for on-prem deployments. However, most of the on-prem guides I have seen are either out of date, inaccurate, or not optimal for a production environment. In this guide, I will describe, in detail, how to set up a production-ready on-prem Kubernetes environment with automation and all the features Kubernetes has to offer. As always, I welcome comments and suggestions on how to make this setup better. This guide was inspired by a post I read on inkubate. It covers many of the same steps, but more clearly, updated for 7/2019, and fixes a lot of errors (the original does not have the proper steps to set up the cluster as of 2019 due to breaking changes).

Before we get into it, let’s answer the following questions:

  • Why would you want to set up a Kubernetes HA cluster in your on-prem datacenter or hosted systems?
  • What goals do we want to accomplish with this setup?

First, let’s look into why you would want an on-prem Kubernetes cluster:

  • If you already have on-prem virtualization systems in your facility, or own a datacenter and do not want to (or do not want to YET) go to the cloud.
  • If you would like to leverage Docker, CI/CD, microservices, or a managed Kubernetes solution on-prem.
  • Clouds are expensive. If you would like a full, fully manageable Kubernetes solution without spending the large amounts of money most clouds require.
  • If you are deploying a hybrid solution for HA or flexibility.
  • If you would like to have a full-featured Kubernetes sandbox or test environment without having to pay for cloud resources.
  • If you are learning Kubernetes and want to know how it works “under the hood”.

Next, let’s define the goals we want to accomplish with this deployment:

  • Everything from the infrastructure and VMs, to the control plane must have at least N+2 redundancy.
  • The deployment must be both horizontally and vertically scalable with ease and automation.
  • The process for adding new nodes to the cluster must be seamless and single click automated.
  • The existing virtualization environment should be leveraged without having to make complex or breaking changes.
  • The solution should have business continuity in mind, and must be able to be recovered in the event of a disaster.
  • The deployment must meet current Kubernetes and VMware standards.
  • The deployment must work in cloud environments as well as on-prem.
  • The deployment must be federation-ready.

Now, let's get into the actual deployment. Here is the overview; we will perform the deployment in five steps:

  1. Configure and set up the VMware environment and automation.
  2. Use the automation to build the Kubernetes cluster.
  3. Configure the cluster.
  4. Post configuration tasks.
  5. Securing the cluster.

What you will need for this deployment:

The deployment consists of 8 servers:

  • 3 Kubernetes master nodes
  • 3 Kubernetes worker nodes
  • An HAProxy load balancer node
  • A client machine to run the automation and manage the cluster

It’s a good idea to create a key to match the IPs in this guide to your own:

Client machine - 192.168.2.50

Load Balancer - 192.168.2.51

Master 1 - 192.168.2.52

Master 2 - 192.168.2.53

Master 3 - 192.168.2.54

Worker 1 - 192.168.2.55

Worker 2 - 192.168.2.56

Worker 3 - 192.168.2.57

The machines will all run Ubuntu 18.04, though I have gotten this to work on 16.04 as well (with the same steps). The virtual hardware requirements will vary depending on how many resources you want to allocate to your cluster and what the cluster's workload will be. For my environment (which will be production), I gave each master and load balancer node 3 GB of RAM, 4 vCPUs, and an ~80 GB HDD. For each worker, I used 4-6 GB of RAM, 4 vCPUs, and a 120-200 GB HDD. Remember that on most virtualization platforms you can resize disks and add or remove resources as you need to.

Now let's look at the hypervisor. My deployment runs on a VMware vSphere-managed ESXi Enterprise cluster consisting of 9 nodes, 20 TB of storage, 700 GB of RAM, SSD acceleration, graphics co-processors, and about 120 CPUs. Is that overkill for this deployment? Yes. Obviously you don't need all of this to run this deployment. If you are just starting out, I would recommend using a high-end PC, gaming computer, or a small ESXi cluster. Basically, anywhere you can run VMware vSphere Server, you can run this deployment.

Note: vSphere Server is required for the automation part. If you are running just ESXi or some other hypervisor, the deployment will still work, but you will need to set up each node by hand or use a manual VM cloning process. Eventually, I want to make the automated deployment process work cross-platform, but for now, we will be using vSphere Server.

Now, the fun begins. Let's log in to the vSphere UI and start deploying the cluster. This guide assumes that you already have a properly configured VMware vSphere cluster that is ready for VM deployment. If not, look for my upcoming post on how to set one up.

The Client Machine — Install and Configure

First, you will need to create a standard Ubuntu 18.04 server machine. We will use this machine for setting up automation and kubectl. The machine can have minimal specs, but I would recommend at least 1 GB of RAM, 2 CPUs and an 80 GB disk.

When setting up the machine, the only pre-requisite is that it will need access to the network that the Kubernetes nodes will be on. Otherwise, the default install options can be used (hostname, username, etc). For packages, you will only need the standard system utilities and sshd at this time.

It’s assumed that you know how to deploy an Ubuntu machine. If not, have a look here.

Installing Kubeadm Tools

Next up, we need to install the certificate generator (cfssl), kubectl to manage Kubernetes, and Terraform to set up the other VMs.

First, let's install CloudFlare's SSL toolkit (cfssl):

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# Add the executable permission to the binaries
chmod +x cfssl*
# Move everything to /usr/local/bin
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

Once everything is done, let's verify the cfssl version:

cfssl version

If everything was installed correctly, cfssl should print its version information.

Next, let’s install kubectl:

1- Download the binary:

wget https://storage.googleapis.com/kubernetes-release/release/v1.12.1/bin/linux/amd64/kubectl

2- Make it executable:

chmod +x kubectl

3- Move the binary to /usr/local/bin:

sudo mv kubectl /usr/local/bin

4- Verify the version of kubectl (note: with no cluster configured yet, kubectl will complain that it cannot connect to a server; that is expected):

kubectl version
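To check only the client binary without contacting a cluster, you can pass the standard client-only flag:

kubectl version --client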

Installing Automation Tools and Setting up the Template

Next, we will install the automation tools on the client machine and build the template VM that Terraform will use. We will be using HashiCorp Terraform to clone an existing VM template to create the Kubernetes cluster.

Template VM Setup

First, you will need to set up a new Ubuntu 18.04 VM with 4 GB of RAM, a 120 GB HDD, and 4 vCPUs. You can change these specs in the deployment configuration later if you want. The VM should only have the standard system utilities and sshd on it at this time.

The name of the template should be ubuntu-18.04-terraform-template

Note: it is a good idea to create the VM from scratch and run a clean installation. Don’t reuse the client VM.

You can also install any management tools you need on the template VM at this time (snmpd, monitoring agents, etc). Make sure that these do not interfere with Kubelet or etcd services.

Important: you will need VMware tools on all of the VMs, so let’s deploy the package to the template:

1- Download the VMware tools onto your template VM (you have to create a VMware account if you don’t already have one). To make it easier, you can SCP the package to the template VM from your machine.

2- Decompress the archive.

tar xvzf VMware-Tools-core-<version>.tar.gz

3- Mount the VMware Tools ISO (adjust the path if linux.iso ended up somewhere other than /tmp).

sudo mount -o loop /tmp/linux.iso /mnt

4- Extract the VMware Tools installer.

tar xvzf /mnt/VMwareTools-<version>.tar.gz -C /tmp

5- Unmount the VMware Tools ISO.

sudo umount /mnt

6- Remove the open-vm-tools package. We will be replacing it with the full install of VMware Tools.

sudo apt-get remove --purge open-vm-tools

7- Install VMware Tools (leave all the options as default).

cd /tmp/vmware-tools-distrib
sudo ./vmware-install.pl

8- Reboot the machine.

sudo reboot
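When the template comes back up, you can optionally confirm VMware Tools is running before cleaning up (vmware-toolbox-cmd is normally installed by the installer; if your version places it elsewhere, adjust accordingly):

vmware-toolbox-cmd -v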

After reboot, we can clean up the VM:

1- Remove the temporary network configuration.

sudo rm /etc/netplan/50-cloud-init.yaml
# Note: the config file name may sometimes be different.

2- Prevent cloud config from preserving the hostname.

sudo nano /etc/cloud/cloud.cfg
# Find the preserve_hostname line and set it to:
preserve_hostname: true
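If you prefer a one-liner over editing the file by hand, the same change can be scripted. This assumes the stock Ubuntu 18.04 cloud.cfg still contains a preserve_hostname: false line; check the file first if unsure:

sudo sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg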

3- Power off the virtual machine.

sudo shutdown now

Now you should have a fully configured VM that is ready to be converted into a template.

Note: Make sure the VM adapter’s network configuration is set to a network that can access the rest of the Kubernetes nodes.

Installing Automation Tools

Next, let’s go back to the client machine and set up Terraform:

1- Download Terraform from the HashiCorp repo.

wget https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip

2- Unzip the archive.

unzip terraform_0.11.14_linux_amd64.zip

3- Copy the binary into your path.

sudo cp terraform /usr/bin
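Verify that the binary is on your path:

terraform version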

Note: Terraform is multi-platform. In theory, you should be able to do this from a Mac or Windows machine as well.

We are going to use a script developed by sguyennet to clone the VMs.

Note: I absolutely HATE vim. Editing in this tutorial is done via nano.

Let’s set up the script:

1- Clone the script repository:

git clone https://github.com/sguyennet/terraform-vsphere-standalone.git

2- Terraform init:

cd terraform-vsphere-standalone
terraform init

3- Configure the deployment of your HAProxy machine:

nano terraform.tfvars

In here, you will need to set the following:

  • The vSphere server URL, user details, datacenter, etc. (block 1).
  • If you have a self-signed vSphere certificate, set unverified SSL to true.
  • In block 2 (VM Parameters), you will need to fill out the details of your VM. You can customize the virtual hardware at this time as well (see above for spec recommendations).
  • You will need 7 different VM names and IPs. Pick the IP and name you want your HAProxy machine to have at this time.
  • Leave the domain blank unless you have an internal domain name that ALL VMs in the cluster can see.
  • Enter the name you gave to your template VM in the previous step (ubuntu-18.04-terraform-template is the default).
  • Leave linked clone as false unless you know what you are doing.
  • I recommend using sequential IPs for the cluster nodes to make things easier. You will need 8 in total for the entire cluster. Remember that the IPs should be on a network that your client machine can reach (and that your LAN can reach as well, if you want external access to your Kubernetes services).
  • Save the file after you are done editing.

4- Do an ls in the folder and make sure you don't see a terraform.tfstate or terraform.tfstate.backup file. If you do, delete it.

5- Deploy the virtual machine. This will be your HAProxy node.

terraform apply

This should start the deployment of your HAProxy machine. Note: if you mess up, you can always do terraform destroy.
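If you want to preview what Terraform is about to create before committing to it, a dry run is built in:

terraform plan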

Creating additional VMs

We can now create the rest of the VMs. Remember, for this guide we need one load balancer (just deployed), three masters, and three workers.

Go back into your tfvars file:

nano terraform.tfvars

Change the machine virtual hardware specs, hostname, and IP for each run. You will need to repeat this step for each of the VMs you need to create.

For each step, edit the tfvars file, and do

terraform apply

Important: delete the terraform.tfstate and terraform.tfstate.backup files AFTER EACH RUN. Otherwise, each apply operation will replace the previous operation.

(I’m planning on correcting this in the future).

Remember, you can only use terraform destroy right after each run. When you delete the tfstate file, you wipe Terraform’s memory and will not be able to auto-revert.
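If you would rather script the repetitive edit/apply/cleanup cycle, something like the following sketch can work. The -var names (vm_name, vm_ip) are hypothetical placeholders; check variables.tf in the cloned repository and use whatever names it actually defines:

#!/bin/bash
# Sketch: clone one VM per entry, wiping Terraform state between runs.
# The variable names passed with -var are assumptions; match them to variables.tf.
set -e
declare -A nodes=(
  [k8s-master-0]=192.168.2.52
  [k8s-master-1]=192.168.2.53
  [k8s-master-2]=192.168.2.54
  [k8s-worker-0]=192.168.2.55
  [k8s-worker-1]=192.168.2.56
  [k8s-worker-2]=192.168.2.57
)
for name in "${!nodes[@]}"; do
  terraform apply -auto-approve -var "vm_name=${name}" -var "vm_ip=${nodes[$name]}"
  rm -f terraform.tfstate terraform.tfstate.backup
done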

Once you have completed all of the runs, you should have a full set of blank nodes that are ready for the Kubernetes components.

(You can also keep the template and use it for your other deployments using Terraform or some other tool.)

It is a good idea to test access to each node to make sure they are up and accessible.
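A quick way to check them all from the client machine (plain ping; substitute your own IP range):

for ip in 192.168.2.{51..57}; do
  ping -c 1 -W 2 "$ip" > /dev/null && echo "$ip is up" || echo "$ip is NOT reachable"
done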

Installing HAProxy

Now, let’s set up HAProxy on the LB node.

1- SSH to your HAProxy node.

2- Update the VM:

sudo apt-get update
sudo apt-get upgrade

3- Install HAProxy:

sudo apt-get install haproxy

4- Configure HAProxy to load balance the traffic between the three Kubernetes master nodes:

sudo nano /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes
    bind 192.168.2.51:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master-0 192.168.2.52:6443 check fall 3 rise 2
    server k8s-master-1 192.168.2.53:6443 check fall 3 rise 2
    server k8s-master-2 192.168.2.54:6443 check fall 3 rise 2

Your config may look different depending on the HAProxy version. The only things that need to change here are the IPs: change the bind IP to the local IP of your HAProxy node, and change the master 0-2 server IPs to the local IPs of your master nodes.

For this guide, I used 192.168.2.51 for the HAProxy machine, and 52–54 for the master nodes.

5- Restart HAProxy:

sudo service haproxy restart

Check that HAProxy has started successfully:

sudo service haproxy status
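You can also check from the client machine that HAProxy is accepting connections on the frontend port (assuming netcat is installed; the masters are not serving the API yet, so this only proves the listener is up):

nc -vz 192.168.2.51 6443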

Generating Master Node Certificates for Kubeadm

Next, we will need to go back to the client VM to generate the TLS certificates via the CloudFlare SSL tool.

1- Create the certificate authority configuration file:

nano ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}

2- Create the certificate authority signing request configuration file:

nano ca-csr.json
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Acme",
      "O": "Kubernetes",
      "OU": "Anvil",
      "ST": "Acme Co."
    }
  ]
}

Change the details in the config file to your own.

3- Generate the certificate authority certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

4- Verify that the ca-key.pem and the ca.pem were generated:

ls -la

Next, we generate the Kubernetes certificate and private key.

1- Create the certificate signing request configuration file:

nano kubernetes-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "Kubernetes",
      "ST": "Cork Co."
    }
  ]
}

2- Generate the certificate and private key:

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=192.168.2.51,192.168.2.52,192.168.2.53,192.168.2.54,127.0.0.1,kubernetes.default \
-profile=kubernetes kubernetes-csr.json | \
cfssljson -bare kubernetes

Note: you will need to change the hostname= IPs to your own.

3- Verify that the kubernetes-key.pem and the kubernetes.pem files were generated.

ls -la

4- Copy the certificates to each of the nodes in the cluster:

scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.51:~
scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.52:~
scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.53:~
scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.54:~
scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.55:~
scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.56:~
scp ca.pem kubernetes.pem kubernetes-key.pem pavel@192.168.2.57:~

Note: Change the username and IPs to your own.
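If your IPs are sequential as recommended earlier, a small loop saves some typing (same files, same user, just iterated):

for ip in 192.168.2.{51..57}; do
  scp ca.pem kubernetes.pem kubernetes-key.pem pavel@"$ip":~
done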

Installing Docker

1- SSH to the first machine:

ssh pavel@192.168.2.51

2- Elevate to root:

sudo su

3- Add the Docker repository key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

4- Add the Docker repository:

add-apt-repository \
"deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"

5- Update package lists:

apt update

6- Install Docker:

apt install -y docker-ce

Installing Kubernetes components:

1- Add the Google repository key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

2- Add the Google repository:

nano /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io kubernetes-xenial main

3- Update package lists:

apt update

4- Install kubelet, kubeadm and kubectl:

apt install kubelet kubeadm kubectl
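Optionally, hold these packages so a routine apt upgrade does not move the cluster to a new Kubernetes version behind your back (the same recommendation appears in the upstream kubeadm install docs):

apt-mark hold kubelet kubeadm kubectl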

5- Disable the swap.

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

Important: Swap must be disabled on boot. Otherwise, the cluster will break on reboot.

6- Disable swap from starting at boot:

nano /etc/fstab
# Comment out the line mentioning swap (the sed command above should already have done this)
# Save the file

Important: This process (Docker, the Kubernetes packages, and the swap changes) needs to be repeated on every Kubernetes node in the cluster, i.e. all three masters and all three workers.

Install and Configure the Etcd Services on the Master Nodes

Next we will install and configure the Etcd key value store services on each master node.

1- SSH to the first master node:

ssh pavel@192.168.2.52

2- Create a directory for Etcd configuration:

sudo mkdir /etc/etcd /var/lib/etcd

3- Move the certificates to the configuration directory: (you need to do this as the user you used to copy the certificates to the machine, not as root, as they are in the user’s home dir).

sudo mv ~/ca.pem ~/kubernetes.pem ~/kubernetes-key.pem /etc/etcd

4- Download the etcd packages:

wget https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz

5- Extract the etcd archive:

tar xvzf etcd-v3.3.9-linux-amd64.tar.gz

6- Move the etcd binaries to /usr/local/bin:

sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/

7- Create an etcd systemd unit file:

sudo nano /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
--name 192.168.2.52 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://192.168.2.52:2380 \
--listen-peer-urls https://192.168.2.52:2380 \
--listen-client-urls https://192.168.2.52:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.2.52:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster 192.168.2.52=https://192.168.2.52:2380,192.168.2.53=https://192.168.2.53:2380,192.168.2.54=https://192.168.2.54:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Change the IPs to your own. --name will be the local IP of the first master node, and the same goes for the advertise and listen URLs. The initial-cluster entries are the IPs of all three master nodes.

8- Reload systemd and start the service:

sudo systemctl daemon-reload
sudo systemctl start etcd

9- Enable etcd to start at boot time:

sudo systemctl enable etcd

10- Check that etcd has started successfully:

sudo service etcd status

Note: If you have trouble getting etcd to start, review the machine's syslog. Common causes are typos, wrong IPs, or bad syntax in the unit file.

Repeat this process for the other two master nodes only.

Remember to update the IPs for each node in the config file.
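Once all three members are up, you can sanity-check the cluster from any master node. etcdctl ships in the same tarball you extracted earlier and was moved to /usr/local/bin along with etcd; this sketch reuses the certificates already in /etc/etcd:

sudo ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://192.168.2.52:2379,https://192.168.2.53:2379,https://192.168.2.54:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health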

Initialize the Master Cluster

Now it’s time to initialize the three master nodes.

1- SSH to the first master node:

ssh pavel@192.168.2.52

2- Create the configuration file for kubeadm:

nano config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.2.51:6443"
apiServer:
  certSANs:
  - "192.168.2.51"
  extraArgs:
    apiserver-count: "3"
etcd:
  external:
    endpoints:
    - https://192.168.2.52:2379
    - https://192.168.2.53:2379
    - https://192.168.2.54:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/24

Note: At the time of writing, apiVersion: kubeadm.k8s.io/v1beta2 is a working API version.

Note: Replace the IPs with your own. The control plane endpoint and the certSANs entry are both the HAProxy machine's IP. The etcd endpoints are the master node IPs. I used 10.30.0.0/24 for the pod network; you can replace this range with your own if you like.

3- Initialize the machine as a master node:

sudo kubeadm init --config=config.yaml

4- Copy the certificates to the two other master nodes:

sudo scp -r /etc/kubernetes/pki pavel@192.168.2.53:~
sudo scp -r /etc/kubernetes/pki pavel@192.168.2.54:~

5- On the other two master nodes, move the copied certificates into place (for example, sudo mv ~/pki /etc/kubernetes/) and create the same config.yaml file (don't change anything; make it identical to the first master).

Then run the same init command on each of them:

sudo kubeadm init --config=config.yaml

Save the join command that is displayed after initialization.
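If you lose the join command, you do not have to re-initialize anything; kubeadm can print a fresh one from any master:

sudo kubeadm token create --print-join-command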

Initializing the Worker Nodes

1- SSH to the first worker node:

ssh pavel@192.168.2.55

2- Execute the join command that you saved from the previous step:

sudo kubeadm join 192.168.2.51:6443 --token [your_token] --discovery-token-ca-cert-hash sha256:[your_token_ca_cert_hash]

Change the IP to your own HAProxy node.

Repeat this step for all remaining worker nodes.

Verify Cluster Nodes

Verify that all nodes have joined the cluster successfully:

1- SSH to the first master node:

ssh pavel@192.168.2.52

2- Get the node list:

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME                   STATUS     ROLES    AGE   VERSION
k8s-kubeadm-master-0   NotReady   master   1h    v1.12.1
k8s-kubeadm-master-1   NotReady   master   1h    v1.12.1
k8s-kubeadm-master-2   NotReady   master   1h    v1.12.1
k8s-kubeadm-worker-0   NotReady   <none>   2m    v1.12.1
k8s-kubeadm-worker-1   NotReady   <none>   1m    v1.12.1
k8s-kubeadm-worker-2   NotReady   <none>   1m    v1.12.1

Note: The nodes show as not ready because networking has not yet been configured.

Configure Kubectl on the Client Machine

Now we need to configure kubectl on the .50 client machine so that we can manage the Kubernetes cluster. Note: this can be done on any machine that is capable of running kubectl.

1- SSH to one of the master nodes

ssh pavel@192.168.2.52

2- Add permissions to the admin.conf file.

sudo chmod +r /etc/kubernetes/admin.conf

3- Copy the configuration file to the client machine and change back the permissions:

cd /etc/kubernetes
scp admin.conf pavel@192.168.2.50:~
sudo chmod 600 /etc/kubernetes/admin.conf

4- Go back to the client machine and create the kubectl configuration directory:

mkdir ~/.kube

5- Move the configuration file to the configuration directory:

mv admin.conf ~/.kube/config

6- Modify the permissions of the configuration file:

chmod 600 ~/.kube/config

7- Check connectivity to the Kubernetes cluster API from the client machine:

kubectl get nodes

You should get a list of the nodes in the cluster, all with a status of NotReady.

Setting Up Networking

Weave Net will be used as the overlay network for this guide. If you plan to use an F5 device for ingress, Flannel is recommended instead.

1- From the client machine, deploy the overlay network pods:

kubectl apply -f https://git.io/weave-kube-1.6

2- Check that the pods deployed correctly:

kubectl get pods -n kube-system

3- Check that the nodes are in ready state:

kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
k8s-kubeadm-master-0   Ready    master   18h   v1.12.1
k8s-kubeadm-master-1   Ready    master   18h   v1.12.1
k8s-kubeadm-master-2   Ready    master   18h   v1.12.1
k8s-kubeadm-worker-0   Ready    <none>   16h   v1.12.1
k8s-kubeadm-worker-1   Ready    <none>   16h   v1.12.1
k8s-kubeadm-worker-2   Ready    <none>   16h   v1.12.1

Note: It may take a few minutes for all nodes to change to a ready state.

You should now have a fully functioning Kubernetes cluster.
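As a quick smoke test from the client machine, you can run a throwaway deployment, confirm a pod lands on one of the workers, and then remove it (nginx-test is just an arbitrary name):

kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx-test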

Installing the Add-Ons

Kubernetes has several useful addons. One of these is the dashboard.

Kubernetes Dashboard

1- Create the dashboard:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/alternative/kubernetes-dashboard.yaml

2- Delete the dashboard service. We will be using an alternate setup:

kubectl delete service kubernetes-dashboard -n=kube-system

3- Expose the dashboard:

kubectl expose deployment kubernetes-dashboard -n kube-system --type=LoadBalancer --name=kubernetes-dashboard --external-ip=192.168.2.52 --port=8443

This will expose the Kubernetes dashboard on http://192.168.2.52:8443

Change the IP to your own.

You can also use alternate methods to run the dashboard: https://github.com/kubernetes/dashboard/wiki/Installation

You should now have a working Kubernetes dashboard.

Now, let’s login. On the login page, select token.

On the client machine:

4 - Get the list of dashboard secrets:

kubectl get secrets -n=kube-system | grep dashboard

5- Describe the secret that is named kubernetes-dashboard-token-xxxxx

kubectl describe secret kubernetes-dashboard-token-xxxxx -n=kube-system

6- Copy and paste the token into the Kubernetes dashboard login screen and click login. You should now be able to access the dashboard.

And that’s it. You should now have a complete and fully functional on-prem Kubernetes cluster.

What do you mean it doesn't work?

Occasionally, you may run into some issues during cluster setup. Not to worry, I have posted the solutions to the most common issues below.

  • Logs are your friend. If a service does not start or something weird is going on, always check the logs; cat /var/log/syslog is a good place to start (see the journalctl example after this list).

  • Have an IP key. Many of the example files in this guide are just that, examples. Check them over twice and make a table comparing my IPs to your own, as described in the first section.
  • Is swap disabled? Kubernetes does not like swap. Make sure to follow the steps to disable it and keep it disabled after a reboot.
  • Work backwards from the problem. Which component is affected? Why is it affected? Is there a config file that the service needs? Etc.
  • Ask questions. I value comments and questions and I hang out on Teamspeak and Discord. Ask me stuff (but try and solve it yourself first).
  • It’s probably vim. Don’t use vim. Use nano.
  • Check out my post on troubleshooting Kubernetes.
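
For the Kubernetes-related services specifically, journalctl is usually more targeted than the raw syslog:

sudo journalctl -u kubelet --no-pager | tail -n 50
sudo journalctl -u etcd --no-pager | tail -n 50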

Thanks for reading. Please comment if you have questions or if you find an issue.

~Pavel
