How to Bootstrap Kubernetes the hard way!

Yair Etziony
Feb 18 · 8 min read

What is the purpose of this document:

This tutorial is the second in a series of blog posts that guide someone with a good system administration background to a basic understanding of K8s. I want to install Kubernetes on VMs using minimal tools, and to keep these instructions as a reference; this could come in handy if you want to get the Kubernetes administrator certification.

What is Kubernetes:

Why is everyone talking about this?

Kubernetes, aka K8s, is an open-source cluster manager for deploying, running, and managing containers at scale. It provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.

It lets developers focus on their applications, and not worry about the underlying infrastructure that delivers them.

Kubernetes can run in any environment: on-premises bare metal, a private cloud, or a public cloud such as AWS. Docker is the most widely used container runtime nowadays, but K8s can work with other runtimes as well (rkt, for example).

Why did I choose kubeadm and not Minikube or Google Kubernetes Engine?

The problem with those options is that you miss a lot of the knowledge that comes from the pain of failure. I wanted something else: to understand how to bootstrap a cluster myself. With this tutorial you will deploy a cluster and understand and configure its network. It might be a bit harder at the start, but in the end I think it will be much more beneficial for you.

But why do we need to install Docker? Aren’t we installing Kubernetes? Kubernetes does not run containers itself: it needs a container runtime on every node, and Docker is the runtime we will use here.


Some basic concepts:


A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. All the containers in a pod can reach each other on localhost.

Pods need to be able to communicate with other pods, whether they are running on the same host or on separate hosts.
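As a sketch of the pod concept, here is a hypothetical manifest for a single pod with two containers; the names, images, and port are illustrative, but the point stands: both containers share the pod’s network namespace, so they reach each other on localhost.

```shell
# Write a hypothetical two-container pod manifest. Because both containers
# live in the same pod, "web" could reach "cache" on localhost:6379.
cat > demo-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
  - name: cache
    image: redis:alpine
EOF
```

Once the cluster is up, you would submit it with `kubectl apply -f demo-pod.yaml`.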

Let’s imagine a simple network: a router acting as the gateway, and two instances on the same subnet. Given this setup, each instance can communicate with the other via eth0. That’s how the node network works, but what about the pods? Kubernetes assigns an overall address space for the bridges on each node, and then assigns each bridge an address within that space, based on the node the bridge is built on.

Kubernetes adds routing rules to the gateway telling it how packets destined for each bridge should be routed, that is, through which node’s eth0 each bridge can be reached. This combination of virtual network interfaces, bridges, and routing rules is usually called the pod network.

Kubernetes uses CNI (Container Network Interface) plugins to manage and operate the pod network; there are several external modules such as Calico, Flannel, and Canal, to name a few. I chose Calico: it looked easy to configure and I had no problems with it. For our purposes, it’s important to understand that you first need to configure the node network, and then the pod network.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice-versa) without NAT

What this means in practice is that you cannot just take two computers running Docker and expect Kubernetes to work. You must ensure that these fundamental requirements are met.

Before you proceed, take time to think about the subnet and CIDR you will use for the pod network. Since we are using containers, each pod will have its own IP on the K8s internal network. You can’t just run `curl localhost:port` when your application is running in a pod; you will need to check your K8s cluster for its internal address.
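One concrete check worth doing up front: make sure the pod CIDR you pick does not overlap your node subnet. A minimal sketch, using Python’s stdlib `ipaddress` module from the shell (the two ranges here are hypothetical examples, adjust them to your environment):

```shell
# Hypothetical node subnet and pod CIDR; replace with your own values.
NODE_SUBNET="10.0.0.0/24"
POD_CIDR="192.168.0.0/16"

# Exit status of the Python snippet tells us whether the ranges overlap.
if python3 -c "import ipaddress, sys
a = ipaddress.ip_network('$NODE_SUBNET')
b = ipaddress.ip_network('$POD_CIDR')
sys.exit(1 if a.overlaps(b) else 0)"; then
  echo "no overlap: $POD_CIDR is safe to use"
else
  echo "overlap: pick a different pod CIDR"
fi
```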

You define the pod network when you run `kubeadm init`.

After you configure the internal network, you can configure the load-balancer or ingress so your application can be reached from the internet. This is out of the scope for this tutorial.

What we want to achieve:

  • One Kubernetes master
  • Two Kubernetes worker nodes
  • A working node network
  • A working pod network


What you should know before you start:

  • Basic knowledge of the Linux operating system.
  • Basic knowledge of networking, subnetting, and routing.
  • Basic knowledge of containers (mainly Docker).
  • Basic knowledge of cgroups and namespaces.
  • Basic knowledge of HTTP and usage of curl.

Before you proceed:

  • Decide on the master node, and make sure to name your machines adequately. The hostname should represent the role of the machine in the cluster.
  • Make sure that all your nodes can ping each other (best is to set them up in one subnet).
  • Because K8s needs a fair number of open ports, we suggest you deploy your nodes in a DMZ, or in a security group if you are on AWS.
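A quick way to verify the second point is a small ping loop over your node IPs. The sketch below uses 127.0.0.1 only as a stand-in so it can be tried locally; substitute the real addresses of your master and workers.

```shell
# Replace with the IPs of your master and worker nodes.
NODES="127.0.0.1"

for host in $NODES; do
  if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host reachable"
  else
    echo "$host UNREACHABLE"
  fi
done
```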

What do we want to install:

kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines.


The kubelet is the primary “node agent” that runs on each node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms (primarily through the API server) and ensures that the containers described in those PodSpecs are running and healthy.

The kubelet doesn’t manage containers that were not created by Kubernetes.


kubectl is a command-line interface for running commands against Kubernetes clusters.

Before you begin:

kubeadm supports the following operating systems:

  • Ubuntu 16.04+
  • Debian 9
  • CentOS 7
  • Fedora 25/26 (best-effort)
  • HypriotOS v1.0.1+
  • Container Linux (tested with 1800.6.0)

We decided to go with Ubuntu, but pick any supported Linux distribution you want!

Each machine should have:

  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster
  • Swap disabled (run `swapoff -a`)
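These requirements can be sanity-checked from the shell before you start; the checks below mirror what kubeadm itself verifies during preflight.

```shell
# CPU count; kubeadm wants 2 or more on the master.
nproc

# Swap must be off: /proc/swaps contains only its header line when it is.
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
  echo "swap disabled"
else
  echo "swap still enabled: run swapoff -a and comment it out in /etc/fstab"
fi
```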

Open the following ports on your master node:

TCP Inbound 6443 (Kubernetes API server)

TCP Inbound 2379–2380 (etcd)

TCP Inbound 10250 (kubelet API)

TCP Inbound 10251 (kube-scheduler)

TCP Inbound 10252 (kube-controller-manager)

Open the following ports on your worker nodes:

TCP Inbound 10250 (kubelet API)

TCP Inbound 30000–32767 (NodePort services)
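You can probe whether a given port is actually reachable with nothing but bash’s `/dev/tcp` redirection; the helper below is a hypothetical convenience, not part of any K8s tooling.

```shell
# Probe a TCP port with a 2-second timeout using bash's /dev/tcp redirection.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}

# For example, check the API server port on the master from a worker:
port_open 127.0.0.1 6443
```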

Install CRI (Container Runtime Interface):

We decided to go with Docker, but mind you there are other options.

Install Docker CE

Set up the repository:

Update the apt package index

apt-get update

Install packages to allow apt to use a repository over HTTPS

apt-get update && apt-get install apt-transport-https ca-certificates curl software-properties-common

Add Docker’s official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Add the Docker apt repository

add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

Install Docker CE

apt-get update && apt-get install docker-ce=18.06.0~ce~3-0~ubuntu

Set up the Docker daemon

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

Reload and restart docker

systemctl daemon-reload
systemctl restart docker
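Before restarting Docker, it is worth confirming that the daemon.json you wrote is valid JSON, since a typo there will stop the daemon from starting. A stand-alone sketch (it validates a copy in /tmp so it can be tried anywhere):

```shell
# Write the same daemon config to a scratch location and validate it with
# Python's stdlib JSON parser before touching /etc/docker/daemon.json.
cat > /tmp/daemon-check.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon-check.json > /dev/null && echo "daemon.json is valid JSON"
```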

Verify that the MAC address and product_uuid are unique for every node

Run the following on each machine:

ip a
sudo cat /sys/class/dmi/id/product_uuid

Install the required software:

Install curl

apt-get update && apt-get install -y apt-transport-https curl

Add the apt key for the Google package repository

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Add the Kubernetes apt repository

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Install the software

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

You can install the latest version of Docker CE for the purpose of this tutorial; kubeadm will print a warning about the unvalidated version, but the installation will work!

Init the cluster by running the following command on the master node:

kubeadm init --pod-network-cidr=192.168.0.0/16

(192.168.0.0/16 is Calico’s default pod CIDR; you can pick another range, as long as it does not clash with your node network.)

You can choose any subnet you want; just make sure the CIDR makes sense and does not overlap your node network. Remember that this is how you set up the pod network.

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Copy the join command with the token printed by kubeadm init; you will need it to connect the nodes to the cluster. For example:

kubeadm join <master node ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Run the join command on the worker nodes, then check the output of kubectl on the master:

kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
worker-1      NotReady   <none>   15m   v1.13.2
worker-2      NotReady   <none>   15m   v1.13.2
master-node   Ready      master   38m   v1.13.2

If all your nodes connected, rejoice! You now have a running K8s cluster! You can put a small sticker on your laptop.

Notice that both worker nodes are in “NotReady” status; this means you have not configured the internal (pod) network yet, which is fine.

If you can’t connect, check your firewall configuration, or your security group if you work with AWS.

Let’s configure the internal network:

There are many options for networking, but we chose Calico. As defined on their site: “Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico works everywhere — on all major public cloud providers and private cloud as well”.

Download the manifest (the URL for calico.yaml is in the Calico documentation):

curl <calico.yaml URL> -O

Open calico.yaml with a text editor and make the following change:


In the ConfigMap named calico-config, locate the typha_service_name, delete the none value, and replace it with calico-typha.
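If you prefer to make the edit non-interactively, a sed one-liner can do it. The sketch below demonstrates on a minimal stand-in file, since the exact quoting in the real manifest may differ; double-check with grep before running it against your actual calico.yaml.

```shell
# Minimal stand-in for the relevant line of the calico-config ConfigMap.
cat > calico-demo.yaml <<'EOF'
typha_service_name: "none"
EOF

# Replace the "none" value with "calico-typha", then show the result.
sed -i 's/typha_service_name: "none"/typha_service_name: "calico-typha"/' calico-demo.yaml
grep typha_service_name calico-demo.yaml
```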

kubectl apply -f calico.yaml

kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
worker-1      Ready    <none>   19m   v1.13.2
worker-2      Ready    <none>   18m   v1.13.2
master-node   Ready    master   42m   v1.13.2

All the nodes should be Ready!

You have configured the internal network for K8s! You can now rejoice again.

You can put another, bigger K8s sticker on your laptop!

Polar Squad

Sharing stories from the DevOps world

Written by Yair Etziony

DevOps Consultant at Polar Squad Berlin. I started with VAX/VMS and DOS, and now I work with Kubernetes, containers, and cloud native environments.
