Kubernetes From Scratch
Kubernetes Without Minikube or Microk8s
I’ve installed several pre-configured versions of Kubernetes, such as Minikube and Microk8s, and they work well for creating a Kubernetes sandbox. Today I’m going to try installing from scratch to dig deeper behind the scenes.
Kubernetes isn’t a monolith; it’s a lot of components working together, and much of it seems like magic right now. But I always want to know the magician’s tricks, so I’m going to see what sleight of hand goes on to make everything work.
To follow along, you will need a provisioned VM (I’ll talk about that more later), moderately advanced knowledge of Linux, the command line, and networking, and a tremendous amount of patience, as with any endeavor involving computers.
To start, I’m going to provision a VM with Ubuntu 18.04, two vCPUs, 4 GB of memory, and a 50 GB disk. I’m using KVM on bare metal. Here is a template of the steps I used to set it up on the main host:
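The original template isn’t reproduced here, so this is a hypothetical virt-install invocation matching the settings described below; the VM name, network, and installer URL are my assumptions and will need adjusting for your own KVM host:

```shell
# Hypothetical provisioning command; adjust name, network, and the
# Ubuntu 18.04 (bionic) installer location for your environment.
virt-install \
  --name kube1 \
  --ram 4096 \
  --vcpus 2 \
  --disk size=50 \
  --os-variant ubuntu18.04 \
  --network network=default \
  --graphics none \
  --console pty,target_type=serial \
  --location http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/ \
  --extra-args 'console=ttyS0,115200n8 serial'
```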
I named my VM “kube1” (assuming there will be more later) and changed the RAM to 4096, the disk size to 50, and the vCPUs to 2. Answer the prompts with default or reasonable values. Make sure you install the OpenSSH server but nothing else when it asks what software you want to preinstall. When it’s done, and you have the IP address, you can add it to your /etc/hosts file so you can refer to it by name. If you want to learn more about KVM and how to install it, you can refer to my article Playing with VMs and Kubernetes.
To keep things straight, I will refer to the host running KVM as the main host, and your development host (most likely the laptop or desktop you’re reading this on) as the local computer. If you’re using a cloud-provisioned VM, you won’t have a main host. You will have to figure out what needs to happen in your cloud control panel or on the command line of your local computer, which then acts as the main host (such as the gcloud command for Google Cloud Platform).
The first things we need are the kubeadm, kubectl, and kubelet commands. I’m going to follow the official installation guide, but specifically for an Ubuntu 18.04 VM, and add whatever needs clarification or doesn’t quite work.
To prep your new VM for installation, first make sure swap is off:
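The commands for this weren’t shown; a minimal sketch, assuming a standard Ubuntu 18.04 fstab with a swap entry:

```shell
# Turn off swap immediately...
sudo swapoff -a
# ...and keep it off across reboots by commenting out any swap
# entries in /etc/fstab.
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
```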
If you will be making more than one node, you will have to ensure that the VMs are created with unique MAC addresses and product UUIDs. KVM does this for you, so most VMs will be unique unless you clone an existing one. Then enable bridges and overlays by loading the overlay and br_netfilter kernel modules and adding a few lines to your sysctl configuration:
# added for kubernetes bridge
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
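These settings can be persisted and applied without waiting for the reboot; a sketch, where the file names under /etc/modules-load.d and /etc/sysctl.d are my own choices, not from the official docs:

```shell
# Load the required modules now and on every boot.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Persist the sysctl settings and apply them immediately.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```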
Then reboot your VM so it all takes effect.
Your new VM should have no iptables rules, and you can verify this with the sudo iptables -L command, which should list an empty ruleset:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
If you’re paranoid, you can restrict the ports to only those listed in the official installation, but that’s a landmine I’m not going to step on.
Because Kubernetes is a container orchestration system, we need a container system that it can orchestrate. There are a number of container systems you can use, but we’re going to use containerd. It’s a two-line install for Ubuntu 18.04:
sudo apt-get update
sudo apt-get install containerd
This should create and start the containerd service; you can verify with systemctl status containerd.
Now you can install the kubeadm, kubectl, and kubelet packages. The kubeadm package is the tool that bootstraps and administers the cluster. The kubectl package is a command-line interface for the Kubernetes API. And the kubelet package interfaces with the container system to run Pods.
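The install commands from the official guide of this era (v1.18 on Ubuntu 18.04) looked roughly like the sketch below; note that the Kubernetes apt repository has since moved to pkgs.k8s.io, so treat these URLs as historical:

```shell
# Add the Kubernetes apt repository and its signing key.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the three packages and pin their versions so unattended
# upgrades don't bump cluster components out from under you.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```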
A few words about terminology. The overall entity in Kubernetes is called a cluster. Each cluster has one or more nodes, and there are two types of node: control-plane and worker. A basic cluster has exactly one control-plane node (in an HA setup you can have several, fronted by a load balancer). The control-plane node is what we are working on now. If we add worker nodes later, all of these steps must be repeated on each of them.
To test that kubeadm has access to the containerd we installed earlier, we can run sudo kubeadm config images pull. It’ll spend some time pulling the images it needs, and then we know that it can talk to containerd.
We need to think ahead a little bit here. Kubernetes requires a container network interface, or CNI, so that all the Pods can communicate with each other. But to initialize the cluster, we need to pass in some information about how it will utilize the CNI. So we need to decide which CNI to use, as there are several. I’m going to choose Flannel because I live in Seattle. By default, Flannel uses a CIDR of 10.244.0.0/16, so we have to pass that into the init command. Let’s try it out and see what blows up.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Success! But this is no time for self-congratulation. We still have a few steps to go just for one node. The output at the end of the init command is important. If you plan on creating worker nodes later, be sure to copy the join command it prints somewhere you can reference later. The token it lists only lasts for 24 hours, so if you want to create a new node after that, you’ll have to get a new token with the command kubeadm token create.
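If you didn’t save the join command, kubeadm can regenerate the whole thing along with a fresh token; a sketch, run on the control-plane node:

```shell
# Create a new 24-hour token and print the complete join command for it.
sudo kubeadm token create --print-join-command
```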
Let’s put the configuration where kubectl expects to find it:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This will create the default configuration file that kubectl uses. To verify, run kubectl config view.
The output from the init command also instructs you to deploy your CNI, which we had already decided would be Flannel. The configuration you’ll need is on GitHub, so apply it now:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Now see if you can list everything running with kubectl get all --all-namespaces. You should get a list of the usual suspects. Copy the configuration file to the main host, and you should be able to access your new cluster from there. I’ll copy it to the ~/.kube/kube1config file and set the KUBECONFIG environment variable so I don’t have to merge with existing config files. Now running kubectl get all --all-namespaces on the main host should give the same listing.
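A sketch of that copy step, run on the main host; it assumes the “kube1” /etc/hosts entry from earlier and a matching user account on the VM:

```shell
# Pull the admin kubeconfig down from the VM under a distinct name,
# then point kubectl at it via KUBECONFIG.
scp kube1:~/.kube/config ~/.kube/kube1config
export KUBECONFIG=~/.kube/kube1config
kubectl get all --all-namespaces
```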
Now I’m going to repeat all the steps in this article up to the
kubeadm init. You can grab a cup of coffee while I do that.
I’m back. I created a new VM called “kube2” and installed kubeadm, kubectl, and kubelet, same as before. Now instead of using the kubeadm init command, we’ll use the kubeadm join command we saved from the previous init.
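The saved command has roughly this shape; the address, token, and CA-certificate hash below are placeholders, so use the exact values your kubeadm init printed:

```shell
# Run on the new worker node; <token> and <hash> are placeholders.
sudo kubeadm join kube1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```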
Now let’s check it out, listing the nodes back on the main host with kubectl get nodes.
NAME    STATUS   ROLES    AGE   VERSION
kube1   Ready    master   73m   v1.18.1
kube2   Ready    <none>   65s   v1.18.1
We still have a few pieces of the puzzle left to add. When you install via an all-in-one installer like Minikube, all of these pieces are included, but here we’ll have to add them manually: some storage, a container registry, and some form of ingress or load balancer. I’ll tackle those in my next article.