Setting Up Single Master Kubernetes On Bare Metal

*This post was originally published July 2018.*

Following up on my first post, I decided to set up Kubernetes on my bare metal machines at home. I explored various routes, from CoreOS’s Tectonic to Canonical’s Distribution of Kubernetes. I tried to set up MAAS and even briefly attempted OpenStack, but then I *thought* I had found the perfect out-of-the-box solution: Rancher. Starting with Rancher 2.0, it uses Kubernetes as its container management system. The best part about Rancher is that it is the fastest way to get a Kubernetes cluster running from scratch. Rancher is also a great way to learn more about Kubernetes if you are not yet familiar with YAML files, kubectl, or the Kubernetes API. If you want to take 15 minutes to explore and you have Linux machines where you can run privileged Docker containers, I recommend giving it a whirl.

As I spent more time with Rancher, I realized I didn’t want the overhead of Rancher’s management system, and I increasingly found their UI to be buggy (at the time). I continued my quest for other solutions and stumbled upon Operos. I liked Operos a lot because the setup was extremely easy, dashboards were set up automatically, worker nodes could be joined via PXE boot, and it came configured out of the box with Ceph for persistent storage. I ended up forgoing it because 1) the project at the time of this writing had been dormant for two months, 2) the default security setup didn’t give access to the cluster-admin role, which made certain deployments (and Helm charts) tricky, 3) it used Kubernetes 1.8 rather than the latest (1.11), and 4) the workers needed to be on their own subnet with the Operos controller serving PXE and DHCP, which would have required me to install another NIC. I may come back to this at some point, especially once the project picks up again.

In the end, I decided to set things up with good ol’ kubeadm. There are many guides for doing this, from Hanselman’s installation on Raspberry Pis to Josh Rendek’s article. Although this post overlaps with those guides, there are a few differences, especially around Ingress, persistent storage, the Dashboard, and Helm, so I thought I’d share my setup here.

Basic Cluster Installation

  1. I provisioned two Ubuntu machines — kubernetesmaster (2 GB RAM) and kubernetesworker (4 GB RAM).
  2. I ran
sudo apt-get update 
sudo apt-get upgrade

3. I disabled swap using the instructions here.
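
In case that link ever disappears, disabling swap boils down to roughly the following (the sed pattern assumes the swap entries in /etc/fstab contain the word "swap"):

sudo swapoff -a
# comment out the swap entries so the change survives a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab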

4. I then installed Docker following the instructions here.
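
If you just want the short version on Ubuntu, something like this works (this uses the docker.io package from the Ubuntu repositories rather than Docker's own apt repository):

sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker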

5. I decided to use Flannel as my pod network provider. I tried Calico, but it didn’t play nicely with Helm and Tiller. The first command below prepares the host for Flannel and the second initializes the Kubernetes master.

sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

6. This takes a while, but once it finishes, it prints a nice message like the one below:

Your Kubernetes master has initialized successfully!  
To start using your cluster, you need to run the following as a regular user: 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: 
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root: 
kubeadm join 192.168.2.85:6443 --token xxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxx

7. On my Kubernetes worker, I ran the kubeadm join command shown above with sudo.

8. On the master, I then ran the mkdir, cp, and chown commands shown in step 6.

9. I didn’t want to manage my cluster from the master node, so I printed the kubeconfig on the master via cat (below) and pasted it into my local kubeconfig on my workstation.

cat ~/.kube/config
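
As a quick sanity check that the workstation can now talk to the cluster:

kubectl get nodes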

10. I then installed Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

Side note: I made sure to tear down any previous masters and nodes by using kubectl drain followed by kubectl delete node, and then kubeadm reset on the machine itself.
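
For reference, that teardown looks roughly like this (assuming a node named kubernetesworker):

# run from a machine with kubectl access to the cluster
kubectl drain kubernetesworker --delete-local-data --force --ignore-daemonsets
kubectl delete node kubernetesworker
# then, on the node being removed
sudo kubeadm reset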

This got Kubernetes up and running. Next I wanted to install Helm and Tiller.

Installing Helm and Tiller

Helm is the Kubernetes package manager, and Tiller is its server-side component that runs in the cluster and installs the packages (charts) on behalf of the Helm client.

  1. Install the Helm client.
  2. Before setting up Tiller, I needed to set up the correct role-based access control (RBAC). I created the following YAML file called rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

3. I then ran

kubectl apply -f rbac-config.yaml

4. I then initialized Tiller by executing

helm init --service-account tiller
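
To confirm Tiller came up, a quick check like this should do (helm version reports both the client and the server version once Tiller is running):

kubectl get pods -n kube-system | grep tiller
helm version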

After installing Helm and Tiller, I was able to install my first package, AppsCode Voyager, which is a nice HAProxy-based ingress controller and TCP load balancer. I did this using Helm.
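
The install went roughly like this; the chart repository URL and the cloudProvider value here are from memory, so double-check them against the Voyager docs (this is Helm 2 syntax with the --name flag):

helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm install appscode/voyager --name voyager-operator --namespace kube-system --set cloudProvider=baremetal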

Installing Kubernetes Dashboard

The Kubernetes Dashboard provides a nice GUI for managing clusters and nodes.

Because the dashboard needs to be reachable from outside the cluster, the easiest way to expose it is via a NodePort. I also wanted the dashboard deployed to one specific node every time. To do this, I first assign a label to the node the dashboard should run on. Then I create a self-signed certificate, because the out-of-the-box dashboard expects SSL. Next, I tweak the dashboard manifest to pin it to that node and port. Last, I create an admin user to log in to the dashboard.

  1. Assuming the dashboard will be installed on the node “kubernetesworker”, I assigned the label
kubectl label nodes kubernetesworker dashboardworker=true

2. I generated a self-signed certificate as described here. I made sure to fill out all the values, especially the Common Name, or Chrome will not let you bypass the self-signed certificate warning.
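
A minimal sketch of the openssl commands, assuming the node name kubernetesworker as the Common Name and the file names the dashboard deployment expects (dashboard.key and dashboard.crt):

mkdir -p $HOME/certs
# -subj fills in just the CN non-interactively; drop it to answer all the prompts instead
openssl req -nodes -newkey rsa:2048 -keyout $HOME/certs/dashboard.key -out $HOME/certs/dashboard.csr -subj "/CN=kubernetesworker"
openssl x509 -req -sha256 -days 365 -in $HOME/certs/dashboard.csr -signkey $HOME/certs/dashboard.key -out $HOME/certs/dashboard.crt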

3. Assuming the self-signed certificates are stored in the path “$HOME/certs”, I ran

kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system

4. I downloaded the file https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml and made the following edits:

  • Right above the “containers:” section, I added the lines marked #NEW below:
template:
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
  spec:
    nodeSelector: #NEW
      dashboardworker: "true" #NEW
    containers:
    - name: kubernetes-dashboard
      image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
      ports:
      - containerPort: 8443
        protocol: TCP
  • In the “Dashboard Service” section, I made the changes marked #NEW below:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort #NEW
  ports:
  - port: 32000 #NEW
    nodePort: 32000 #NEW
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  • The above changes force the Kubernetes Dashboard to be scheduled only on the node with the dashboardworker label set to “true” and expose the service via NodePort 32000.

5. From the directory containing the manifest edited in the step above, I deployed the dashboard using

kubectl apply -f kubernetes-dashboard.yaml

6. The dashboard is now exposed at the URL “https://kubernetesworker:32000”. A token is needed to log in at that endpoint; I followed the directions here to set up a sample user and get the token. (A kubeconfig file can also be used, but mine is not set up for user/password authentication.)
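
The sample-user approach boils down to something like this; “admin-user” is just a name I picked for illustration:

kubectl -n kube-system create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user
# print the service account's token to paste into the login screen
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')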

Setting up NFS Storage

I now needed to set up persistent storage to hold data for my apps. For example, my travel site contains pictures that just aren’t suited for a database, so I need a place to store them.

First, I needed to set up an NFS server. There are many guides for the various Linux flavors, but I used the one for Ubuntu. I exported my NFS share at /nfsmount.
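
The short version on Ubuntu looks roughly like this; 192.168.2.0/24 is a stand-in for whatever subnet your nodes actually live on:

sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /nfsmount
sudo chown nobody:nogroup /nfsmount
echo "/nfsmount 192.168.2.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -a
sudo systemctl restart nfs-kernel-server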

To set up the NFS storage class, I used the guide from here and ran kubectl apply -f on each of the YAMLs below.

  1. I needed to set up the service account and permissions. I created these YAMLs:

a) Service Account

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

b) Cluster Role

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]

c) Cluster Role Binding

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

2. I now needed to define a storage class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: saninsoftware/nfs # or choose another name; it must match the deployment's env PROVISIONER_NAME

3. I then defined the nfs-client-provisioner

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: saninsoftware/nfs #CHANGE ME
            - name: NFS_SERVER
              value: 192.168.99.2 #CHANGE ME
            - name: NFS_PATH
              value: /nfsmount #CHANGE ME
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.99.2 #CHANGE ME
            path: /nfsmount #CHANGE ME

To test the above, I created a persistent volume claim (PVC) and a sample pod that writes to it. I kept the PVC as a separate file because I wanted to be able to reuse the data stored in the volume in the event I delete the pod and deployment and then recreate them.

  1. I created the PVC for testing
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10M

2. I created the pod that will write to it

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-pvc-claim

3. I logged into the NFS server and checked for the file “SUCCESS”.
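
On the NFS server, the provisioner creates one directory per claim, named roughly <namespace>-<pvc name>-<pv name>, so the check looks something like:

ls /nfsmount/default-test-pvc-claim-*/SUCCESS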

Ingress

Above, I spoke of using AppsCode Voyager, which is a nice HAProxy solution. For a simple LoadBalancer on bare metal, I found MetalLB to be a very easy solution to deploy.

The following is an example of an echo server using AppsCode Voyager, where the ingress host is explicitly specified:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserver
  namespace: default
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.0
        imagePullPolicy: Always
        name: echoserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    app: echoserver
---
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: echoserveringress
  namespace: default
  annotations:
    ingress.appscode.com/type: HostPort
spec:
  rules:
  # CHANGE ME
  - host: myloadbalancer.domain
    http:
      port: 8181
      paths:
      - path: /
        backend:
          serviceName: echoserver
          servicePort: 8080

The same echo server can be exposed outside the cluster with MetalLB. The difference below is that no ingress host is specified; the Service is simply of type LoadBalancer, and the external IP must be retrieved using kubectl as described in the MetalLB tutorial:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverml
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: echoserverml
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.0
        imagePullPolicy: Always
        name: echoserverml
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginxml
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echoserverml
  type: LoadBalancer
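
Once MetalLB assigns an address from its configured pool, the external IP shows up in the EXTERNAL-IP column:

kubectl get service nginxml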

I now have a basic Kubernetes cluster to play with. I can take additional discarded machines and laptops and use “kubeadm join” to add more resources for my apps! The best part is that Kubernetes is “Cloud Native”, so if I ever decide to move to GCP, Azure, or AWS, the friction should be minimal!

All of the above can also be found on my GitHub page.