Setting up a Kubernetes cluster with Kubespray

This is the first post in a series where I’ll show how I set up a Kubernetes cluster running on bare metal in my home lab to host a blog and a few side projects I’m working on.

Hardware & Host OS Setup

For now the Kubernetes nodes are two i5 Chromeboxes with 4GB of RAM each, which I bought for around C$300 with tax and shipping on eBay. I replaced their firmware with SeaBIOS and ChromeOS with Ubuntu Server 18.10 before installing Kubernetes on them using Kubespray.

Once I start adding more workloads to this cluster I plan on adding a couple more Chromebox nodes and upgrading them to 16GB of RAM each so I can set up an HA cluster. Until then, I’m running a single-master cluster and allowing Pods to be scheduled on the master node.
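Allowing regular workloads on the master can also be done after the cluster is up by removing the master’s NoSchedule taint. A sketch, assuming the master node is named server1 as it is later in this post:

```shell
# Remove the NoSchedule taint from the master so regular Pods can be
# scheduled there (the trailing "-" removes the taint):
kubectl taint nodes server1 node-role.kubernetes.io/master:NoSchedule-
```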

Bootstrapping your cluster with Kubespray

Kubespray is an Ansible playbook, plus some utility scripts, that can be used to set up a Kubernetes cluster.

Before you can use Kubespray, you’ll need to install Ansible, which can be easily done using apt, and configure your cluster nodes with fixed IPs, which I did by creating DHCP reservations using my router.
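On Ubuntu, installing Ansible is a one-liner (a sketch; Kubespray pins minimum Ansible versions, so check its requirements first):

```shell
# Install Ansible from the Ubuntu repositories:
sudo apt update
sudo apt install -y ansible
```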

You’ll also need to set up SSH key-based access on all of your nodes so Ansible can connect to your servers and provision your cluster. I configured key-based access to each of my two nodes using:

$ ssh-copy-id user@
$ ssh-copy-id user@

With Ansible installed and the basic server setup done, it’s time to start using Kubespray. First, check out the Kubespray project from GitHub (if you’re using Ubuntu 18.10 the install will fail because 18.10 doesn’t have Docker CE available through apt, so use my Kubespray fork, which has a workaround for Docker CE until the problem is fixed):

$ git clone

Follow the steps from Kubespray’s Getting Started Guide to create an inventory file for your cluster, then review the inventory and default parameters, as you may want to modify some of them.
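Roughly, the guide’s steps boil down to copying the sample inventory and running the inventory builder script over your node IPs (the IPs below are placeholders; use your own):

```shell
# Copy the sample inventory and generate hosts.ini from a list of
# node IPs (10.0.0.1 and 10.0.0.2 are placeholders):
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2)
CONFIG_FILE=inventory/mycluster/hosts.ini \
  python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```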

The changes I made to my hosts.ini and group_vars files were:

  • Modified the hosts.ini inventory file generated by the Kubespray contrib inventory generation script to remove the etcd and master services from the second node of my cluster (I want to squeeze the most out of my limited hardware and don’t care much about HA at this point). The snippet below shows the original hosts.ini file with the lines I’ve commented out:
server1 ansible_host= ip=
server2 ansible_host= ip=
# We don't want server2 to act as master
# server2
# We don't want server2 to run etcd
# server2
  • Increased the RAM allocated to etcd from 512M to 768M by changing etcd_memory_limit: "512M" to etcd_memory_limit: "768M" in group_vars/etcd.yml
  • Enabled upstream DNS servers by uncommenting the lines below in group_vars/all/all.yml:
upstream_dns_servers:
-
-
  • Made kubespray generate a kubeconfig file on the computer used to run Kubespray by setting kubeconfig_localhost to true in group_vars/k8s-cluster/k8s-cluster.yml. This file will later be used to configure kubectl to access the cluster.
kubeconfig_localhost: true
  • Modified group_vars/k8s-cluster/addons.yml to enable the local volume provisioner, so persistent volumes can be used, and cert-manager, to later be able to automatically provision SSL certs using Let’s Encrypt:
local_volume_provisioner_enabled: true
cert_manager_enabled: true
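For reference, the group_vars overrides above boil down to something like this (the DNS servers shown are placeholders, as my actual values aren’t listed here):

```yaml
# group_vars/etcd.yml
etcd_memory_limit: "768M"

# group_vars/all/all.yml (placeholder DNS servers)
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4

# group_vars/k8s-cluster/k8s-cluster.yml
kubeconfig_localhost: true

# group_vars/k8s-cluster/addons.yml
local_volume_provisioner_enabled: true
cert_manager_enabled: true
```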

With the inventory files and parameters configured, it’s now time to run the ansible-playbook to bootstrap your kubernetes cluster:

$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --private-key=~/.ssh/id_rsa -K

The command above will generate tons of output. After a few minutes you will have your cluster up and running, and an artifacts directory will have been created in the same directory as your Ansible inventory (hosts.ini), containing an admin.conf file that will later be used to connect to your cluster using kubectl.

Connecting to your cluster

If you haven’t done so yet, install kubectl. On Ubuntu you can install it using snap:

sudo snap install kubectl --classic

Then export a KUBECONFIG environment variable pointing to the admin.conf file that was generated by Kubespray and use kubectl to verify your cluster nodes are up and running:

$ export KUBECONFIG=/home/leonardo/kubespray/inventory/mycluster/artifacts/admin.conf
$ kubectl get nodes
NAME      STATUS   ROLES         AGE     VERSION
server1   Ready    master,node   3d18h   v1.13.2
server2   Ready    node          3d18h   v1.13.2

You can also connect to the Kubernetes dashboard. Get the URL using kubectl cluster-info:

$ kubectl cluster-info 
Kubernetes master is running at
coredns is running at
kubernetes-dashboard is running at
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Open the kubernetes-dashboard url in your browser and ignore the self-signed certificate alert:

You’ll be taken to the authentication page:

Kubernetes Dashboard Auth Page

Get a token for the clusterrole-aggregation-controller service account with the command below and use it to log in to the dashboard:

kubectl -n kube-system describe secrets \
`kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
| awk '/token:/ {print $2}'
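The nested command lists the secrets in kube-system, picks the one whose name matches clusterrole-aggregation-controller, then extracts the token: field from its description. The inner awk filter works like this on a sample line (the secret name suffix is made up):

```shell
# Simulate one line of `kubectl -n kube-system get secrets` output and
# extract the first column (the secret name), as the inner awk does:
printf 'clusterrole-aggregation-controller-token-x7k2q  kubernetes.io/service-account-token  3  3d\n' |
  awk '/clusterrole-aggregation-controller/ {print $1}'
# prints: clusterrole-aggregation-controller-token-x7k2q
```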
Kubernetes Dashboard Home
Node resource usage

Deploying your first application!

If you can’t wait to deploy your first application, you can quickly get nginx running on your cluster using the commands below.

Deploy nginx using a sample deployment file:

$ kubectl apply -f
deployment.apps/nginx-deployment created
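The sample deployment file looks roughly like this. This is a sketch modelled on the nginx Deployment example from the Kubernetes docs, matching what kubectl describe reports further down (2 replicas, app=nginx labels, image nginx:1.7.9, port 80):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```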

Check the deployment status until the Available condition is True in the output of the command below:

$ kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Sat, 09 Feb 2019 18:54:50 -0600
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-76bf4969df (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-76bf4969df to 2

Now expose your service so you can access it from outside the cluster and check the port where your service will be available:

$ kubectl expose deployment/nginx-deployment --type="NodePort" --port 80
service/nginx-deployment exposed
$ kubectl get service nginx-deployment
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-deployment NodePort <none> 80:30489/TCP 6m40s

In my case the port where the service was exposed was 30489. Accessing any of your server IPs on that port in a browser will show the default nginx page:
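You can also check the exposed service from the command line; substitute one of your node IPs, and the NodePort reported for your service (30489 in my case):

```shell
# Fetch the nginx welcome page through the NodePort (NODE_IP is a
# placeholder for one of your nodes' addresses):
curl -s http://NODE_IP:30489 | grep -o '<title>.*</title>'
```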

Now that we’re done with our Kubernetes Hello World, we can clean up the deployment and the service used to expose it:

$ kubectl delete service nginx-deployment
service "nginx-deployment" deleted
$ kubectl delete deployment nginx-deployment
deployment.extensions "nginx-deployment" deleted

Check back later for the next posts in the series:

  1. Run a blog on kubernetes using Ghost
  2. Using dynamic DNS with Kubernetes and Route53