Setting up a Kubernetes cluster with Kubespray

Leonardo Souza Mario Bueno
6 min read · Feb 10, 2019


This is the first blog post in a series where I’ll show how I set up a Kubernetes cluster running on bare metal in my home lab to host a blog system and a few side projects I’m working on.

Hardware & Host OS Setup

For now, the hardware for the Kubernetes nodes is two i5 Chromeboxes with 4GB of RAM, which I bought for around C$300 with tax and shipping on eBay. I replaced their firmware with SeaBIOS and Chrome OS with Ubuntu Server 18.10 before installing Kubernetes on them using Kubespray.

Once I start adding more workloads to this cluster, I plan on adding a couple more Chromebox nodes and upgrading them to 16GB of RAM each so I can set up an HA cluster. Until then, I’m running a single-master cluster and allowing pods to be scheduled on the master node.
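
For context, the scheduling part comes for free with the inventory shown later: listing server1 under both the master and node groups keeps the master schedulable. If your master ends up with a NoSchedule taint anyway, it can be removed with a standard kubectl command once kubectl is configured (covered below); server1 here is just my node’s name:

$ kubectl taint nodes server1 node-role.kubernetes.io/master-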

Bootstrapping your cluster with Kubespray

Kubespray is an Ansible playbook, plus some utility scripts, that can be used to set up a Kubernetes cluster.

Before you can use Kubespray, you’ll need to install Ansible, which is easily done using apt, and configure your cluster nodes with fixed IPs, which I did by creating DHCP reservations on my router.
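
For reference, the Ansible install itself is quick; something along these lines should work on Ubuntu (Kubespray’s docs also suggest installing its pinned dependencies through pip from the requirements.txt in the repo, so use whichever you prefer):

$ sudo apt update
$ sudo apt install -y ansible
# Alternatively, from inside the kubespray checkout:
# $ sudo pip3 install -r requirements.txt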

You’ll also need to configure key-based SSH access on all of your nodes so Ansible can connect to your servers and provision your cluster. In my case the node IPs were 192.168.0.28 and 192.168.0.29, so I configured key-based access using:

$ ssh-copy-id user@192.168.0.28
$ ssh-copy-id user@192.168.0.29

With Ansible installed and the basic server setup done, it’s time to start using Kubespray. First, check out the kubespray project from GitHub. (If you’re using Ubuntu 18.10 the install will fail because 18.10 doesn’t have Docker CE available through apt, so use my kubespray fork, which carries a workaround for Docker CE until the problem is fixed.)

$ git clone git@github.com:kubernetes-sigs/kubespray.git

Follow the steps from Kubespray’s Getting Started Guide to create an inventory file for your cluster, and then review the inventory and default parameters, as you may want to modify some of them.
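
In case that guide moves around, the inventory creation boils down to copying the sample inventory and running the contrib inventory builder script with your node IPs (paths reflect the Kubespray layout at the time of writing):

$ cp -r inventory/sample inventory/mycluster
$ declare -a IPS=(192.168.0.28 192.168.0.29)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}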

The changes I did to my hosts.ini and group_vars files were:

  • Modified the hosts.ini inventory file generated by the Kubespray contrib inventory generation script to remove the etcd and master services from the second node of my cluster (I want to squeeze the most out of my limited hardware and don’t care much about HA at this point). The snippet below shows the original hosts.ini file with the lines I’ve commented out:
[all]
server1 ansible_host=192.168.0.28 ip=192.168.0.28
server2 ansible_host=192.168.0.29 ip=192.168.0.29
[kube-master]
server1
# We don't want server2 to act as master
# server2
[kube-node]
server1
server2
[etcd]
server1
# We don't want server2 to run etcd
# server2
[k8s-cluster:children]
kube-node
kube-master
[calico-rr]
  • Increased the RAM allocated to etcd from 512M to 768M by changing etcd_memory_limit: "512M" to etcd_memory_limit: "768M" in group_vars/etcd.yml
  • Enabled upstream DNS servers by uncommenting the lines below in group_vars/all/all.yml:
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
  • Made Kubespray generate a kubeconfig file on the computer used to run Kubespray by setting kubeconfig_localhost to true in group_vars/k8s-cluster/k8s-cluster.yml. This file will later be used to configure kubectl to access the cluster.
kubeconfig_localhost: true
  • Modified group_vars/k8s-cluster/addons.yml to enable the local volume provisioner, so persistent volumes can be used, and cert-manager, so SSL certs can later be provisioned automatically using Let’s Encrypt:
local_volume_provisioner_enabled: true
...
cert_manager_enabled: true

With the inventory files and parameters configured, it’s now time to run the Ansible playbook to bootstrap your Kubernetes cluster:

$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --private-key=~/.ssh/id_rsa -K

The command above will generate tons of output, and after a few minutes you will have your cluster up and running. An artifacts directory will also be created in the same directory where you saved your Ansible inventory (hosts.ini), containing an admin.conf file that will later be used to connect to your cluster using kubectl.
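
A quick sanity check that the kubeconfig landed where expected never hurts (the exact contents of artifacts/ can vary with your Kubespray settings):

$ ls inventory/mycluster/artifacts/
admin.conf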

Connecting to your cluster

If you haven’t done so already, install kubectl. On Ubuntu you can do it using snap:

$ sudo snap install kubectl --classic

Then export a KUBECONFIG environment variable pointing to the admin.conf file that was generated by Kubespray and use kubectl to verify your cluster nodes are up and running:

$ export KUBECONFIG=/home/leonardo/kubespray/inventory/mycluster/artifacts/admin.conf
$ kubectl get nodes
NAME      STATUS   ROLES         AGE     VERSION
server1   Ready    master,node   3d18h   v1.13.2
server2   Ready    node          3d18h   v1.13.2

You can also connect to the Kubernetes dashboard. Get the URL using kubectl cluster-info:

$ kubectl cluster-info 
Kubernetes master is running at https://192.168.0.28:6443
coredns is running at https://192.168.0.28:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.0.28:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Open the kubernetes-dashboard URL in your browser and ignore the self-signed certificate alert. You’ll be taken to the authentication page:

[Kubernetes Dashboard Auth Page]

Get a token for the clusterrole-aggregation-controller service account with the command below and use it to log in to the dashboard:

kubectl -n kube-system describe secrets \
`kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
| awk '/token:/ {print $2}'

[Kubernetes Dashboard Home]
[Node resource usage]

Deploying your first application!

If you can’t wait to deploy your first application, you can quickly get nginx running on your cluster using the commands below.

Deploy nginx using a sample deployment file:

$ kubectl apply -f https://k8s.io/examples/application/deployment.yaml
deployment.apps/nginx-deployment created

Check the deployment status until you see Available True under Conditions in the output of the command below:

$ kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sat, 09 Feb 2019 18:54:50 -0600
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector:               app=nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-76bf4969df (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set nginx-deployment-76bf4969df to 2
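
If you’d rather not eyeball the describe output, kubectl rollout status is a handy alternative that blocks until the rollout completes:

$ kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out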

Now expose the deployment as a service so you can access it from outside the cluster, and check the port where the service is available:

$ kubectl expose deployment/nginx-deployment --type="NodePort" --port 80
service/nginx-deployment exposed
$ kubectl get service nginx-deployment
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-deployment   NodePort   10.233.19.253   <none>        80:30489/TCP   6m40s

In my case the port where the service was exposed was 30489. Accessing any of your server IPs on that port in a browser will display the default nginx welcome page.
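
If you’d rather verify from the terminal than the browser, a quick curl against either node works too (30489 was my port; substitute whatever kubectl get service reported for yours):

$ curl -s http://192.168.0.28:30489/ | grep '<title>'
<title>Welcome to nginx!</title>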

Now that we’re done with our Kubernetes Hello World, we can clean up the deployment and the service used to expose it:

$ kubectl delete service nginx-deployment
service "nginx-deployment" deleted
$ kubectl delete deployment nginx-deployment
deployment.extensions "nginx-deployment" deleted

Check back later for the next posts in the series:

  1. Run a blog on kubernetes using Ghost
  2. Using dynamic DNS with Kubernetes and Route53
