Praki Prakash
Jul 3
(Header image: Multiple generations of stars in the Tarantula Nebula, by The Hubble Heritage Team, https://commons.wikimedia.org/w/index.php?curid=461570)

Introduction

It's really easy to provision a cluster with any of the hosted Kubernetes
providers. All it takes is a few clicks on a web page and a loaded
credit card, and your cluster materializes just like that! But what if
you want to save a few bucks and set up a cluster on your home or
office network, on some hardware you have lying around?

There appear to be a number of solutions that people have
developed. After taking a look at some of them, I decided to do
something that catered to my specific needs. If this feels a bit
like NIH syndrome and you don't like that sort of thing, please hit
the back button now!

If you are still here, great! I will show you how to bring up a
Kubernetes cluster with Vagrant and Ansible. There are some pitfalls in
using Vagrant boxes as Kubernetes nodes, mostly attributable to Vagrant's default networking configuration, but the upside is a better understanding of how to troubleshoot Kubernetes networking. In the end, we have a single shell command that creates the cluster and another that installs WordPress and MySQL.

My hardware is an old Sun server running Ubuntu 18.04. You can use any decent hardware that can run VirtualBox VMs. All the scripts are tailored for Ubuntu 18.04; adapting them to your distribution of choice should be straightforward but will require some modifications.

All the source code is available in the GitHub repo kube.


The TL;DR Version

git clone https://github.com/MonadicT/kube
  • Review and adjust the cluster.conf file to suit your needs; it defines the variables and their default values (discussed in the longer version below).
  • Ensure you have Vagrant installed
  • Ensure you have Ansible installed
sudo apt install ansible
  • Change directory to the cloned repo
  • Create your cluster with (be patient! some steps take time)
./create-cluster.sh
  • Verify cluster build with
./verify-cluster.sh
  • Deploy WordPress
./deploy-wordpress.sh

Review the output from ./verify-cluster.sh. If everything worked as
expected, you should have a fully functional Kubernetes cluster on
hand. Your WordPress deployment can be viewed at
http://<MASTER_IP>:31234.

Longer Version

Initial Preparation

  • Clone the kube repo using git.
  • Install Ansible if needed. If you are on an Ubuntu system, sudo apt install ansible should do the trick.
  • Install Vagrant if needed; see the note below.
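
If you are on Ubuntu like me, one way to get both VirtualBox and Vagrant is from the distribution packages. This is just a sketch; the packaged versions can lag behind the upstream releases, so check the Vagrant and VirtualBox sites if you need something newer.

sudo apt install virtualbox   # hypervisor that will run the Vagrant boxes
sudo apt install vagrant      # Vagrant itself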

Creating Virtual Machines

Cluster formation requires nodes to run the Kubernetes master and workers. We will create the necessary virtual machines using Vagrant and VirtualBox. Our cluster will have a single master (not something you would ever do in a production environment), three worker nodes and a node running an NFS server to act as an external storage provisioner.

The cluster.conf file can be edited to alter the number of worker nodes created. You can also change the IP addresses of the master, worker and NFS nodes if the default values clash with your network. Note that I chose to use bridged networking and expose all the nodes to my home LAN. I also use the address range 192.168.1.8 to 192.168.1.13, which are static IP addresses in my environment. Change these by editing cluster.conf, not the Vagrantfile directly.
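
For reference, a minimal cluster.conf might look something like the sketch below. Only master_ip_nw is referenced by name later in the playbooks; the other variable names here are illustrative, so check the file in the repo for the real ones.

# cluster.conf (sketch; only master_ip_nw is a name confirmed by the playbooks)
master_ip_nw=192.168.1.0/24   # network the nodes' bridged interfaces live on
master_ip=192.168.1.8         # static IP of the master node (illustrative name)
worker_count=3                # number of worker nodes (illustrative name)
nfs_ip=192.168.1.13           # static IP of the NFS node (illustrative name)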

vagrant up will bring up all the nodes.

It’s also easy to tear down the cluster using Vagrant’s destroy command.

vagrant destroy -f will destroy the cluster.

Note that the Vagrantfile is recreated whenever create-cluster.sh is executed. Any manual changes made to the Vagrantfile will be lost!

Ansible Setup

Ansible happens to be a great tool for automating commands and is invaluable when you need to work with multiple machines, as we do here. Ansible can be installed on most operating systems; for my Ubuntu host machine, the following does the trick.

sudo apt install ansible

When Vagrant creates the boxes, it sets up a configuration directory in .vagrant with SSH connection details for all the machines. This lets you connect to a machine with vagrant ssh master. However, Ansible requires more traditional SSH access to the nodes. Luckily, vagrant ssh-config outputs a configuration that is usable by OpenSSH clients. Our streak of luck continues with Ansible, which allows customization of the SSH command it uses. We specify the location of ssh_config with the -F option and Ansible can execute its SSH commands.

This creates our SSH config file ssh_config.

vagrant ssh-config > ssh_config

This tells Ansible to use ssh_config.

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -F ssh_config

With this accomplished, we can run the playbooks to install Kubernetes software on the nodes.

Ansible Inventory File

Ansible's configuration file resides in /etc/ansible/ansible.cfg. By default, Ansible operates on the inventory of machines maintained in /etc/ansible/hosts. With a local copy of ansible.cfg, we can modify Ansible's behavior as needed. Here is the changed part of the configuration, which lets us maintain the inventory in a local hosts file.

[defaults]
# some basic default values...
inventory      = hosts
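
For reference, the generated hosts inventory groups the nodes roughly as shown below. The masters and workers group names are the ones the scripts use later; the nfs group name is my guess.

[masters]
master

[workers]
worker1
worker2
worker3

[nfs]
nfs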

Note that the hosts file is recreated when you execute create-cluster.sh. To change the number of workers, the cluster.conf file should be modified.
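
With ssh_config, ansible.cfg and hosts in place, a quick sanity check confirms that Ansible can reach every node over SSH:

ansible all -m ping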

Create Kubernetes Master

Executing the following command configures the Kubernetes master.

ansible-playbook kmaster.yml

There are a few things worth noting in kmaster.yml.

- name: Pick IP from the same network as master
  set_fact: hostip="{{ ansible_all_ipv4_addresses|ipaddr(master_ip_nw) }}"

The task above selects an IPv4 address from the network specified by master_ip_nw in cluster.conf. In our case, Vagrant creates two Ethernet interfaces, enp0s3 and enp0s8. Flannel, the pod network I am using in the cluster, picks the first network interface, enp0s3, which is Vagrant's NAT interface and unsuitable for Kubernetes. We need Flannel to use the enp0s8 interface so that packets get routed as expected.
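
The playbook takes care of this, but if you ever need to do it by hand, the usual fix is to pass Flannel the interface explicitly via flanneld's --iface argument in the kube-flannel manifest. A rough sketch follows; the manifest URL is the commonly used upstream location and may differ from what the playbook uses.

# Fetch the Flannel manifest, point flanneld at the bridged interface, then apply it
curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Edit kube-flannel.yml: under the flanneld container's args, add
#   - --iface=enp0s8
kubectl apply -f kube-flannel.yml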

The following Ansible task instructs the kernel to send packets received by the VirtualBox bridge interface to iptables for processing.

- name: Send bridge packets to iptables for processing
  block:
    - lineinfile:
        path: /etc/sysctl.conf
        line: net.bridge.bridge-nf-call-iptables=1
        create: yes
    - lineinfile:
        path: /etc/sysctl.conf
        line: net.bridge.bridge-nf-call-ip6tables=1
        create: yes
    - command: sysctl net.bridge.bridge-nf-call-iptables=1
    - command: sysctl net.bridge.bridge-nf-call-ip6tables=1
  become: true

Lastly, here we initialize the cluster, passing the pod network CIDR and the IP address the API server should advertise.

- name: Initialize cluster
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address "{{ hostip[0] }}"
  when: inited.rc > 0
  become: true
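
For reference, the standard kubeadm follow-up that makes kubectl usable as a regular user on the master is shown below; I assume the playbook does something equivalent.

# Standard post-init steps suggested by kubeadm itself
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config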

Create Kubernetes Workers

ansible-playbook kworker.yml

kworker.yml is the Ansible playbook that configures the worker<N> nodes. It prepares the worker nodes by installing the Kubernetes and NFS client software.

Note that the worker playbook doesn't run the cluster join command. After the playbook finishes, create-cluster.sh obtains the token needed to join the cluster and runs the join command on each worker node, as shown below.

# Join all workers to cluster
JOIN_CMD=$(ansible masters -m shell -b -a "kubeadm token create --print-join-command"|awk '{sub(/.*>>/, "");print}')
ansible workers -m shell -b -a "$JOIN_CMD"
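
Once the join command has run everywhere, you can check from the master that all the workers registered; kubectl reads the kubeadm-generated admin kubeconfig here, and nodes may report NotReady for a short while until the Flannel pods come up.

ansible masters -m shell -b -a "kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf"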

Create Storage Provisioner

A storage provisioner is handy to have in our cluster. I chose NFS as the external storage provisioner. The nfs.yml playbook configures the nfs box.
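
Since the workers already have the NFS client software installed, a quick way to confirm that they can see the server's exports is showmount; what exactly is exported depends on how nfs.yml sets up the box.

ansible workers -m shell -b -a "showmount -e <NFS_IP>"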

Bash it all in!

The above steps are the major building blocks of my approach. create-cluster.sh has all the glue to pull them together, so we can create the cluster with the following command.

./create-cluster.sh

Once we have the cluster created, we can run the command below and examine the output produced for any errors. In particular, DNS lookups must work or our cluster will be unusable.

./verify-cluster.sh
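
If you want to poke at DNS yourself, the usual trick is a throwaway pod run from the master (or wherever your kubeconfig points at the cluster); busybox 1.28 is commonly used because its nslookup behaves well.

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default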

Deploy WordPress and MySQL

Execute ./deploy-wordpress.sh to deploy WordPress and MySQL in your cluster. Once it completes, you can navigate to http://<MASTER_IP>:31234 and proceed with the WordPress installation and configuration.
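
If the page doesn't come up, listing the workloads and services is a good first step; the WordPress service should show a NodePort mapping to 31234.

kubectl get pods,svc   # run from the master (or wherever your kubeconfig points)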


Closing Remarks

So, that concludes our foray into provisioning a Kubernetes cluster. Being able to create multiple clusters and destroy them at will (vagrant destroy -f) has been very useful in my work. Hopefully, you will give this a try and find it helpful.

