Spring Boot CI/CD on Kubernetes using Terraform, Ansible and GitHub: Part 5

Martin Hodges
12 min read · Nov 6, 2023


Part 5: Creating a Kubernetes cluster

This is part of a series of articles that creates a project to implement automated provisioning of cloud infrastructure in order to deploy a Spring Boot application to a Kubernetes cluster using CI/CD. In this part we configure the Virtual Private Server (VPS) nodes to form a Kubernetes cluster using Ansible.

Follow from the start — Introduction

If you have been following along with this series, you will now have 3 VPSs set up and configured with a base-level configuration. We will now enhance that configuration to install a Kubernetes cluster.

You can find the code for this part here: https://github.com/MartinHodges/Quick-Queue-IaC/tree/part5

Ansible Roles

We are building a 3 node cluster with one master and two worker nodes.

All nodes require Docker to be installed so that Docker images pulled from Docker Hub can be run in Docker containers.

From there, the master and worker nodes diverge, with most of the Kubernetes cluster definition taking place on the master. Once the master is set up, the worker nodes can then be joined into the cluster.

In order to manage these similarities and differences, I am using Ansible roles to manage our configuration files. By using roles, we can overlay configurations based on the role of the server we are configuring.

We will define four roles:

  • docker
  • k8s_base
  • k8s_master
  • k8s_node

We will then assign these roles to our nodes:

  • master = docker, k8s_base, k8s_master
  • node = docker, k8s_base, k8s_node

Ansible Configuration

We will create our Kubernetes configuration in another folder called ansible/k8s. All our configuration will be added to this folder.

ansible/k8s/ansible.cfg

Like the bootstrap configuration, we need to tell Ansible where to find its inventory file, which user to use and which key to use.

Create ansible/k8s/ansible.cfg:

[defaults]
inventory = ../inventory
private_key_file = ~/.ssh/qq_rsa
remote_user = kates

ansible/k8s/k8s.yml

Now we tell Ansible which roles each node should be configured with.

Create ansible/k8s/k8s.yml:

---

- hosts: k8s_master
  become: true
  roles:
    - docker
    - k8s_base
    - k8s_master

- hosts: k8s_node
  become: true
  roles:
    - docker
    - k8s_base
    - k8s_node

You can see that become: true means all of this configuration is applied as the root user (the kates user is enabled to use sudo).

It also defines the roles for each node as described above, using the roles parameter.

This configuration cannot be applied without the roles being defined. Ansible expects all roles to be defined in a roles subfolder.

Create the ansible/k8s/roles folder. To prevent errors when trying out plays, create the following:

  • ansible/k8s/roles/docker
  • ansible/k8s/roles/k8s_base
  • ansible/k8s/roles/k8s_master
  • ansible/k8s/roles/k8s_node
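
If you are working on the command line, you can create these empty role folders in one go from within the ansible/k8s folder:

mkdir -p roles/docker roles/k8s_base roles/k8s_master roles/k8s_node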

docker role

First we will create the docker role in the ansible/k8s/roles/docker folder. This will consist of:

  • tasks/main.yml defining the tasks of the role
  • vars/main.yml defining a set of constants for the tasks
  • handlers/main.yml defining the handlers for the role

Create ansible/k8s/roles/docker and then create the three subfolders.

roles/docker/tasks/main.yml

- name: Install Docker
  tags:
    - docker
    - k8s_base
    - k8s_master
  block:
    - name: Debian | Configure Sysctl
      sysctl:
        name: "net.ipv4.ip_forward"
        value: "1"
        state: present

    - name: Debian | Install Prerequisites Packages
      package: name={{ item }} state=present force=yes
      loop: "{{ docker_dependencies }}"

    - name: Debian | Add GPG Keys
      apt_key:
        url: "{{ docker_url_apt_key }}"

    - name: Debian | Add Repo Source
      apt_repository:
        repo: "{{ docker_repository }}"
        update_cache: yes

    - name: Debian | Install Specific Version of Docker Packages
      package: name={{ item }} state=present force=yes install_recommends=no
      loop: "{{ docker_packages }}"
      notify:
        - start_docker

    - name: Enable CRI API
      lineinfile:
        path: /etc/containerd/config.toml
        regexp: "disabled_plugins.*"
        line: "disabled_plugins = []"
        state: present

    - name: Configure containerd - get default config
      shell: containerd config default | tee /etc/containerd/config.toml

    - name: Enable SystemdCgroup
      shell: sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
      register: containerd

    - name: Restart containerd after config change
      service:
        name: containerd
        state: restarted
      when: containerd.changed

    - name: Debian | Start and Enable Docker Service
      service:
        name: docker
        state: started
        enabled: yes

This demonstrates the use of a block that contains further tasks, effectively creating sub-tasks. The parent task has tags of docker, k8s_base and k8s_master, which means this block is applied if Ansible is asked to run any of those tags.

Breaking it down into sections:

    - name: Debian | Configure Sysctl
      sysctl:
        name: "net.ipv4.ip_forward"
        value: "1"
        state: present

Docker requires IPv4 forwarding to be enabled.
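
To confirm the setting has taken effect on a node, you can run the following on the server itself; it should report net.ipv4.ip_forward = 1:

sysctl net.ipv4.ip_forward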

    - name: Debian | Install Prerequisites Packages
      package: name={{ item }} state=present force=yes
      loop: "{{ docker_dependencies }}"

This installs the list of docker_dependencies that are defined in the vars/main.yml file (see later).

    - name: Debian | Add GPG Keys
      apt_key:
        url: "{{ docker_url_apt_key }}"

    - name: Debian | Add Repo Source
      apt_repository:
        repo: "{{ docker_repository }}"
        update_cache: yes

    - name: Debian | Install Specific Version of Docker Packages
      package: name={{ item }} state=present force=yes install_recommends=no
      loop: "{{ docker_packages }}"
      notify:
        - start_docker

These three tasks add the Docker GPG key and repository and then install Docker itself.

Note that the start_docker handler is notified of the updates. This ensures Docker is started once the play has finished.

The containerd runtime installed with Docker on Debian 11 ships with its Container Runtime Interface (CRI) plugin disabled. Kubernetes requires a CRI compatible container runtime, so to use this installation with Kubernetes, additional configuration of the containerd service is required.

    - name: Enable CRI API
      lineinfile:
        path: /etc/containerd/config.toml
        regexp: "disabled_plugins.*"
        line: "disabled_plugins = []"
        state: present

    - name: Configure containerd - get default config
      shell: containerd config default | tee /etc/containerd/config.toml

    - name: Enable SystemdCgroup
      shell: sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
      register: containerd

First the containerd configuration file (config.toml) is updated using the lineinfile module so that no plugins (including the CRI plugin) are disabled. The containerd binary is then used to write out the required default configuration.

containerd is then switched to the systemd cgroup driver (SystemdCgroup = true) so that system resources can be managed consistently by Kubernetes. The result of this change is captured with register: containerd, which the following task uses to decide whether the containerd service needs to be restarted.

    - name: Restart containerd after config change
      service:
        name: containerd
        state: restarted
      when: containerd.changed

This task restarts containerd. The service module is used directly here, rather than relying on a handler, because containerd must be restarted before Docker is started.

    - name: Debian | Start and Enable Docker Service
      service:
        name: docker
        state: started
        enabled: yes

Finally, the Docker service is started and enabled so that it comes back up after a reboot.
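
If you want to check the result on a node, the following (run on the server) should show both services active and SystemdCgroup = true in the containerd configuration:

sudo systemctl status containerd docker
grep SystemdCgroup /etc/containerd/config.toml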

We need to define the variables that list the packages to be deployed.

Create roles/docker/vars/main.yml:

docker_dependencies:
  - ca-certificates
  - gnupg
  - gnupg2
  - gnupg-agent
  - software-properties-common
  - apt-transport-https
  - curl
docker_packages:
  - docker-ce
docker_url_apt_key: "https://download.docker.com/linux/debian/gpg"
docker_repository: "deb [arch=amd64] https://download.docker.com/linux/debian bullseye stable"

Create roles/docker/handlers/main.yml:

- name: start_docker
  service:
    name: docker
    state: started
    enabled: yes

This handler starts Docker once the configuration tasks have completed.

As Ansible is declarative and its modules are idempotent, it is safe to run this now before extending it for Kubernetes. From within the k8s folder:

ansible-playbook k8s.yml --tags=docker

The --tags argument ensures that Ansible only runs those tasks tagged with the given values.
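
Two related options are useful while iterating on the roles: --list-tasks shows which tasks a tag will select without running anything, and --check performs a dry run (note that command and shell tasks are skipped in check mode, so the dry run is only indicative):

ansible-playbook k8s.yml --tags=docker --list-tasks
ansible-playbook k8s.yml --tags=docker --check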

k8s_base role

Now we will create the first part of the Kubernetes cluster. There are some applications that all nodes require.

First we create the folder to hold the role: ansible/k8s/roles/k8s_base.

This role will consist of the following folders and files within the folder just created:

  • tasks/main.yml
  • vars/main.yml

Create roles/k8s_base/tasks/main.yml:


- name: Set up base config for k8s master and node
  tags:
    - k8s_base
    - k8s_master
    - k8s_node
  block:
    - name: Disable SWAP K8S will not work with swap enabled (1/2)
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Debian | Remove SWAP from fstab K8S will not work with swap enabled (2/2)
      mount:
        name: "{{ item }}"
        fstype: swap
        state: absent
      with_items:
        - swap
        - none

    - name: Debian | Add GPG Key
      apt_key:
        url: "{{ k8s_url_apt_key }}"
        state: present
      register: add_repository_key

    - name: Debian | Add Kubernetes Repository
      apt_repository:
        repo: "{{ k8s_repository }}"
        update_cache: yes

    - name: Debian | Install Dependencies
      package: name={{ item }} state=present force=yes install_recommends=no
      loop: "{{ k8s_dependencies }}"

    - name: Debian | Install Kubernetes Packages
      package: name={{ item }} state=present force=yes install_recommends=no
      loop: "{{ k8s_packages }}"
Like the Docker role, this is tagged so that it runs when any of the k8s_base, k8s_master or k8s_node tags is requested.

A block of tasks is then defined.

    - name: Disable SWAP K8S will not work with swap enabled (1/2)
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Debian | Remove SWAP from fstab K8S will not work with swap enabled (2/2)
      mount:
        name: "{{ item }}"
        fstype: swap
        state: absent
      with_items:
        - swap
        - none

Kubernetes will not work with memory swap space enabled. These two tasks disable swap and remove any swap entries from fstab. I notice that Binary Lane servers tend not to have swap space enabled, but this makes sure that is the case.

The first task uses the command module, which seems to break the idea of a declarative definition; however, combined with the when condition it only runs when swap is actually enabled, which keeps the play idempotent.

The mount task is an example of setting multiple things at a time (with_items) and the state: absent line means ‘make sure it is not there’.
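
You can confirm the result on a node with the commands below; swapon --show should print nothing and free -m should report 0 swap:

swapon --show
free -m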

    - name: Debian | Add GPG Key
      apt_key:
        url: "{{ k8s_url_apt_key }}"
        state: present
      register: add_repository_key

    - name: Debian | Add Kubernetes Repository
      apt_repository:
        repo: "{{ k8s_repository }}"
        update_cache: yes

    - name: Debian | Install Dependencies
      package: name={{ item }} state=present force=yes install_recommends=no
      loop: "{{ k8s_dependencies }}"

    - name: Debian | Install Kubernetes Packages
      package: name={{ item }} state=present force=yes install_recommends=no
      loop: "{{ k8s_packages }}"

In the same way as we did for Docker, this adds the apt repository so that apt can pick up Kubernetes. It then installs the lists of dependencies and packages from the variable definitions.

Create roles/k8s_base/vars/main.yml:

k8s_dependencies:
  - kubernetes-cni
  - kubelet
k8s_packages:
  - kubeadm
k8s_url_apt_key: "https://packages.cloud.google.com/apt/doc/apt-key.gpg"
k8s_repository: "deb https://apt.kubernetes.io/ kubernetes-xenial main"

This lists the information required to load the base level of packages for Kubernetes.

It is possible to run this now before extending it further. From within the k8s folder:

ansible-playbook k8s.yml --tags=k8s_base

The --tags argument ensures that Ansible only runs those tasks tagged with the given values.
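
Once this has run, you can check the Kubernetes packages are installed on every node with an Ansible ad-hoc command from the same folder; each node should report its kubeadm version:

ansible all -m command -a "kubeadm version"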

k8s_master role

This role configures the master node. In this solution, there is a single master node (Kubernetes allows multiple master nodes) and so this role is only applied to a single node.

The k8s_base configuration loads kubeadm, kubectl and kubelet, which allow the master to create and manage the cluster. These will be used in this configuration.

This role will consist of:

  • tasks/main.yml

Create ansible/k8s/roles/k8s_master and then create the subfolder.

Create roles/k8s_master/tasks/main.yml:

- name: Set up config for k8s master
  tags:
    - k8s_master
  block:
    - name: Debian | Initialise the Kubernetes cluster using kubeadm
      become: true
      command: kubeadm init --pod-network-cidr={{ k8s_pod_network }}
      args:
        creates: "{{ k8s_admin_config }}"

    - name: Debian | Setup kubeconfig for {{ k8s_user }} user
      file:
        path: "{{ k8s_user_home }}/.kube"
        state: directory
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0750"

    - name: Debian | Copy {{ k8s_admin_config }}
      become: true
      copy:
        src: "{{ k8s_admin_config }}"
        dest: "{{ k8s_user_home }}/.kube/config"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0600"
        remote_src: yes

    - name: Debian | Download {{ calico_operator_url }}
      get_url:
        url: "{{ calico_operator_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_operator_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"

    - name: Debian | Download {{ calico_net_url }}
      get_url:
        url: "{{ calico_net_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_net_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"

    - name: Debian | Install calico operator {{ calico_operator_config }}
      become: false
      command: kubectl create -f "{{ k8s_user_home }}/{{ calico_operator_config }}"

    - name: Debian | Set CALICO_IPV4POOL_CIDR to {{ k8s_pod_network }}
      replace:
        path: "{{ k8s_user_home }}/{{ calico_net_config }}"
        regexp: "192.168.0.0/16"
        replace: "{{ k8s_pod_network }}"

    - name: Debian | Install calico pod network {{ calico_net_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ calico_net_config }}"

    - name: Debian | wait for k8s API port to open
      wait_for:
        port: 6443

    - name: Debian | Generate join command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Debian | Copy join command to local file
      become: false
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="{{ k8s_token_file }}"

This role defines the configuration that is run for the --tags=k8s_master option.

The configuration is described below in sections.

    - name: Debian | Initialise the Kubernetes cluster using kubeadm
      become: true
      command: kubeadm init --pod-network-cidr={{ k8s_pod_network }}
      args:
        creates: "{{ k8s_admin_config }}"

This initialises the cluster and defines an internal pod network that is only accessible from within the cluster. The creates argument makes the task idempotent: kubeadm init is skipped if the admin config file already exists.

A pod is a Kubernetes abstraction of a container. Normally a pod only includes one container but it can contain more, such as sidecar containers that assist the main container.

Containers talk to each other across the cluster through the internal pod network. Kubernetes allocates each container/pod its own IP address within the pod network. Note that Kubernetes does not natively create the pod network (see later).

    - name: Debian | Setup kubeconfig for {{ k8s_user }} user
      file:
        path: "{{ k8s_user_home }}/.kube"
        state: directory
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0750"

    - name: Debian | Copy {{ k8s_admin_config }}
      become: true
      copy:
        src: "{{ k8s_admin_config }}"
        dest: "{{ k8s_user_home }}/.kube/config"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0600"
        remote_src: yes

This section creates a .kube folder in the kates user’s home folder and copies in the cluster admin configuration, so that the kates user can use kubectl. Note that there are a number of places where a pre-defined variable is used (e.g. {{ k8s_user }}). These are defined in the global vars file (see later).
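
At this point in the play, the kates user on the master can already run kubectl. If you were to log in now and run:

kubectl get nodes

you would typically see the master in a NotReady state, because no pod network has been installed yet. This is why the Calico installation that follows is needed.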

    - name: Debian | Download {{ calico_operator_url }}
      get_url:
        url: "{{ calico_operator_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_operator_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"

    - name: Debian | Download {{ calico_net_url }}
      get_url:
        url: "{{ calico_net_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_net_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"

    - name: Debian | Install calico operator {{ calico_operator_config }}
      become: false
      command: kubectl create -f "{{ k8s_user_home }}/{{ calico_operator_config }}"

    - name: Debian | Set CALICO_IPV4POOL_CIDR to {{ k8s_pod_network }}
      replace:
        path: "{{ k8s_user_home }}/{{ calico_net_config }}"
        regexp: "192.168.0.0/16"
        replace: "{{ k8s_pod_network }}"

    - name: Debian | Install calico pod network {{ calico_net_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ calico_net_config }}"

This is a big section. Kubernetes requires a network overlay that allows the pods to communicate, routing the requests within and across nodes. There are several options for this overlay, including Flannel and Calico.

In this section you can see that Calico has been chosen. First the operator and network manifests are downloaded, the default pod CIDR in the manifest is replaced with {{ k8s_pod_network }}, and then both are loaded into the cluster using kubectl.
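
Once the manifests have been applied, you can watch Calico start up from the master (as the kates user) with something like the following; all Calico pods should eventually reach the Running state:

kubectl get pods -A | grep calico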

    - name: Debian | wait for k8s API port to open
      wait_for:
        port: 6443

This ensures that the Kubernetes management API has started and is ready to accept instructions from kubectl before any further configuration takes place.

    - name: Debian | Generate join command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Debian | Copy join command to local file
      become: false
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="{{ k8s_token_file }}"

Once the Kubernetes API is up and running, kubeadm is used to obtain the join command instruction to join worker nodes into the cluster. This is saved to a local file on the development machine for configuring the worker nodes.
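
The saved file contains a single kubeadm join command. Its exact contents differ for every cluster, but it follows this general shape (the address, token and hash below are placeholders):

kubeadm join <master ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>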

Global Vars

The previous play made use of some global variables. These are defined in a separate file.

Create ansible/k8s/group_vars/all (Ansible automatically loads variables from a group_vars folder alongside the playbook, and the all file applies to every host):

k8s_pod_network: "192.168.0.0/16"
k8s_user: "kates"
k8s_user_home: "/home/{{ k8s_user }}"
k8s_token_file: "join-k8s-command"
k8s_admin_config: "/etc/kubernetes/admin.conf"
k8s_kubelet_config: "/etc/kubernetes/kubelet.conf"
calico_operator_url: "https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml"
calico_operator_config: "tigera-operator.yaml"
calico_net_url: "https://docs.projectcalico.org/manifests/calico.yaml"
calico_net_config: "calico.yaml"

You should now be able to configure the master node using:

ansible-playbook k8s.yml --tags=k8s_master

This may take a while as it downloads its images.

You now have a working master node.

k8s_node role

This role configures the worker nodes. In this solution, there are two worker nodes but you can have any number.

The k8s_base configuration loads kubelet, which is the main Kubernetes agent that runs on every node. It will be used in this configuration.

This role will consist of:

  • tasks/main.yml

Create ansible/k8s/roles/k8s_node/tasks/main.yml:

- name: Debian | Copy {{ k8s_token_file }} to server location
  tags: k8s_node
  copy:
    src: "{{ k8s_token_file }}"
    dest: "{{ k8s_user_home }}/{{ k8s_token_file }}.sh"
    owner: "{{ k8s_user }}"
    group: "{{ k8s_user }}"
    mode: "0750"

- name: Debian | Join the node to cluster unless file {{ k8s_kubelet_config }} exists
  tags: k8s_node
  become: true
  command: sh "{{ k8s_user_home }}/{{ k8s_token_file }}.sh"
  args:
    creates: "{{ k8s_kubelet_config }}"

The node is much simpler than the master. With kubelet already installed, it only requires the node to be given the details it needs to connect to the cluster. These details are held in the join file created by the k8s_master configuration and copied to the development machine; that file is now copied up to each worker node and executed.
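
If a node fails to join, it is worth checking the kubelet service on that node:

sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail

A common cause is an expired join token (they are only valid for 24 hours by default), in which case kubeadm token create --print-join-command can be re-run on the master and the k8s_node play repeated.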

You should now be able to configure the worker nodes using:

ansible-playbook k8s.yml --tags=k8s_node

Test the cluster

It is possible to connect into your cluster from your development machine but to keep things simple, I decided to access my cluster from within the master node. To check your cluster, ssh into the master node with:

ssh -i ~/.ssh/qq_rsa kates@<master ip address>

Once you are logged on, you can now enter:

kubectl get nodes 

This lists all the nodes in the cluster. You should see an output like this:

NAME            STATUS   ROLES           AGE     VERSION
qq-k8s-master   Ready    control-plane   50m     v1.28.2
qq-k8s-node-1   Ready    <none>          9m31s   v1.28.2
qq-k8s-node-2   Ready    <none>          9m31s   v1.28.2

If this is what you see, congratulations! You now have the tools to create a 3 node Kubernetes cluster using Terraform, Ansible and the Binary Lane cloud service provider!
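
The <none> under ROLES for the worker nodes is normal: kubeadm only labels the control-plane node. If you would like the workers to show a role, you can optionally label them from the master:

kubectl label node qq-k8s-node-1 node-role.kubernetes.io/worker=
kubectl label node qq-k8s-node-2 node-role.kubernetes.io/worker=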

Now we need to put your new cluster to work.

Summary

In this article we used Ansible to set up three nodes of a Kubernetes cluster. We used Ansible roles to define the different configurations required for the nodes. In addition to roles, this also demonstrated the use of triggered tasks, global variables, tags and more Ansible modules.

Now that we have a working Kubernetes cluster, we can put it to use.

Series Introduction

Previous — Configuring Servers using Ansible

Next — Creating a Persistent Volume
