How to Create a Kubernetes Cluster Using Kubespray on Google Compute Instances


Greetings, everyone! It’s great to be back with a fresh training tutorial.

You can download all the Terraform files from the GitHub repo.

Today we’ll create a K8s cluster using Kubespray, so we first need to provision the infrastructure on GCP.

First of all, we’ll create the instances using Terraform. Then we’ll configure our cluster with Ansible from the control plane machine. You can find details at https://kubespray.io/#/ and https://github.com/kubernetes-sigs/kubespray

***

1- USING TERRAFORM FOR INFRASTRUCTURE

In this project, I used Terraform to create the infrastructure shown below.

As shown in the picture, we need 4 Google Compute Engine instances. One of them will have Ansible installed on it. We will connect to this virtual machine, named controlplane, via VS Code and configure the other VMs with Ansible.
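The full resource definitions are in the repo’s Terraform files; as an orientation, a minimal sketch of the controlplane instance could look like the one below. The resource and variable names here are illustrative, not necessarily the ones used in the repo.

# Minimal sketch of the controlplane VM (illustrative, see the repo for the actual definition)
resource "google_compute_instance" "controlplane" {
  name         = "controlplane"
  machine_type = var.machine_type   # e.g. e2-standard-2
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = var.image             # ubuntu-os-cloud/ubuntu-2004-lts
    }
  }

  network_interface {
    network = "default"
    access_config {}                # external IP, so we can reach it from VS Code
  }
}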

We will also define the necessary firewall rules for the master and worker nodes in the Terraform files. During the installation with Ansible, the nodes need to reach the internet for package updates, so we also create a NAT gateway. And in order to install Ansible and the required pip packages on the controlplane, we assign it an external IP. You could also do this manually, but I have automated it here. With the “remote-exec” provisioner, we also install the packages that Kubespray will require while the VM is being created.

  # ANSIBLE CONTROLPLANE - main.tf
  ...
  # Install pip and fetch Kubespray (release-2.22) together with its Python requirements
  inline = [
    "sudo apt-get update -y",
    "sudo apt install python3-pip -y",
    "git clone https://github.com/kubernetes-sigs/kubespray.git",
    "cd kubespray && git checkout release-2.22",
    "sudo pip install -r ~/kubespray/requirements-2.12.txt",
  ]
}
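The firewall rules and the NAT gateway mentioned above are defined in the repo’s Terraform files as well; a rough sketch is shown below. The resource names, port list, and CIDR range are assumptions for illustration, so check the repo for the actual rules.

# Illustrative firewall rule and Cloud NAT (names, ports and CIDR are assumptions)
resource "google_compute_firewall" "k8s_internal" {
  name    = "k8s-internal"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["22", "6443", "2379-2380", "10250-10259", "30000-32767"]
  }

  source_ranges = ["10.250.0.0/24"]   # internal node subnet used in this article
}

resource "google_compute_router" "router" {
  name    = "k8s-router"
  region  = var.region
  network = "default"
}

resource "google_compute_router_nat" "nat" {
  name                               = "k8s-nat"
  router                             = google_compute_router.router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}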

In addition, the SSH public key must be copied to the nodes so that Ansible can access them during the installation, so we handle this with Terraform as well.

  metadata = {
    sshKeys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
  }

You can edit the variables in the auto.tfvars file according to your own GCP account.

region               = "us-east1"
zone                 = "us-east1-b"
project              = "<YOUR PROJECT ID>"              # !!
machine_type         = "e2-standard-2"
image                = "ubuntu-os-cloud/ubuntu-2004-lts"
gce_ssh_user         = "<YOUR SSH USER NAME>"           # !!
gce_ssh_pub_key_file = "~/.ssh/abbabe.pub"              # Public key location on your PC
gce_ssh_pv_key_file  = "~/.ssh/abbabe"                  # Private key location
gce_service_account  = "~/.ssh/named-territory-35.json" # Credentials location

tags = ["worker1", "worker2"]
num  = 2
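These values feed variable declarations along the following lines (a sketch; the repo’s variables.tf is the authoritative version).

# Sketch of the matching variable declarations
variable "region" { type = string }
variable "zone" { type = string }
variable "project" { type = string }
variable "machine_type" { type = string }
variable "image" { type = string }
variable "gce_ssh_user" { type = string }
variable "gce_ssh_pub_key_file" { type = string }
variable "gce_ssh_pv_key_file" { type = string }
variable "gce_service_account" { type = string }
variable "tags" { type = list(string) }
variable "num" { type = number }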

Useful links for creating an SSH key, GCP credentials, and connecting with VS Code:

Link-1 SSH into Remote VM with VS Code

Link-2 How to create a Google Cloud Service Account

If everything is in order, you are good to go with executing the following commands:

terraform init
terraform plan -no-color
terraform apply -no-color

In a few minutes, our infrastructure will be ready.

After the infrastructure is provisioned, we connect to the controlplane and copy over the private key that will be required to access the other nodes.


# Copy a Local File to a Remote System with the scp Command #

scp -i <your private key> <private key file to copy> abbabe@35.184.82.4:/home/abbabe
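Assuming the key lands in the home directory as ~/abbabe (the path used in the Ansible commands later), tighten its permissions on the controlplane before using it, since SSH rejects private keys that are readable by other users.

# On the controlplane: restrict the copied private key
chmod 600 ~/abbabe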

Now we will make an SSH connection to the nodes from the controlplane and check whether we have access.

ssh -i <private key> abbabe@10.250.0.3 # master1 node

2- EDITING THE KUBESPRAY INVENTORY AND SETTING UP THE KUBERNETES CLUSTER

Now we can move on to the Kubernetes cluster setup. We will follow the steps below in order and complete the installation.

a- Copy inventory/sample as inventory/dev

cp -rfp inventory/sample inventory/dev

b- Update Ansible inventory file with inventory builder

# master node ip 10.250.0.3
# worker1 ip 10.250.0.2
# worker2 ip 10.250.0.5


declare -a IPS=(10.250.0.3 10.250.0.2 10.250.0.5)
CONFIG_FILE=inventory/dev/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Then we edit the hosts.yaml file and change the node names.

all:
  hosts:
    master1:
      ansible_host: 10.250.0.3
      ip: 10.250.0.3
      access_ip: 10.250.0.3
    worker1:
      ansible_host: 10.250.0.2
      ip: 10.250.0.2
      access_ip: 10.250.0.2
    worker2:
      ansible_host: 10.250.0.5
      ip: 10.250.0.5
      access_ip: 10.250.0.5
  children:
    kube_control_plane:
      hosts:
        master1:
    kube_node:
      hosts:
        worker1:
        worker2:
    etcd:
      hosts:
        master1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

To enable the Helm installation, edit the corresponding field in inventory/dev/group_vars/k8s_cluster/addons.yml.

# Helm deployment

helm_enabled: true

If you want to use “flannel” or another network plugin, you can change it in the inventory/dev/group_vars/k8s_cluster/k8s-cluster.yml file.

kube_network_plugin: flannel

We will run the installation using our hosts.yaml inventory file. First, we ping the nodes with this inventory to verify that Ansible can reach them.

ansible -i inventory/dev/hosts.yaml -m ping all --key-file "~/abbabe"


master1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
worker1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
worker2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
Finally, we execute the following command to start the installation.

ansible-playbook -i inventory/dev/hosts.yaml --become --become-user=root cluster.yml --key-file "~/abbabe"

After about 20 minutes, our Kubernetes cluster will be ready.

Follow the steps below on the master1 node to be able to execute kubectl commands. After the installation, you can establish an SSH connection via the external IP we assigned to the master1 node.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
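At this point you can do a quick sanity check from master1; the node names and roles will reflect your hosts.yaml.

kubectl get nodes -o wide        # master1, worker1 and worker2 should report Ready
kubectl get pods -n kube-system  # cluster components deployed by Kubespray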

That’s it!

Your cluster is ready to use.
