Setting Up a Kubernetes Cluster on Google Cloud with a Load Balancer, Three Master Nodes, and Three Worker Nodes
Kubernetes, often abbreviated as K8s, is an open-source platform designed for automating the deployment, scaling, and operation of application containers. This article serves as a guide to setting up a Kubernetes cluster on Google Cloud with three master nodes and three worker nodes, all gated by a load balancer.
Creating the Virtual Machines (VMs)
To create the VMs, begin by navigating to the VM instances tab of the Compute Engine section in the Google Cloud console. All seven VMs will be created with the e2-medium machine type. They will also all use the CentOS Stream 9 boot disk image and have 50 GB of storage. The specific zone does not matter, as long as all of the VMs are located in the same zone to minimize latency when they communicate with each other; in this article, all of the VMs were created in zone us-east1-b. Add an SSH public key to each of the machines, because you will need to SSH into each of them to install software.
We will first create the load balancer, because it must be up and running before any of the masters and workers join the cluster through it. Create a VM instance with the name load-balancer. In the networking section, edit the default network interface to add a network tag of load-balancer. Switch the primary internal IPv4 address from Ephemeral (Automatic) to Ephemeral (Custom) and set the custom ephemeral IP address to 10.142.0.10. Switch the external IPv4 address from Ephemeral to Reserve Static External IP Address.
Now we will create the three master VMs. Create three instances with the names master-1, master-2, and master-3. In the networking section, edit the default network interface to add a network tag of master. Switch the primary internal IPv4 address from Ephemeral (Automatic) to Ephemeral (Custom) and set the custom ephemeral IP address of master-1, 2, and 3 to 10.142.0.20, 10.142.0.21, and 10.142.0.22 respectively. Switch the external IPv4 address from Ephemeral to Reserve Static External IP Address.
Finally, we will create the three worker VMs. Create three instances with the names worker-1, worker-2, and worker-3. In the networking section, edit the default network interface to add a network tag of worker. Switch the primary internal IPv4 address from Ephemeral (Automatic) to Ephemeral (Custom) and set the custom ephemeral IP address of worker-1, 2, and 3 to 10.142.0.30, 10.142.0.31, and 10.142.0.32 respectively. Switch the external IPv4 address from Ephemeral to Reserve Static External IP Address.
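If you prefer the gcloud CLI over the console, the seven VMs above can be sketched as follows. This is only a sketch: it assumes the default VPC network and an authenticated gcloud CLI, the create_vm helper is hypothetical, and the commands are printed for review rather than executed (remove the echo to run them). Reserving the static external IPs is a separate gcloud compute addresses create step that is not shown.

```shell
# Sketch of the console steps above using the gcloud CLI. The echo prints
# each command for review; remove it to actually create the VMs.
# Assumes the default VPC network; static external IP reservation
# (a separate `gcloud compute addresses create` step) is not shown.
ZONE=us-east1-b
create_vm() { # args: name, internal IP, network tag
  echo gcloud compute instances create "$1" \
    --zone="$ZONE" --machine-type=e2-medium \
    --image-family=centos-stream-9 --image-project=centos-cloud \
    --boot-disk-size=50GB \
    --private-network-ip="$2" --tags="$3"
}
create_vm load-balancer 10.142.0.10 load-balancer
for i in 1 2 3; do
  create_vm "master-$i" "10.142.0.2$((i-1))" master
  create_vm "worker-$i" "10.142.0.3$((i-1))" worker
done
```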
Setting Up the VMs
Execute the following commands on all seven VM instances.
cat << EOF | sudo tee /etc/hosts
127.0.0.1 localhost
10.142.0.10 load-balancer
10.142.0.20 master-1
10.142.0.21 master-2
10.142.0.22 master-3
10.142.0.30 worker-1
10.142.0.31 worker-2
10.142.0.32 worker-3
EOF
sudo yum -y update
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker
sudo systemctl start docker
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
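Before moving on, it can be worth confirming on each VM that the /etc/hosts entries written above are in place. The snippet below is an illustrative check, not part of the setup itself; HOSTS_FILE and check_hosts are helpers invented for this article.

```shell
# Quick sanity check: confirm every cluster node name from the /etc/hosts
# block written above is present on this machine. HOSTS_FILE and
# check_hosts are illustrative helpers, not part of the setup itself.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
check_hosts() {
  for host in load-balancer master-1 master-2 master-3 worker-1 worker-2 worker-3; do
    grep -qw "$host" "$HOSTS_FILE" 2>/dev/null && echo "$host: ok" || echo "$host: missing"
  done
}
check_hosts
```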
Setting Up the Load Balancer
Only execute the following commands on the load-balancer instance.
sudo mkdir /etc/nginx
cat << EOF > /tmp/nginx.conf
events { }
stream {
    upstream stream_backend {
        least_conn;
        server 10.142.0.20:6443;
        server 10.142.0.21:6443;
        server 10.142.0.22:6443;
    }
    server {
        listen 6443;
        proxy_pass stream_backend;
        proxy_timeout 5s;
        proxy_connect_timeout 3s;
    }
}
EOF
sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf
sudo docker run --restart always -d -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro --name ha-proxy -p 6443:6443 nginx
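Before relying on the proxy, it can help to sanity-check that the generated configuration actually lists all three masters. The check_upstreams helper below is invented for this article; nginx itself can also lint the file using the same container image, as shown in the trailing comment.

```shell
# Illustrative check: each master's API server endpoint should appear in
# the upstream block of the config written above. check_upstreams is a
# helper for this article, not part of nginx.
NGINX_CONF=${NGINX_CONF:-/etc/nginx/nginx.conf}
check_upstreams() {
  for ip in 10.142.0.20 10.142.0.21 10.142.0.22; do
    grep -q "server $ip:6443;" "$NGINX_CONF" 2>/dev/null && echo "$ip: ok" || echo "$ip: missing"
  done
}
check_upstreams
# nginx can also validate the file itself using the same image:
# sudo docker run --rm -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx nginx -t
```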
Setting Up the Masters and Workers
Execute the following commands on all of the master and worker VMs.
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo modprobe br_netfilter
sudo sysctl --system
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo yum install -y iproute-tc
sudo yum install -y wget telnet
wget https://kojihub.stream.centos.org/kojifiles/packages/libcgroup/0.41/19.el8/x86_64/libcgroup-0.41-19.el8.x86_64.rpm
sudo rpm -i libcgroup-0.41-19.el8.x86_64.rpm
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el8.x86_64.rpm
sudo rpm -i cri-dockerd-0.3.8-3.el8.x86_64.rpm
sudo systemctl enable --now cri-docker.socket
sudo systemctl status cri-docker.socket
sudo kubeadm config images pull --cri-socket=unix:///var/run/cri-dockerd.sock
Setting Up the First Master Instance
Only execute the following command on the master-1 VM instance.
sudo kubeadm init --control-plane-endpoint 10.142.0.10:6443 --upload-certs --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
This command’s output should include a section similar to the one given below.
...
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 10.142.0.10:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--control-plane --certificate-key <key>
Please note that the certificate-key gives access to cluster sensitive data, keep it secret and safe!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.142.0.10:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
...
Copy the two kubeadm join commands into a text file, because you will need them to set up the other master and worker instances. Additionally, add --cri-socket unix:///var/run/cri-dockerd.sock to both of the kubeadm join commands, or else you may encounter errors when executing them.
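If you lose the saved commands, or the join token expires (tokens are valid for 24 hours by default), you can regenerate them on master-1. A sketch:

```shell
# Print a fresh worker join command (with a new token) at any time:
sudo kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate
# key for joining additional masters (as the kubeadm output notes, the
# uploaded certs are deleted after two hours):
sudo kubeadm init phase upload-certs --upload-certs
```

Remember to append --cri-socket unix:///var/run/cri-dockerd.sock to any regenerated join command as well.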
Continue executing the following commands only on the master-1 instance.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
When you execute the following command, you should only see the node for the master-1 instance, because you have not set up any of the other nodes yet.
kubectl get nodes
Execute the following command and wait until all of the Calico pods reach the Running state.
watch kubectl get pods -n calico-system
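Once the calico-system pods are all Running (exit watch with Ctrl-C), one way to confirm the control-plane node came up correctly is kubectl wait, which blocks until the given condition holds or the timeout expires:

```shell
# Block until master-1 reports the Ready condition (up to two minutes),
# then list the nodes; master-1 should show STATUS Ready.
kubectl wait --for=condition=Ready node/master-1 --timeout=120s
kubectl get nodes
```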
Setting Up the Second and Third Master Instances
Only execute the following commands on the master-2 and master-3 instances.
Begin by executing the kubeadm join command for joining control-plane nodes that was output by the kubeadm init command in the previous section. Remember to add --cri-socket unix:///var/run/cri-dockerd.sock to the kubeadm join command, or else you may encounter errors. The command should match the format given below.
kubeadm join 10.142.0.10:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--control-plane --certificate-key <key> \
--cri-socket unix:///var/run/cri-dockerd.sock
Next, run the following commands.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Setting Up the Worker Instances
Execute the kubeadm join command for joining worker nodes that was output by the kubeadm init command in the Setting Up the First Master Instance section. Remember to add --cri-socket unix:///var/run/cri-dockerd.sock to the kubeadm join command, or else you may encounter errors. The command should match the format given below.
kubeadm join 10.142.0.10:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--cri-socket unix:///var/run/cri-dockerd.sock
Finishing Setting Up the Master Instances
Only execute the following commands on the master-1 instance.
kubectl label node worker-1 node-role.kubernetes.io/worker=worker
kubectl label node worker-2 node-role.kubernetes.io/worker=worker
kubectl label node worker-3 node-role.kubernetes.io/worker=worker
Now when you run the following command you should see all of the worker nodes with the appropriate label.
kubectl get nodes
Add the following lines to the file ~/.bashrc using your preferred text editor. The do and now variables shorten two common kubectl invocations: generating a manifest with a client-side dry run, and force-deleting a pod.
alias k=kubectl
export do="--dry-run=client -o yaml" # k get pod x $do
export now="--force --grace-period 0" # k delete pod x $now
Finally, add the following lines to the file ~/.vimrc (for example, by opening it with vi ~/.vimrc).
set tabstop=2
set expandtab
set shiftwidth=2
Conclusion
You have now set up a Kubernetes cluster on Google Cloud that consists of three master nodes and three worker nodes all gated by a load balancer. You can now use this cluster for your personal projects.