Multi-Node Kubernetes cluster configuration from scratch || Part-2

Jagadeesh Thiruveedula
6 min read · Jun 27, 2020


This post explains how to turn the VMs into a master and slaves and make them work together in master/worker mode.

In continuation of Part-1, in this Part-2 we will configure the master/slave architecture using the cloned VMs.

Our network architecture will follow the pattern shown in the image below.

Start all three VMs; we will join them to the master one by one.

For identification purposes we are renaming the hostnames here. If you also want to rename yours, use the commands below.

hostnamectl set-hostname slave1   # change the host-name
exec bash                         # reflect the changed host-name without restarting the system

Do this on the master and on both slaves, with the appropriate name on each.

Configuring connectivity between Master/Slaves:

To make the master and slaves reachable from each other by hostname, we have to update the file below on all three VMs.

vim /etc/hosts

192.168.1.110 master   # use your own system IPs here
192.168.1.109 slave1
192.168.1.104 slave2

After updating the hosts file, try pinging from one VM to another; if connectivity is fine, we will get output like the one below.
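For example, a quick check from the master might look like this (a sketch using the hostnames from the /etc/hosts entries above; your output will differ):

ping -c 3 slave1   # should show replies from 192.168.1.109
ping -c 3 slave2   # should show replies from 192.168.1.104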

Initializing Kubeadm:

After connectivity is verified, we can initialize kubeadm on the master and give it an IP range so that pods get IPs in that range.

kubeadm init --pod-network-cidr=10.10.1.0/16
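One caveat worth noting (not mentioned in the original post): the Flannel manifest we apply later defaults to the pod network 10.244.0.0/16. If you pass a different --pod-network-cidr as above, you would need to edit the Network value in kube-flannel.yml's net-conf.json to match; otherwise you can simply initialize with Flannel's default:

kubeadm init --pod-network-cidr=10.244.0.0/16   # matches Flannel's default net-conf.json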

The kubeadm init command initializes the master node; behind the scenes it pulls the necessary Docker images, which you can verify with docker images.

Note: please make a note of the kubeadm join command printed when the control-plane initialization finishes; it follows the pattern below.

kubeadm join 192.168.1.110:6443 --token etyrwc.507wplbrv3po0z5r \
    --discovery-token-ca-cert-hash sha256:152gshgad31b21d69bb4d089c4e6bf04df65ccb18fbfe6928270fde350

Allowing master to run as slave:

If your infrastructure is minimal and you want the master to also work as a slave to share the load, run the commands below. (Strictly speaking, these commands set up kubectl access for the admin user, which you need on the master in any case.)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config   # id -u sets the owner, id -g the group
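To actually let the master schedule workloads like a worker node, the usual additional step (not shown in the original post, and valid for the Kubernetes versions of that time) is to remove the master taint:

kubectl taint nodes --all node-role.kubernetes.io/master-   # allows pods to be scheduled on the master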

Check whether the master is ready using the command below.

kubectl get nodes

We can see the status is NotReady. This usually happens because no overlay (pod) network is installed yet; to fix it, we have to install the Flannel plugin.

There are other plugins (such as Calico or Weave Net) that can do this as well; Flannel is the preferable one here.

https://github.com/coreos/flannel

Configuring Flannel:

There is an official YAML file for configuring Flannel on the master, so we are using that; it is pasted below.

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This also brings up CoreDNS; we can watch its status using the command below.

kubectl get pods -n kube-system

When the status has changed from Pending to Running, we can check the status of the master again.

kubectl get nodes

Now the master is Ready and can allow the slaves to join.

Joining Slave to Master Node:

Run the kubeadm join command you noted earlier on both slaves.

kubeadm join 192.168.1.110:6443 --token etyrwc.507wplbrv3po0z5r \
    --discovery-token-ca-cert-hash sha256:152gshgad31b21d69bb4d089c4e6bf04df65ccb18fbfe6928270fde350
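If you didn't save the original join output, you can regenerate it on the master at any time (a standard kubeadm command, not shown in the original post):

kubeadm token create --print-join-command   # prints a fresh join command with a new token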

Go to the master and check whether the slaves have joined.
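The same command as before does the job:

kubectl get nodes   # joined slaves appear alongside the master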

slave1 has joined

Here we can see that both slaves have joined the master and are ready to work.

Configuring Client:

If you don't have kubectl installed locally, please install it first; the official Kubernetes documentation covers installation for every platform.

To make your local system a client, copy the kubeconfig file from the master node to your local machine. On the master, the file is available at $HOME/.kube/config; for copying it to a Windows machine, a tool such as WinSCP works well.
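If you prefer the command line over WinSCP, an equivalent sketch with scp (assuming the master's IP from earlier and that you log in as root) would be:

scp root@192.168.1.110:/root/.kube/config .   # copies the kubeconfig to the current directory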

Firing queries from Client to server:

Any query we fire at the cluster from the client is handled by the master node (the API server), which allocates resources accordingly.

Use the commands below to test the Kubernetes setup from the client.

kubectl get ns --kubeconfig config   # "config" is the file copied from the master;
                                     # run this from the directory that contains it

Here I've launched an httpd pod, and we can see the master allocated it to slave1.
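A sketch of launching such a pod from the client (the pod name myweb is my choice, not from the original):

kubectl run myweb --image=httpd --kubeconfig config   # creates a single httpd pod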

Using the command below, you can see which node each pod is running on.

kubectl get pods -o wide --kubeconfig config

Our Kubernetes master/slave cluster has been configured successfully.

Testing load balance:

I will quickly test load balancing using a ReplicaSet.

Pod placement, and hence the balancing of pods across nodes, is taken care of by the scheduler in K8s.

I've created a ReplicaSet with a replica count of 3. Once it is applied, the scheduler places these pods on the nodes based on the current load.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      env: dev
      dc: IN
      app: webser
  template:
    metadata:
      name: myweb-pod
      labels:
        env: dev
        dc: IN
        app: webser
    spec:
      containers:
      - name: myweb-con
        image: httpd

To launch the ReplicaSet, use the command below.

kubectl apply -f rs.yml --kubeconfig config

After the ReplicaSet is created, we can see how many pods each node was allocated.

In the output above, slave1 was allocated 2 pods and slave2 was allocated 1 pod.
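To watch the scheduler spread the load further, you could scale the ReplicaSet up (a hedged example, not in the original post):

kubectl scale rs myweb-rs --replicas=6 --kubeconfig config   # grow from 3 to 6 replicas
kubectl get pods -o wide --kubeconfig config                 # see where the new pods landed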

Proxy Service:

A great thing in K8s is the kube-proxy service, which can be used to expose ports.

Whenever you run an expose command to open a port to the outside world, this service comes into the picture and opens the corresponding port on all nodes.

We can see a quick demo below.

(The screenshots for this demo showed: exposing the port, checking the kube-proxy service, checking the opened port on slave1 and on slave2, and reaching the application through both slave1 and slave2.)
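A sketch of what those steps look like as commands (the NodePort service type and port 80 are my assumptions; substitute the NodePort that kubectl get svc reports):

kubectl expose rs myweb-rs --type=NodePort --port=80 --kubeconfig config
kubectl get svc --kubeconfig config            # note the assigned NodePort
curl http://192.168.1.109:<nodeport>           # reach the app via slave1
curl http://192.168.1.104:<nodeport>           # reach the app via slave2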

In the snips above, we can see that through the proxy service we are able to connect via both slaves.

Quick Summary for this post:

→Configuring hostnames for all nodes

→Initializing kubeadm

→Configuring the network plugin Flannel

→Joining slave nodes to the master as workers

→Client configuration

→kube-scheduler demo

→kube-proxy demo

Thanks, buddy, for spending your valuable time reading this article; I hope it helps and benefits you.

If you guys like this article, give it kudos!
