A step-by-step demo on Kubernetes cluster creation

Asish M Madhu · Published in Geek Culture · Aug 3, 2021 · 13 min read

“Instead of worrying about what we cannot control, let’s shift our focus to what we can create.”

Photo Credit: Annie Spratt | https://unsplash.com/photos/QckxruozjRg

Introduction

In this demo I will share my experience of creating a Kubernetes cluster using the kubeadm tool. The cluster will be set up using LXC machine containers. We will spin up one master and 3 worker nodes to form a Kubernetes cluster. Let's go step by step and then automate the entire process.

My Lab setup

user1@k8s_cluster_demo_lab:~/lab/asish$uname -a
Linux lab 5.4.0-77-generic #86~18.04.1-Ubuntu SMP Fri Jun 18 01:23:22 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
user1@k8s_cluster_demo_lab:~/lab/asish$lsb_release -dr
Description: Ubuntu 18.04.5 LTS
Release: 18.04

About LXC

LinuX Containers (LXC) is often described as something in between a chroot and a full-fledged virtual machine; it also sits somewhere between a virtual machine and an application container hosted by a container runtime. For a comparison with regular application containers, refer to this article: https://asishmm.medium.com/lxc-vs-docker-container-5699db209391

The host machine used in the lab is an Ubuntu server (18.04) with x86_64 architecture.

Confirm LXD is installed on your machine.

user1@k8s_cluster_demo_lab:~/lab/asish$dpkg -l | grep lxd
ii lxd 3.0.3-0ubuntu1~18.04.1 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 3.0.3-0ubuntu1~18.04.1 amd64 Container hypervisor based on LXC - client

If it is not installed, follow the LXC Get Started guide, or install it as shown below.
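
On Ubuntu 18.04 the packages seen in the dpkg output above can typically be installed straight from the archive (a minimal sketch; newer Ubuntu releases ship LXD as a snap instead, via "sudo snap install lxd"):

sudo apt-get update
sudo apt-get install -y lxd lxd-client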

Steps at a high level

Below are the steps you can follow for setting up a K8s cluster using lxc.

  1. Have the LXC setup ready for the master and worker nodes.
  2. Install a container runtime (Docker).
  3. Install kubeadm and initialize the master server.
  4. Install a pod network solution.
  5. Run the join command on each worker node to join the master.

Step 1

Start the LXD service and initialize it. Add your user to the lxd group to perform lxc actions without sudo.

user1@k8s_cluster_demo_lab:~$sudo systemctl start lxd
[sudo] password for user1:
user1@k8s_cluster_demo_lab:~$sudo usermod -a -G lxd user1
user1@k8s_cluster_demo_lab:~$getent group lxd
lxd:x:130:user1
user1@k8s_cluster_demo_lab:~$lxc version
Client version: 3.0.3
Server version: 3.0.3

Initialize LXD. Choose the default options. For the storage backend I am selecting “dir” instead of the default “btrfs”.

user1@k8s_cluster_demo_lab:~$lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

LXD comes with a default profile. Profiles are instance-specific configurations that are applied when creating an instance. Let's create a custom profile for the machine containers we are creating for the K8s cluster.

user1@k8s_cluster_demo_lab:~$lxc profile copy default k8s
user1@k8s_cluster_demo_lab:~$lxc profile ls
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 0       |
+---------+---------+
| k8s     | 0       |
+---------+---------+

I am going to use the custom profile below and apply it to the k8s profile.

user1@k8s_cluster_demo_lab:~/lab/asish$cat k8s-profile-config
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by: []

The most important parts to consider are “security.nesting” and “security.privileged”. These two must be enabled for the machine container to run containers inside it.

I feel comfortable with vim as the default editor, so I set it and edit the k8s profile as below.

user1@k8s_cluster_demo_lab:~/lab/asish$export EDITOR=vim
user1@k8s_cluster_demo_lab:~/lab/asish$lxc profile edit k8s

Paste the contents of the profile inside the edit view.
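
Alternatively, if you prefer to skip the interactive editor, the same file can be piped straight into the profile (assuming the k8s-profile-config file shown above is in the current directory):

lxc profile edit k8s < k8s-profile-config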

Let's use Ubuntu as the LXC machine image in this demo for both the master and worker nodes.

user1@k8s_cluster_demo_lab:~/lab/asish$lxc launch images:ubuntu/18.04 kmaster1 --profile k8s
Creating kmaster1
Starting kmaster1

We can create 3 worker nodes similarly.

user1@k8s_cluster_demo_lab:~/lab/asish$for i in {1..3} ; do lxc launch images:ubuntu/18.04 kworker${i} --profile k8s ; done
Creating kworker1
Starting kworker1
Creating kworker2
Starting kworker2
Creating kworker3
Starting kworker3

Make sure the lxc containers are up.
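
A quick way to verify this is to list the instances and confirm that each one is in the RUNNING state:

lxc list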

Now our lab is ready for installing the K8s cluster, with one master (“kmaster1”) and 3 workers.

Step 2

Install the container runtime on the master. Let's use Docker as the container runtime.

[ Reference: https://docs.docker.com/engine/install/ubuntu/ ]

Let's log in to the master container.

user1@k8s_cluster_demo_lab:~/lab/asish$lxc exec kmaster1 bash
root@kmaster1:~#

[Note: We are going to execute the steps below inside the kmaster1 container.]

Remove older version of docker if installed.

sudo apt-get remove docker docker-engine docker.io containerd runc

Install the basic packages

apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release -y

Download the Docker GPG key and add the Docker repository.

root@kmaster1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg  >/dev/null
root@kmaster1:~#
root@kmaster1:~# echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Now install docker-ce

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli -y
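
Optionally, confirm the Docker engine installed correctly before moving on (a quick sanity check):

docker --version
systemctl status docker --no-pager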

Configure the Docker daemon to use systemd for the management of the container’s cgroups.

root@kmaster1:~# sudo mkdir /etc/docker
root@kmaster1:~# cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Restart Docker and enable it on reboot:

root@kmaster1:~# systemctl enable docker
root@kmaster1:~# systemctl daemon-reload
root@kmaster1:~# systemctl restart docker
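
To confirm the cgroup driver change took effect, Docker reports it in its info output (a minimal check):

docker info 2>/dev/null | grep -i cgroup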

Step 3

Let's install kubeadm, kubectl and kubelet. First, check that the br_netfilter module is loaded; if it is not, load it explicitly using modprobe (a quick check is shown after the next block).

root@kmaster1:~# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
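
To check whether br_netfilter is already loaded, and to load it immediately without waiting for a reboot, something like this works:

lsmod | grep br_netfilter || sudo modprobe br_netfilter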

For iptables to correctly see bridged traffic, ensure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config.

root@kmaster1:~# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Verify it:

root@kmaster1:~# sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.all.promote_secondaries = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

* Applying /etc/sysctl.conf ...

Now it's time to install kubeadm, kubelet and kubectl.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
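
Optionally, you may also want to pin these packages so a routine apt upgrade does not change the cluster version unexpectedly (a common recommendation from the official install docs):

sudo apt-mark hold kubelet kubeadm kubectl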

After the above step is completed, add a kubelet extra flag to disable failing on swap, and then restart the kubelet.

echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/default/kubelet
systemctl restart kubelet

Now, there is a small hack we have to do to enable K8s v1.15+ in LXC.

apt install -qq -y net-tools
mknod /dev/kmsg c 1 11
echo 'mknod /dev/kmsg c 1 11' >> /etc/rc.local
chmod +x /etc/rc.local

Pre-pull the images required by kubeadm.

kubeadm config images pull >/dev/null 2>&1
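
If you are curious which images kubeadm is going to fetch, you can list them first:

kubeadm config images list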

Initialize the cluster using kubeadm

kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

Complete output.

root@kmaster1:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-80-generic
DOCKER_VERSION: 20.10.7
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-80-generic\n", err: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.38.156.125]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster1 localhost] and IPs [10.38.156.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster1 localhost] and IPs [10.38.156.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.003274 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yg4xfz.heffiwkqvfyyxr5t
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.38.156.125:6443 --token yg4xfz.heffiwkqvfyyxr5t \
  --discovery-token-ca-cert-hash sha256:1db2277f65f33cbf72a61f1ebfe9d506605f07f6cf7690b0089d9d3debcee812

Note down the join command, which contains the token and CA cert hash that the worker nodes need to join. Copy the kubeconfig into the home directory and check that kubectl commands are working.

root@kmaster1:~# mkdir ~/.kube
root@kmaster1:~# cp /etc/kubernetes/admin.conf ~/.kube/config
root@kmaster1:~# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
kmaster1   Ready    control-plane,master   10m   v1.21.3
root@kmaster1:~# kubectl cluster-info
Kubernetes control plane is running at https://10.38.156.125:6443
CoreDNS is running at https://10.38.156.125:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
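
If you misplace the join command printed earlier, it can be regenerated on the master at any time (the automation script later in this article relies on the same command):

kubeadm token create --print-join-command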

Step 4

Now it is time to install the Flannel network provider for K8s.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml > /dev/null 2>&1

Output of the above command

root@kmaster1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
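
To confirm the Flannel pods come up on the nodes, you can watch them across namespaces (pod names may vary slightly with the manifest version):

kubectl get pods --all-namespaces -o wide | grep flannel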

Step 5

Let's bootstrap the worker nodes and join them to the master. In the previous step, we copied the join command that the worker nodes use to connect.

kubeadm join 10.38.156.125:6443 --token yg4xfz.heffiwkqvfyyxr5t \
--discovery-token-ca-cert-hash sha256:1db2277f65f33cbf72a61f1ebfe9d506605f07f6cf7690b0089d9d3debcee812

Follow the same steps as on the master to set up Docker and install kubeadm, kubelet and kubectl. Below are the steps; you can copy them and run them as a bash script.

sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg >/dev/null
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/default/kubelet
systemctl restart kubelet
apt install -qq -y net-tools
mknod /dev/kmsg c 1 11
echo 'mknod /dev/kmsg c 1 11' >> /etc/rc.local
chmod +x /etc/rc.local
kubeadm join 10.38.156.125:6443 --token yg4xfz.heffiwkqvfyyxr5t \
  --discovery-token-ca-cert-hash sha256:1db2277f65f33cbf72a61f1ebfe9d506605f07f6cf7690b0089d9d3debcee812

[Note: Replace the kubeadm join command with the one generated by your master container.]

I will save these steps as a bash script (bootstrap.sh) and execute it directly on the worker nodes as below. This is run from the host machine.

cat bootstrap.sh | lxc exec kworker1 bash
cat bootstrap.sh | lxc exec kworker2 bash
cat bootstrap.sh | lxc exec kworker3 bash
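
Since the worker names follow a pattern, the same thing can be expressed as a loop, mirroring the launch step earlier:

for i in {1..3} ; do cat bootstrap.sh | lxc exec kworker${i} bash ; done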

Note:

I faced the warning below while bringing up the kubelet. (Ignore this if you do not see it.)

[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-77-generic\n", err: exit status 1

This went away after installing the matching linux-image package:

apt-get install linux-image-$(uname -r)

Let’s verify the cluster and run a simple nginx application in it.

root@kmaster1:~# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
kmaster1   Ready    control-plane,master   133m    v1.21.3
kworker1   Ready    <none>                 27m     v1.21.3
kworker2   Ready    <none>                 13m     v1.21.3
kworker3   Ready    <none>                 6m17s   v1.21.3

Create a sample nginx app and service in this cluster.

root@kmaster1:~# kubectl create deploy --image=nginx nginx
deployment.apps/nginx created

root@kmaster1:~# kubectl expose deploy nginx --type=NodePort --name nginx --port 80
service/nginx exposed
root@kmaster1:~# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        139m
nginx        NodePort    10.111.136.18   <none>        80:31273/TCP   4s
root@kmaster1:~# ip a show dev eth0
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:11:2e:72 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.38.156.125/24 brd 10.38.156.255 scope global dynamic eth0
valid_lft 2583sec preferred_lft 2583sec
inet6 fd42:45a3:f671:9991:216:3eff:fe11:2e72/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 3190sec preferred_lft 3190sec
inet6 fe80::216:3eff:fe11:2e72/64 scope link
valid_lft forever preferred_lft forever

root@kmaster1:~# curl http://10.38.156.125:31273
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@kmaster1:~#
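
As a final sanity check, you can also see which worker node the nginx pod was scheduled on:

kubectl get pods -o wide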

We can tweak the bash script with a condition that executes the extra master-only steps by checking the hostname, as below. In that case, name the containers appropriately when creating them.

if [[ $(hostname) =~ .*master.* ]]
then
  kubeadm config images pull >/dev/null 2>&1
  kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
  mkdir /root/.kube
  cp /etc/kubernetes/admin.conf /root/.kube/config
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml > /dev/null 2>&1
  joinCommand=$(kubeadm token create --print-join-command 2>/dev/null)
  echo "$joinCommand --ignore-preflight-errors=all"
fi

if [[ $(hostname) =~ .*worker.* ]]
then
  kubeadm join 10.38.156.125:6443 --token yg4xfz.heffiwkqvfyyxr5t \
    --discovery-token-ca-cert-hash sha256:1db2277f65f33cbf72a61f1ebfe9d506605f07f6cf7690b0089d9d3debcee812 --ignore-preflight-errors=all
fi

Future Scope

We can automate cluster creation once the master and worker node infrastructure is in place. In this demo, we used kubeadm to deploy on LXC-backed infrastructure, but in a real production environment, tools like Terraform and Ansible could be used to build the infrastructure on physical machines, VMs or cloud resources. Once the infrastructure is set up, we can trigger cluster creation through Ansible or other automation. There are also Kubernetes providers available in Terraform which can handle cluster creation for you and even deploy application pods after the cluster is created.

Conclusion

We covered a step-by-step procedure to build a Kubernetes cluster using the kubeadm tool, using LXC to quickly set up the nodes. Finally, we put the steps together in a bash script that can automate the process, and deployed a sample nginx service. Thanks for reading; I hope this article adds some value when you set up your own Kubernetes cluster.

Asish M Madhu
Geek Culture

I enjoy exploring various open-source tools, technologies and ideas related to Cloud Computing, DevOps and SRE, and sharing my experience and understanding of the subject.