How to set up a Kubernetes cluster with RKE and Rancher

Uncle Blythe
The Investments Monster
8 min read · Jul 6, 2021


mini kubernetes cluster with rke and rancher

Requirements

  • 5 VMs or physical servers for the mini cluster:
    1 master/control plane node (10.8.131.52)
    2 worker nodes (10.8.131.53, 10.8.131.54)
    1 cluster control workstation (10.8.131.51)
    1 Rancher GUI host (10.8.131.55)
  • A floating IP for the load balancer (10.8.131.99)
  • CentOS 7 operating system, 4 GB+ of memory, and swap disabled
  • /var or /var/lib/docker should have a minimum of 8–16 GB free for Docker image storage
  • Docker CE installed on each node
  • RKE and Rancher on the cluster control workstation

Server preparation (all k8s nodes + cluster control)

note: # denotes a root shell, $ a regular user shell
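
The requirements call for swap to be disabled on every node. A minimal sketch of doing that on CentOS 7 (the sed pattern assumes the swap entry in /etc/fstab contains the word "swap"):

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so the change survives a reboot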

Docker installation
refer: Install Docker Engine on CentOS | Docker Documentation

# yum update -y 
# yum install -y yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce docker-ce-cli containerd.io
# systemctl start docker
# systemctl enable docker

Disable SELinux

# setenforce 0  
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Adding the rke user

# useradd rke 
# passwd rke (set a password, e.g. rke)
# usermod -aG docker rke

Verification: the rke user should be able to run Docker commands

# su - rke
(rke) $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
(rke) $ docker version

Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun 2 11:58:10 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun 2 11:56:35 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Generate an SSH key for the rke user (cluster control workstation only)

# su - rke 
(rke) $ ssh-keygen
(press Enter to accept the defaults at every prompt)

Copy the key to all k8s nodes

(rke) $ ssh-copy-id rke@10.8.131.52 
(rke) $ ssh-copy-id rke@10.8.131.53
(rke) $ ssh-copy-id rke@10.8.131.54
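
Before running rke, it is worth confirming that passwordless SSH works; each command should print the remote hostname without asking for a password:

(rke) $ ssh rke@10.8.131.52 hostname
(rke) $ ssh rke@10.8.131.53 hostname
(rke) $ ssh rke@10.8.131.54 hostname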

Kubernetes cluster installation with RKE (cluster control workstation only)

Installing the rke tool
refer: GitHub — rancher/rke: Rancher Kubernetes Engine (RKE)

(rke) $ wget https://github.com/rancher/rke/releases/download/v1.2.9/rke_linux-amd64 -O rke
(rke) $ chmod +x rke
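
A quick sanity check that the binary runs; it should report the release we just downloaded:

(rke) $ ./rke --version
rke version v1.2.9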

Preparing the cluster.yml file
To create a cluster with RKE, we first need to define the node roles in a cluster.yml file. The rke tool reads this file when creating or removing the cluster.

(rke) $ vim cluster.yml
nodes:
- address: 10.8.131.92
  user: rke
  role:
  - controlplane
  - etcd
  hostname_override: kor-devops52
- address: 10.8.131.93
  user: rke
  role:
  - worker
  hostname_override: kor-devops53
- address: 10.8.131.94
  user: rke
  role:
  - worker
  hostname_override: kor-devops54
kubernetes_version: v1.20.8-rancher1-1
network:
  plugin: canal

Set up the Kubernetes cluster with the rke tool

(rke) $ ls 
rke cluster.yml
(rke) $ ./rke -d up
DEBU[0080] Cluster version [1.20.8-rancher1-1] needs to have kube-api audit log enabled
DEBU[0080] Enabling kube-api audit log for cluster version [v1.20.8-rancher1-1]
DEBU[0080] Host: 10.8.131.92 has role: controlplane
DEBU[0080] Host: 10.8.131.92 has role: etcd
DEBU[0080] Host: 10.8.131.93 has role: worker
DEBU[0080] Host: 10.8.131.94 has role: worker
INFO[0096] [addons] Executing deploy job rke-ingress-controller
DEBU[0096] Checking node list for node [kor-devops52], try #1
DEBU[0096] [k8s] waiting for job rke-ingress-controller-deploy-job to complete..
DEBU[0101] [k8s] Job rke-ingress-controller-deploy-job in namespace kube-system completed successfully
INFO[0101] [ingress] ingress controller nginx deployed successfully
INFO[0101] [addons] Setting up user addons
INFO[0101] [addons] no user addons defined
INFO[0101] Finished building Kubernetes cluster successfully

If something goes wrong or rke fails with a fatal error, check each node: look at the logs of the failing container, or remove the kubelet container and run rke up again.
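
For example, on a problem node you might run something like this (kubelet is the container name RKE uses for the node agent; adjust to whichever container is failing):

# docker ps -a                    # spot containers that exited or keep restarting
# docker logs --tail 50 kubelet   # read the failing container's logs
# docker rm -f kubelet            # remove it, then run ./rke up again to recreate it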

Verification: a kube_config_cluster.yml file should have been created in the working directory

(rke) $ ls 
cluster.rkestate cluster.yml kube_config_cluster.yml rke

Installing kubectl to control the cluster (cluster control workstation only)
refer: Install and Set Up kubectl on Linux | Kubernetes

note: this article uses a Red Hat-based distribution

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubectl

Cluster Verifications

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kor-devops52 Ready controlplane,etcd 3m55s v1.20.8 10.8.131.92 <none> CentOS Linux 7 (Core) 3.10.0-957.5.1.el7.x86_64 docker://20.10.7
kor-devops53 Ready worker 3m53s v1.20.8 10.8.131.93 <none> CentOS Linux 7 (Core) 3.10.0-957.5.1.el7.x86_64 docker://20.10.7
kor-devops54 Ready worker 3m51s v1.20.8 10.8.131.94 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
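
Typing --kubeconfig on every call gets tedious. Optionally, export KUBECONFIG once per shell so plain kubectl commands target our cluster (the examples below keep the explicit flag, matching the original session):

(rke) $ export KUBECONFIG=$(pwd)/kube_config_cluster.yml
(rke) $ kubectl get nodes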

Set up MetalLB as the cluster load balancer on layer 2 with the floating IP
refer: MetalLB, bare metal load-balancer for Kubernetes (installation)

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml

MetalLB configuration.
refer: MetalLB, bare metal load-balancer for Kubernetes (configuration)

(rke) $ vim metallb.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.8.131.99-10.8.131.99

Apply metallb.yml configuration

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml apply -f metallb.yml

Checking the MetalLB pods

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml get pods -A
NAMESPACE        NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx default-http-backend-6977475d9b-65xwh 1/1 Running 0 26m
ingress-nginx nginx-ingress-controller-4rpgr 1/1 Running 0 26m
ingress-nginx nginx-ingress-controller-p259n 1/1 Running 0 26m
kube-system calico-kube-controllers-7d5d95c8c9-j9zls 1/1 Running 0 26m
kube-system canal-9nctm 2/2 Running 0 26m
kube-system canal-bttgp 2/2 Running 0 26m
kube-system canal-sk79j 2/2 Running 0 26m
kube-system coredns-55b58f978-cpt8j 1/1 Running 0 26m
kube-system coredns-55b58f978-jxsn7 1/1 Running 0 24m
kube-system coredns-autoscaler-76f8869cc9-hvmxh 1/1 Running 0 26m
kube-system metrics-server-55fdd84cd4-cn25d 1/1 Running 0 26m
kube-system rke-coredns-addon-deploy-job-t7xmn 0/1 Completed 0 26m
kube-system rke-ingress-controller-deploy-job-d9fss 0/1 Completed 0 26m
kube-system rke-metrics-addon-deploy-job-jtsdp 0/1 Completed 0 26m
kube-system rke-network-plugin-deploy-job-hdtcp 0/1 Completed 0 26m
metallb-system controller-6b78bff7d9-rqz48 1/1 Running 0 15m
metallb-system speaker-jqgcr 1/1 Running 0 15m
metallb-system speaker-pxz4s 1/1 Running 0 15m
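
To double-check that the address pool landed in the cluster, read the ConfigMap back; the names match the metallb.yml we applied:

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml -n metallb-system get configmap config -o yaml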

Deployment of a simple application

This step requires only kube_config_cluster.yml and kubectl, so we can deploy an application from anywhere. That also opens the door to automated CI/CD: set up a pipeline in GitLab (or any source code repository), then add a GitLab Runner or Jenkins agent that can execute kubectl to build the image and ship it to the Kubernetes cluster.
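
As a rough sketch (not part of this setup), a GitLab CI deploy job could look like the following; the KUBE_CONFIG file variable and the kubectl image are assumptions you would adapt to your environment:

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]   # override the image's kubectl entrypoint so the job's shell can run
  script:
    # KUBE_CONFIG is an assumed GitLab CI/CD file variable holding kube_config_cluster.yml
    - kubectl --kubeconfig "$KUBE_CONFIG" apply -f deployment.yml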

Prepare the deployment file (for demonstration only).
It has three sections: Service, Deployment, and HorizontalPodAutoscaler.

(rke) $ vim deployment.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hellok8s
  name: hellok8s
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: production-public-ip
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: hellok8s
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellok8s
spec:
  selector:
    matchLabels:
      app: hellok8s
  replicas: 1
  template:
    metadata:
      labels:
        app: hellok8s
    spec:
      containers:
      - name: hellok8s
        image: nginx:stable-alpine
        resources:
          requests:
            cpu: '20m'
        ports:
        - containerPort: 80
        env:
        - name: TZ
          value: Asia/Bangkok
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hellok8s
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hellok8s
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

Deploy to Kubernetes

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml apply -f deployment.yml
service/hellok8s created
deployment.apps/hellok8s created
horizontalpodautoscaler.autoscaling/hellok8s created

Checking the application service with kubectl

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml get all
NAME READY STATUS RESTARTS AGE
pod/hellok8s-98fb9d67b-bdffn 1/1 Running 0 15s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hellok8s LoadBalancer 10.43.254.44 10.8.131.99 8080:30599/TCP 15s
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 121m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hellok8s 0/1 1 0 15s
NAME DESIRED CURRENT READY AGE
replicaset.apps/hellok8s-98fb9d67b 1 1 0 15s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/hellok8s Deployment/hellok8s <unknown>/80% 1 4 0 15s

Accessing the application at 10.8.131.99 (the floating IP, i.e. the external IP of the load balancer)

A simple nginx web application running on Kubernetes
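
From any machine on the network, a quick curl against the floating IP and the service port defined above should return the stock nginx page:

$ curl -I http://10.8.131.99:8080
HTTP/1.1 200 OK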

Set up the Rancher GUI to control Kubernetes (optional)

We can set up the Rancher GUI at 10.8.131.55 (a separate machine) or on the same machine as the cluster control workstation.
refer: Rancher Docs: Installing Rancher on a Single Node Using Docker

# docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
01bf7da0a88c: Pull complete
f3b4a5f15c7a: Pull complete
57ffbe87baa1: Pull complete
84d8cc09b701: Pull complete
afe9e2465e8b: Pull complete
9f3ee9b60694: Extracting [==========================> ] 16.38MB/31.44MB
c1bda8cc7988: Download complete
69d5b4da5a88: Download complete
f122c9c34424: Download complete
8caabfe0006c: Download complete
dfe218bc61c0: Download complete
65635bb5c3ee: Download complete
c8402093f450: Download complete
827338662b84: Download complete
61d88446a779: Download complete
320ea75f6b79: Download complete
433836c91ee4: Download complete
88b74081f66e: Download complete
e886376b2df3: Download complete

Checking the container process

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36c79a66a100 rancher/rancher "entrypoint.sh" 35 seconds ago Up 10 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp adoring_sammet

If something goes wrong, check the resource requirements in the Rancher docs, or adjust how the embedded k3s runs by passing extra options to the rancher/rancher container.

refer: Rancher Docs: Installation Requirements
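
If the container starts but the UI never comes up, the container logs are the first place to look (adoring_sammet is the auto-generated name from the docker ps output above):

# docker logs --tail 100 adoring_sammet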

Accessing the Rancher application

Access Rancher at https://10.8.131.55

Set a password for admin and select "I want to create or manage multiple clusters"

Rancher ships with a local cluster running k3s inside the Docker container. We can ignore it and import our own cluster into Rancher.

Rancher clusters dashboard

Adding an existing cluster

Enter the cluster name

Import the cluster by applying the Rancher agent manifest to it (on the cluster control workstation).
Back on the cluster control workstation, 10.8.131.51:

note: we use --insecure because we did not set up CA certificates beforehand.

(rke) $ curl --insecure -sfL https://kor-devops55/v3/import/bdqpvfv4n5c56sjctc5rglz4q4wczlxvn6dlr5l6466l7t2lk9vnrj.yaml | kubectl --kubeconfig $(pwd)/kube_config_cluster.yml apply -f -
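
If the import works, Rancher's agent pods should appear in the cattle-system namespace of our cluster after a minute or so (that namespace is Rancher's default for imported clusters):

(rke) $ kubectl --kubeconfig $(pwd)/kube_config_cluster.yml -n cattle-system get pods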
The new cluster in the Rancher cluster dashboard

Check the nodes in our cluster

Kubernetes nodes from the RKE setup

Discover the application's services

Press the + button to scale out the application's pods

Access the application via its endpoint on the dashboard

Happy investing in learning, stay moon
