Rancher for Devs — Part 1

Luy Lucas
3 min read · Jun 2, 2024

In this series, I’ll show you how to build your own Kubernetes environment for local development, so you can test your solutions and learn about k8s, CI/CD, and all that stuff. Follow me!

Hi everyone, after a long time, here we are again with another tutorial. Kubernetes has blown my mind.

As a developer, I used Docker for my local environment: learning and deploying microservices and cloud-native apps, creating databases, SMTP servers, and OIDC apps, using docker/host networks, volumes, etc. It works as it should, but with every new version of my apps I have to build a new image, stop the old container, and start the new one. Every task is manual, and we have tools to automate it.

Well, I could use cloud services: Git(Hub/Lab) CI/CD, an existing container registry, cloud DNS. But I’m almost sure that would come at a cost.

You might say: we could use minikube, KinD, Rancher Desktop, or even Docker Desktop (with k8s enabled), and any of them would do the same, so why make it more complex? I like challenges!

My idea is to learn, and to teach, how we can build a “nearly production” infrastructure and use it for our needs. Let’s start.

First, prepare an Ubuntu Server VM with 4 CPUs, 4 GB of RAM, a 100 GB disk, and a bridged network, and set a static IP for it in your router (I recommend this because we’ll be able to give the machine a name, like rancher.mydomain.com, in the hosts file). You’ll need root access. I’m using Ubuntu 24.04.
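If you want a quick sanity check that the VM matches those specs, something like this works (a simple sketch using standard Linux tools; adjust the thresholds to your own sizing):

```shell
# Sanity-check the VM against the suggested specs (4 CPUs, 4 GB RAM, 100 GB disk)
cpus=$(nproc)
mem_gb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
disk_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

echo "CPUs: ${cpus}  RAM: ${mem_gb} GB  root disk: ${disk_gb} GB"
[ "$cpus" -ge 4 ]      || echo "warning: fewer than 4 CPUs"
[ "$mem_gb" -ge 3 ]    || echo "warning: less than ~4 GB RAM"  # ~4 GB shows as 3 after integer division
[ "$disk_gb" -ge 100 ] || echo "warning: root disk smaller than 100 GB"
```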

For this I used the RKE2 docs (https://docs.rke2.io/), the Rancher docs (https://ranchermanager.docs.rancher.com/) and this article (https://ranchergovernment.com/blog/article-simple-rke2-longhorn-and-rancher-install).

In your VM, enter sudo mode:

sudo su

We’ll prepare our VM:

systemctl stop ufw # stop the software firewall
systemctl disable ufw # disable the software firewall
apt update && apt upgrade -y # get and apply updates
apt install nfs-common -y # install nfs
apt autoremove -y # clean up

Now, install and enable RKE2

curl -sfL https://get.rke2.io | sh - # install rke2
systemctl enable rke2-server.service # enable rke2
systemctl start rke2-server.service # start rke2

It’ll take some time to start; you can open another terminal and follow the logs:

tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log # terminal 2
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log # terminal 3

RKE2 ships its own kubectl; we just need to create a symlink:

ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl
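To confirm the symlink works, you can ask the binary for its version (the --client flag avoids contacting the cluster, so it works even before the kubeconfig is set up):

```shell
kubectl version --client   # prints the client version if the symlink is correct
```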

To connect, we need the kubeconfig file:

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml # for the current session
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml" >> ~/.bashrc # for future sessions
source ~/.bashrc

Execute this to get the cluster node:

kubectl get nodes

You’ll get something like this:

NAME         STATUS   ROLES                       AGE   VERSION
vm-rancher   Ready    control-plane,etcd,master   27h   v1.28.10+rke2r1

Our cluster is ready to work, and the first step is almost done.
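Before moving on, it’s worth watching the system pods come up; everything in the system namespaces should eventually reach Running or Completed:

```shell
kubectl get pods -A          # list pods in all namespaces
# or keep watching until everything settles:
kubectl get pods -A --watch
```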

Install helm:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Add the Rancher and Jetstack (cert-manager) Helm repos:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo update # refresh the local chart index

First, install cert-manager. Rancher uses it to generate self-signed certificates:

helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
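The Rancher install below will fail if cert-manager’s webhook isn’t up yet, so it’s worth waiting for the deployments to finish rolling out (these deployment names are the chart’s defaults):

```shell
kubectl -n cert-manager rollout status deployment/cert-manager
kubectl -n cert-manager rollout status deployment/cert-manager-webhook
kubectl -n cert-manager rollout status deployment/cert-manager-cainjector
```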

And finally, the Rancher server:

helm install rancher rancher-latest/rancher \
--namespace cattle-system \
--create-namespace \
--set hostname=rancher.mydomain.com \
--set bootstrapPassword=admin
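Rather than guessing when it’s done, you can follow the rollout:

```shell
kubectl -n cattle-system rollout status deploy/rancher   # blocks until the deployment is ready
```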

It takes about 10 minutes to be ready, depending on the VM’s disk and memory (on a 7200 rpm SATA disk it took 40 minutes; on NVMe, less than 10).

On your machine, add the VM’s IP and hostname to your hosts file (/etc/hosts on Linux/macOS):

192.168.X.X       rancher.mydomain.com
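With the hosts entry in place, you can check reachability from the terminal before opening the browser (the -k flag skips verification of the self-signed certificate; Rancher’s /ping endpoint should answer with “pong”):

```shell
curl -k https://rancher.mydomain.com/ping   # expect: pong
```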

Access it in your browser and log in with the bootstrapPassword. Rancher will ask you to set (or generate) a new password; save it.

You now have a Kubernetes cluster and a Rancher server for visual interaction.

In the next part, we’ll start preparing our K8s cluster for workloads.

Cya.
