Setting up a Kubernetes Playground on a Dedicated Server

Rado Salov
7 min read · Dec 9, 2023


In cloud-native application development, finding a testing ground that balances security and cost-effectiveness is a common challenge. Come along as we walk through installing and configuring a private Kubernetes cluster on a server with a single IP address and a single network interface, opening the door to secure, budget-conscious experimentation.

Introduction

Why Kubernetes, you ask? For starters, its orchestration capabilities make it an ideal choice for managing multiple applications efficiently. The decision to go the private Kubernetes cluster route is twofold: security and cost-effectiveness. By keeping our experimentation within the confines of a private cluster, we not only fortify our applications against external threats but also keep costs in check.

For this experiment, I acquired a dedicated server with a Xeon E3-1230 processor (4 cores, 8 threads), 8GB of RAM, and 1TB of SATA storage. This modest but capable machine became the centerpiece of my setup.

However, a dedicated server with only one IP and network interface introduces its own set of challenges. How do we serve multiple applications in a Kubernetes environment with these limitations? Fear not, as we’ll unravel the technical intricacies step by step, focusing on practical solutions for your experimentation needs.

Embracing Simplicity: The Appeal of k0s for Streamlined Kubernetes Deployments

I opted for k0s as my Kubernetes distribution of choice due to its streamlined efficiency and simplicity. Specifically designed for low-memory environments, k0s excels in scenarios where resource constraints are a consideration, enabling the deployment of Kubernetes clusters even on servers with limited memory. Its unique feature of zero dependencies, distributed as a single binary, ensures compatibility with any operating system without the need for additional software packages or configurations.

The versatility of k0s is evident in its capability to run on a single server, making it an ideal solution for scenarios where a full-scale cluster may be impractical. Installation is a breeze, reducing complexity and allowing for the rapid bootstrapping of new Kubernetes clusters within minutes. This zero-friction approach eliminates barriers, enabling individuals with varying levels of expertise to effortlessly get started with Kubernetes.

Adding to its appeal is the fact that k0s is entirely free for both personal and commercial use, reinforcing its commitment to accessibility and affordability. With its source code available on GitHub under the Apache 2 license, k0s stands out as a cost-effective foundation for a wide range of Kubernetes projects. In essence, k0s offers a zero-friction, zero-dependency, and zero-cost solution, making it an attractive and straightforward choice for Kubernetes deployments of any scale.

Setting the Stage

My dedicated server is running Rocky Linux 9. Before proceeding, I updated the system to make sure it was current.

dnf update

After that I installed k0s (make sure /usr/local/bin is in your $PATH)

curl -sSLf https://get.k0s.sh | sudo sh
chmod +x /usr/local/bin/k0s
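Since the installer drops the binary into /usr/local/bin, it is worth a quick sanity check that this directory is actually on your $PATH before going further. A minimal sketch:

```shell
# Sketch: verify /usr/local/bin (where the k0s installer places the binary) is on $PATH
case ":$PATH:" in
  *:/usr/local/bin:*) PATH_HAS_LOCAL_BIN="yes" ;;
  *)                  PATH_HAS_LOCAL_BIN="no" ;;
esac
echo "/usr/local/bin on PATH: $PATH_HAS_LOCAL_BIN"
```

If this prints "no", add `export PATH=/usr/local/bin:$PATH` to your shell profile.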

Then I created a configuration file /etc/k0s/k0s.yaml

mkdir -p /etc/k0s
k0s config create > /etc/k0s/k0s.yaml

My configuration file looks like the following, where x.x.x.x is my server’s public IP. I also enabled the bundled OpenEBS local path provisioner.

apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  creationTimestamp: null
  name: k0s
spec:
  api:
    address: x.x.x.x
    k0sApiPort: 9443
    port: 6443
    sans:
      - x.x.x.x
  controllerManager: {}
  extensions:
    helm:
      charts: []
      concurrencyLevel: 5
      repositories: []
    storage:
      create_default_storage_class: true
      type: openebs_local_storage
  installConfig:
    users:
      etcdUser: etcd
      kineUser: kube-apiserver
      konnectivityUser: konnectivity-server
      kubeAPIserverUser: kube-apiserver
      kubeSchedulerUser: kube-scheduler
  konnectivity:
    adminPort: 8133
    agentPort: 8132
  network:
    calico: null
    clusterDomain: cluster.local
    dualStack: {}
    kubeProxy:
      iptables:
        minSyncPeriod: 0s
        syncPeriod: 0s
      ipvs:
        minSyncPeriod: 0s
        syncPeriod: 0s
        tcpFinTimeout: 0s
        tcpTimeout: 0s
        udpTimeout: 0s
      metricsBindAddress: 0.0.0.0:10249
      mode: iptables
    kuberouter:
      autoMTU: true
      hairpin: Enabled
      ipMasq: false
      metricsPort: 8080
      mtu: 0
      peerRouterASNs: ""
      peerRouterIPs: ""
    nodeLocalLoadBalancing:
      envoyProxy:
        apiServerBindPort: 7443
        konnectivityServerBindPort: 7132
      type: EnvoyProxy
    podCIDR: 10.244.0.0/16
    provider: kuberouter
    serviceCIDR: 10.96.0.0/12
  scheduler: {}
  storage:
    etcd:
      externalCluster: null
      peerAddress: x.x.x.x
    type: etcd
  telemetry:
    enabled: true

Install the k0s controller using the configuration file and start it

k0s install controller --single --enable-metrics-scraper --enable-worker -v -c /etc/k0s/k0s.yaml
k0s start
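At this point it is worth confirming the controller actually came up. A hedged sketch (it assumes k0s is installed on the host and skips the checks otherwise):

```shell
# Sketch: check that the k0s controller and its embedded worker are up
if command -v k0s >/dev/null 2>&1; then
  sudo k0s status || true              # shows version, role, and process ID
  sudo k0s kubectl get nodes || true   # the single node should report Ready
  K0S_CHECK="ran"
else
  echo "k0s not found on this machine; skipping"
  K0S_CHECK="skipped"
fi
```

The node can take a minute or two to reach Ready after the first start.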

I also installed helm and kubectl on the host and configured them to work with my new cluster

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | sh

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
EOF

dnf install -y kubectl

mkdir -p ~/.kube
k0s kubeconfig admin > ~/.kube/config
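With the kubeconfig in place, a quick sanity check that kubectl talks to the cluster and that the OpenEBS storage class exists (a sketch; the checks are skipped if kubectl or the kubeconfig is missing):

```shell
# Sketch: verify kubectl connectivity and the default storage class
if command -v kubectl >/dev/null 2>&1 && [ -f "$HOME/.kube/config" ]; then
  kubectl get nodes -o wide || true
  kubectl get storageclass || true   # expect the OpenEBS local PV class here
  KUBECTL_CHECK="ran"
else
  echo "kubectl or kubeconfig missing; skipping"
  KUBECTL_CHECK="skipped"
fi
```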

Overcoming the Singular IP Dilemma

To overcome the constraint of a single public IP, I adopted a multi-tiered approach.

First, to handle services of type LoadBalancer within the cluster, I chose MetalLB, a load-balancer implementation for bare-metal Kubernetes clusters that uses standard routing protocols

helm repo add metallb https://metallb.github.io/metallb --force-update
helm upgrade -i --create-namespace -n metallb-system metallb metallb/metallb --version 0.13.12 --wait

kubectl -n metallb-system apply -f - <<EOF
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-pool
spec:
  addresses:
    - 192.168.100.0/26
  avoidBuggyIPs: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: my-adv
EOF
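Note that 192.168.100.0/26 is a private range that never leaves the host; the reverse proxy set up later bridges the public IP to it. An L2Advertisement with an empty spec advertises all pools; if you prefer to be explicit, you can pin it to the pool by name (an optional sketch):

```yaml
# Optional: restrict the advertisement to a specific pool
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: my-adv
spec:
  ipAddressPools:
    - my-pool
```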

Traefik Ingress is employed as a powerful edge router

helm repo add traefik https://traefik.github.io/charts --force-update
helm upgrade -i --create-namespace -n ingress-traefik traefik traefik/traefik --version 20.5.3 --wait

Once it is installed and running, check that it received an IP from the load-balancer range. As the first service to request one, it should get 192.168.100.1

kubectl -n ingress-traefik get service

# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# traefik LoadBalancer 10.104.44.171 192.168.100.1 80:31237/TCP,443:30971/TCP 2d

Cert-manager is integrated for the automated management of TLS certificates, enhancing the security of communication. Trust-manager complements it by distributing trusted CA bundles across the cluster.

helm repo add jetstack https://charts.jetstack.io --force-update
helm upgrade -i --create-namespace -n cert-manager cert-manager jetstack/cert-manager --version v1.13.2 --set installCRDs=true --set webhook.networkPolicy.enabled=true --set webhook.securePort=10260 --wait
helm upgrade -i -n cert-manager trust-manager jetstack/trust-manager --version v0.7.0 --wait

# Make sure you replace YOUR.EMAIL@EXAMPLE.COM below
kubectl -n cert-manager apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: YOUR.EMAIL@EXAMPLE.COM
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: YOUR.EMAIL@EXAMPLE.COM
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: traefik
EOF

If you need to issue certificates signed by your own private CA, you can add the following configuration

kubectl -n cert-manager create secret tls my-ca-secret --cert=path/to/ca.crt --key=path/to/ca.key


kubectl -n cert-manager apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-ca
spec:
  ca:
    secretName: my-ca-secret
EOF
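To exercise the new issuer, you can request a certificate from it. A minimal sketch; the names and DNS entry below are purely illustrative:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-internal-tls        # illustrative name
  namespace: default
spec:
  secretName: demo-internal-tls  # where the signed certificate will be stored
  dnsNames:
    - demo.internal.example      # illustrative hostname
  issuerRef:
    name: my-ca
    kind: ClusterIssuer
```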

In front of all that, a Traefik reverse proxy is installed on the host machine and configured to forward external traffic to the Traefik Ingress inside the Kubernetes cluster

wget https://github.com/traefik/traefik/releases/download/v2.10.6/traefik_v2.10.6_linux_amd64.tar.gz
tar -zxvf traefik_v2.10.6_linux_amd64.tar.gz
install -o root -g root -m 0755 traefik /usr/local/bin/traefik

Give the traefik binary the ability to bind to privileged ports (e.g. 80, 443) as a non-root user

setcap 'cap_net_bind_service=+ep' /usr/local/bin/traefik
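You can confirm the capability stuck with getcap (a sketch; skipped if the binary or getcap is not available yet):

```shell
# Sketch: verify the file capability on the traefik binary
if [ -x /usr/local/bin/traefik ] && command -v getcap >/dev/null 2>&1; then
  getcap /usr/local/bin/traefik   # expect: cap_net_bind_service=ep
  CAP_CHECK="ran"
else
  echo "traefik binary or getcap not available; skipping"
  CAP_CHECK="skipped"
fi
```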

Set up the user, group, and directories that will be needed

groupadd -g 321 traefik

useradd \
-g traefik --no-user-group \
--home-dir /var/www --no-create-home \
--shell /usr/sbin/nologin \
--system --uid 321 traefik

mkdir -p /etc/traefik/acme
mkdir -p /etc/traefik/conf.d
mkdir -p /var/log/traefik

Create /etc/traefik/traefik.yaml file with the following contents

providers:
  file:
    directory: /etc/traefik/conf.d
    watch: true
log:
  filePath: "/var/log/traefik/traefik.log"
  level: DEBUG
accessLog:
  filePath: "/var/log/traefik/access.log"
  bufferingSize: 100
entryPoints:
  web:
    address: ":80/tcp"
    proxyProtocol:
      trustedIPs:
        - "127.0.0.1/32"
        - "192.168.100.0/26"
  websecure:
    address: ":443/tcp"
    proxyProtocol:
      trustedIPs:
        - "127.0.0.1/32"
        - "192.168.100.0/26"
  # the metrics section below references this entry point, so it must be defined
  metrics:
    address: ":8082/tcp"
api:
  dashboard: false
  insecure: false
metrics:
  prometheus:
    entryPoint: metrics
    addServicesLabels: true

Create /etc/traefik/conf.d/ingress.yaml file with the following contents

tcp:
  routers:
    proxy-http-router:
      entryPoints:
        - "web"
      rule: "HostSNI(`*`)"
      service: "ingress-http"
    proxy-https-router:
      entryPoints:
        - "websecure"
      rule: "HostSNI(`*`)"
      service: "ingress-https"
      tls:
        passthrough: true
  services:
    ingress-http:
      loadBalancer:
        servers:
          - address: "192.168.100.1:80"
    ingress-https:
      loadBalancer:
        servers:
          - address: "192.168.100.1:443"

Fix file and directory permissions

chown -R root:root /etc/traefik
chown -R traefik:traefik /etc/traefik/acme
chown -R traefik:traefik /etc/traefik/conf.d
chown -R traefik:traefik /var/log/traefik

Create /etc/systemd/system/traefik.service file with the following contents

[Unit]
Description=Traefik Proxy
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service
Requires=k0scontroller.service

[Service]
Restart=on-abnormal
User=traefik
Group=traefik
ExecStart=/usr/local/bin/traefik
LimitNOFILE=1048576
PrivateTmp=true
PrivateDevices=false
ProtectHome=true
ProtectSystem=full
ReadWriteDirectories=-/etc/traefik/acme
ReadWriteDirectories=-/etc/traefik/conf.d
ReadWriteDirectories=-/var/log/traefik
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

After that, start and enable the service

chown root: /etc/systemd/system/traefik.service
chmod 644 /etc/systemd/system/traefik.service
systemctl daemon-reload
systemctl start traefik.service
systemctl enable traefik.service
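A quick way to confirm the host proxy is healthy (a sketch; it simply reports not-running on machines without the unit):

```shell
# Sketch: report whether the traefik systemd unit is active
if systemctl is-active --quiet traefik.service 2>/dev/null; then
  TRAEFIK_STATE="running"
else
  TRAEFIK_STATE="not-running"   # inspect with: journalctl -u traefik.service -n 50
fi
echo "traefik service: $TRAEFIK_STATE"
```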

Test the setup

You can test the setup by pointing a domain at your public IP and creating a Deployment, a Service, and an Ingress for the whoami container. In this example we use whoami-test.com

kubectl create namespace whoami

kubectl -n whoami apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami-container
          image: traefik/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - whoami-test.com
      secretName: letsencrypt-prod
  rules:
    - host: whoami-test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  name: http
EOF

Now if you go to https://whoami-test.com you should see the whoami container’s plain-text response echoing your request details over a valid TLS connection.
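From any machine with internet access you can also exercise the whole path with curl (a sketch; whoami-test.com is the placeholder domain from the example, so substitute your own):

```shell
# Sketch: hit the ingress end to end; the whoami pod echoes request details
DOMAIN="whoami-test.com"   # placeholder; replace with your domain
if curl -fsS --max-time 10 "https://$DOMAIN" 2>/dev/null; then
  CURL_CHECK="reachable"
else
  CURL_CHECK="unreachable"
fi
echo "https://$DOMAIN: $CURL_CHECK"
```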

Conclusion

Your dedicated server running Rocky Linux 9 now stands transformed into a budget-friendly Kubernetes playground. Through the strategic use of k0s and thoughtful configuration, you have a foundation for secure and cost-effective cloud-native experimentation. Let the testing begin!
