kubelet + API Server + ETCD + kubectl + Scheduler + kube-controller-manager + kube-proxy

Elie
Dec 22, 2022 · 5 min read


This post is part of the “run Kubernetes components one by one” series.

kube-proxy is a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept. It maintains network rules on nodes; these rules allow network communication to your pods from network sessions inside or outside of your cluster.

As we did for the other components, let’s download it from https://dl.k8s.io/v1.25.1/bin/linux/amd64/kube-proxy

$ wget https://dl.k8s.io/v1.25.1/bin/linux/amd64/kube-proxy
$ chmod +x ./kube-proxy
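
Optionally, we can verify the download against the SHA-256 checksum published alongside each release binary (the .sha256 URL below assumes the standard dl.k8s.io layout):

## optional: verify the downloaded binary
$ wget https://dl.k8s.io/v1.25.1/bin/linux/amd64/kube-proxy.sha256
$ echo "$(cat kube-proxy.sha256)  kube-proxy" | sha256sum --check
kube-proxy: OK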

I ran into a tricky issue here. As mentioned in a previous post, the spec of the Oracle Cloud free VM is very low, so it cannot run all the K8s components at once. As we can see below, the dockerd process is consuming a lot of resources, so I am going to free some up by switching to containerd as the container runtime, which gets rid of the Docker engine.

top - 09:13:54 up 139 days,  5:12,  2 users,  load average: 9.64, 9.79, 17.42
Tasks: 249 total, 2 running, 247 sleeping, 0 stopped, 0 zombie
%Cpu(s): 4.4 us, 9.9 sy, 0.0 ni, 1.3 id, 15.4 wa, 0.0 hi, 1.2 si, 67.9 st
MiB Mem : 682.3 total, 114.6 free, 447.4 used, 120.3 buff/cache
MiB Swap: 1364.0 total, 655.8 free, 708.2 used. 122.7 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
294483 root 20 0 2071436 60380 5284 S 23.8 8.6 14347:51 dockerd
93 root 20 0 0 0 0 D 3.6 0.0 1350:40 kswapd0:0
2691041 root 20 0 1111736 71256 14068 S 2.0 10.2 9:14.70 kube-apiserver

OK, let’s install containerd following the guide at https://github.com/containerd/containerd/blob/main/docs/getting-started.md

## install containerd
$ wget https://github.com/containerd/containerd/releases/download/v1.6.14/containerd-1.6.14-linux-amd64.tar.gz
$ tar Cxvzf /usr/local/ containerd-1.6.14-linux-amd64.tar.gz
bin/
bin/containerd-stress
bin/containerd-shim
bin/containerd-shim-runc-v1
bin/containerd-shim-runc-v2
bin/containerd
$
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ cp containerd.service /usr/local/lib/systemd/system/containerd.service
$ systemctl daemon-reload
$ systemctl enable --now containerd

## install runc
$ wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
$ install -m 755 runc.amd64 /usr/local/sbin/runc
$ ls /usr/local/sbin
runc
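
A quick sanity check that runc is installed and executable (the first line of the output should report version 1.1.4):

$ /usr/local/sbin/runc --version
runc version 1.1.4
...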

$ cat /etc/containerd/config.toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

$ systemctl start containerd.service
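
By the way, if you’d rather not hand-write the config, containerd can generate a complete default one that you can then edit; the SystemdCgroup setting lives under the same runc options table shown above:

## optional: generate a full default config, then set SystemdCgroup = true in it
$ mkdir -p /etc/containerd
$ containerd config default > /etc/containerd/config.toml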

Now that containerd is running, we need to update kubelet to use it. Two changes are needed: 1> point the container runtime endpoint at containerd’s socket (/run/containerd/containerd.sock); 2> change the cgroupDriver to systemd in the KubeletConfiguration object.

OK, with the two changes in place, let’s restart kubelet.

## change cgroupDriver: "systemd", otherwise, pod creation will fail
$ cat kubeletConfigFile.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enableServer: false
staticPodPath: /home/opc/k8s/kubelet-static-pod
failSwapOn: false
readOnlyPort: 10250
cgroupDriver: "systemd"
podCIDR: 10.241.1.0/24 # podCIDR is the CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the control plane
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
authorization:
  mode: AlwaysAllow
$
## set --container-runtime-endpoint=unix:///run/containerd/containerd.sock
$ ./kubelet --config=/home/opc/k8s/configs/kubeletConfigFile.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/home/opc/k8s/configs/kubelet.kubeconfig
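
To confirm kubelet has really switched runtimes, the wide output of kubectl get nodes has a CONTAINER-RUNTIME column; it should now report containerd:// instead of docker:// for our node:

$ kubectl get nodes -o wide
## the CONTAINER-RUNTIME column should now read containerd://1.6.14 for instance-20220803-1159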

Since kube-proxy basically reflects the Services defined in the cluster and manages the rules to load-balance requests to a Service’s backend pods, the first thing in my head is: what happens if we create a Service without kube-proxy running?

Let’s create a Service of type NodePort for the nginx deployment.

$ cat nginx-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx-service-port
    protocol: TCP
    port: 8080
    targetPort: 80

$ kubectl apply -f nginx-service.yaml
service/nginx-service created
$
$ kubectl describe svc nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.243.24
IPs: 10.0.243.24
Port: nginx-service-port 8080/TCP
TargetPort: 80/TCP
NodePort: nginx-service-port 32039/TCP
Endpoints: 10.1.33.185:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-5985d86c9c-lqz25 2/2 Running 0 104s 10.1.33.185 instance-20220803-1159 <none> <none>
$

OK, now the Endpoints of the nginx service is 10.1.33.185:80, which is exactly the backend pod.
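
We can also read this mapping straight from the Endpoints object that the endpoints controller maintains for the Service (AGE elided):

$ kubectl get endpoints nginx-service
NAME            ENDPOINTS        AGE
nginx-service   10.1.33.185:80   ...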

Let’s see whether, without kube-proxy, the service can reach the backend pods. We use curl to send a request to the NodePort of the nginx-service. Unsurprisingly, it fails: without kube-proxy, the node doesn’t know how to forward the request.

## 32039 is the NodePort of the nginx-service
$ curl http://127.0.0.1:32039
curl: (7) Failed to connect to 127.0.0.1 port 32039: Connection refused
$

Let’s start kube-proxy. It also needs a kubeconfig file for startup; we can reuse the one we used for kubelet.

$ ./kube-proxy --kubeconfig=/home/opc/k8s/configs/kubelet.kubeconfig
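
A side note: like kubelet, kube-proxy can also take a configuration file instead of command-line flags. A minimal sketch, reusing the same kubeconfig path (field names come from the kubeproxy.config.k8s.io/v1alpha1 API):

$ cat kube-proxy-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:
  kubeconfig: /home/opc/k8s/configs/kubelet.kubeconfig
mode: "iptables"  # the default on Linux
$ ./kube-proxy --config=kube-proxy-config.yaml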

OK, kube-proxy is running now. Let’s recreate the service and try the curl command again (note the new nginx-service uses a different nodePort, 32342).

$ kubectl delete -f nginx-service.yaml
$ kubectl apply -f nginx-service.yaml
service/nginx-service created
$ kubectl describe svc nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.89.93
IPs: 10.0.89.93
Port: nginx-service-port 8080/TCP
TargetPort: 80/TCP
NodePort: nginx-service-port 32342/TCP
Endpoints: 10.1.33.185:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

## let's try the curl again
$ curl http://127.0.0.1:32342
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

So, with kube-proxy enabled in our environment, the service can locate the backend pod successfully. kube-proxy runs in iptables proxy mode by default; it creates several iptables rules to achieve this. Here are all the rules that make nginx-service work.

$ iptables -t nat -L KUBE-SERVICES | column -t
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere _gateway /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-SNCKDHZOK725QNCM tcp -- anywhere 10.0.89.93 /* default/nginx-service:nginx-service-port cluster IP */ tcp dpt:webcache
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

$ iptables -t nat -L KUBE-NODEPORTS | column -t
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-EXT-SNCKDHZOK725QNCM tcp -- anywhere anywhere /* default/nginx-service:nginx-service-port */ tcp dpt:32342

$ iptables -t nat -L KUBE-EXT-SNCKDHZOK725QNCM | column -t
Chain KUBE-EXT-SNCKDHZOK725QNCM (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- anywhere anywhere /* masquerade traffic for default/nginx-service:nginx-service-port external destinations */
KUBE-SVC-SNCKDHZOK725QNCM all -- anywhere anywhere

$ iptables -t nat -L KUBE-SVC-SNCKDHZOK725QNCM | column -t
Chain KUBE-SVC-SNCKDHZOK725QNCM (2 references)
target prot opt source destination
KUBE-SEP-ZYKNFON56XH2N7SZ all -- anywhere anywhere /* default/nginx-service:nginx-service-port -> 10.1.33.185:80 */

$ iptables -t nat -L KUBE-SEP-ZYKNFON56XH2N7SZ | column -t
Chain KUBE-SEP-ZYKNFON56XH2N7SZ (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.1.33.185 anywhere /* default/nginx-service:nginx-service-port */
DNAT tcp -- anywhere anywhere /* default/nginx-service:nginx-service-port */ tcp to:10.1.33.185:80
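
Walking the chains one by one is instructive, but we can also dump every nat rule kube-proxy created for this service in one shot by grepping the service’s chain hash (SNCKDHZOK725QNCM above) out of the full table:

## list all nat rules for nginx-service in iptables-save syntax
$ iptables -t nat -S | grep SNCKDHZOK725QNCM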

OK, I think we can end this series here. We have covered all the essential components of a functional Kubernetes cluster. Admittedly, compared with the whole Kubernetes ecosystem, what we have learned is quite basic, but I hope the series has been useful to you.

Thanks for reading!

Keep learning, Keep growing!!!

Check out the other posts of this series at Kubernetes 1.24+ components one by one series | by Elie | Medium
