kubelet + kube-apiserver + kubectl + scheduler + kube-controller-manager

Elie
4 min read · Sep 20, 2022


This post is part of the “Run Kubernetes components one by one” series.

Now, let’s introduce the Controller Manager into our picture. The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state toward the desired state.
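
To make the idea concrete, here is a minimal, purely illustrative sketch of what such a loop boils down to (get_desired_state, get_current_state, and reconcile are made-up names for this example, not real commands):

# Illustrative pseudocode only -- not a real controller
while true; do
  desired=$(get_desired_state)   # e.g. spec.replicas as stored in the apiserver
  current=$(get_current_state)   # e.g. the number of pods actually running
  if [ "$current" != "$desired" ]; then
    reconcile                    # create or delete pods to converge on the desired state
  fi
  sleep 1
done

Real controllers use watches on the apiserver rather than polling in a sleep loop, but the reconcile-until-converged shape is the same.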

For example, probably the most well-known controller is the Replication Controller, which is responsible for ensuring that the specified number of pod replicas is running at any point in time. (Note that the Replication Controller is being superseded by ReplicaSets, the next-generation Replication Controller.)

There are many other types of controllers, e.g.

  • Node controller: Responsible for noticing and responding when nodes go down.
  • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
  • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

Remember when we first created a pod by sending a request to kube-apiserver, it failed with the error below, and we worked around it by disabling the ServiceAccount admission plugin when starting kube-apiserver. It is now clear that the default service account is created by the Service Account controller, so without that controller running in our environment, unsurprisingly, pod creation fails.

$ curl -k -H "Content-Type: application/json" -H "Authorization: Bearer kubeapiserverdummytoken" -X POST https://127.0.0.1:6443/api/v1/namespaces/default/pods --data @nginx.json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"nginx\" is forbidden: error looking up service account default/default: serviceaccount \"default\" not found",
  "reason": "Forbidden",
  "details": {
    "name": "nginx",
    "kind": "pods"
  },
  "code": 403
}

The main implementation of the Controller Manager is kube-controller-manager. OK, let’s download kube-controller-manager from https://dl.k8s.io/v1.25.0/bin/linux/amd64/kube-controller-manager

$ cd /home/opc/k8s
$ wget https://dl.k8s.io/v1.25.0/bin/linux/amd64/kube-controller-manager
$ chmod +x ./kube-controller-manager
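
Optionally, we can verify the binary against its published checksum; dl.k8s.io serves a .sha256 file next to each binary (the same pattern the official docs use for kubectl):

$ wget https://dl.k8s.io/v1.25.0/bin/linux/amd64/kube-controller-manager.sha256
$ echo "$(cat kube-controller-manager.sha256)  kube-controller-manager" | sha256sum --check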

Since kube-controller-manager also needs to talk to the apiserver, we need to pass a kubeconfig file to it at startup as well.

$ ./kube-controller-manager --kubeconfig=/home/opc/k8s/configs/kubelet.kubeconfig
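
To quickly confirm it came up, we can probe its health endpoint from another terminal (assuming the default secure port 10257); it should return ok:

$ curl -k https://127.0.0.1:10257/healthz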

Let’s check whether the default service account has been created by kube-controller-manager

$ kubectl get sa
NAME      SECRETS   AGE
default   0         20s
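
Since the Service Account controller creates a default service account for every new namespace, the same check across all namespaces should show one default account per namespace:

$ kubectl get sa --all-namespaces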

Great, that is what we expected. So if we remove the ServiceAccount admission plugin from the disabled plugins list (i.e., re-enable it) and restart the apiserver, a new pod should be created successfully. Let’s try

$ ./kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.0.0.0/16 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-signing-key-file=/home/opc/k8s/certs/service-account-key.pem --service-account-key-file=/home/opc/k8s/certs/service-account-pub.pem --token-auth-file=/home/opc/k8s/token_auth_file
$ kubectl apply -f nginx.yaml
pod/nginx-with-serviceaccount-admission-plugin created
$ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
nginx                                        2/2     Running   0          10d
nginx-with-serviceaccount-admission-plugin   2/2     Running   0          5m4s
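
We can also confirm that the re-enabled ServiceAccount admission plugin attached the default service account to the new pod; this should print default:

$ kubectl get pod nginx-with-serviceaccount-admission-plugin -o jsonpath='{.spec.serviceAccountName}'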

Great. With the default service account created by the Service Account controller, you probably have a question: we only installed kube-controller-manager, so where does the Service Account controller come from? Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process. In other words, kube-controller-manager contains all the controllers.
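
You can check which controllers the binary runs via its --controllers flag, which defaults to '*' (all on-by-default controllers); the exact help text varies by version:

$ ./kube-controller-manager --help 2>&1 | grep -A 3 -- '--controllers'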

OK, since we now have the controller manager running, we can create a Deployment instead of a Pod directly to see how ReplicaSets (as mentioned earlier, the next generation of the Replication Controller) work.

Let’s convert the Pod manifest to a Deployment manifest

$ cat nginx-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/log/nginx
          name: nginx-logs
      - name: log-truncator
        image: busybox
        command:
        - /bin/sh
        args: [-c, 'while true; do cat /dev/null > /logdir/access.log; sleep 10; done']
        volumeMounts:
        - mountPath: /logdir
          name: nginx-logs
      volumes:
      - name: nginx-logs
        emptyDir: {}
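
Before creating it, we can sanity-check the manifest client-side, without touching the cluster, using a dry run:

$ kubectl apply --dry-run=client -f nginx-deployment.yaml
deployment.apps/nginx created (dry run)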

Let’s create it

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created
## ReplicaSet
$ kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
nginx-5b756cdf7   2         0         0       3s
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5b756cdf7-kbhbs   2/2     Running   0          89s
nginx-5b756cdf7-tnp4w   2/2     Running   0          89s

As we can see, the Deployment creates a ReplicaSet, and the ReplicaSet ensures the desired number of pods is created.
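
This ownership chain is recorded in each object’s ownerReferences: the ReplicaSet points back at the Deployment, and each pod points back at the ReplicaSet:

$ kubectl get rs nginx-5b756cdf7 -o jsonpath='{.metadata.ownerReferences[0].kind}'
Deployment
$ kubectl get pod nginx-5b756cdf7-tnp4w -o jsonpath='{.metadata.ownerReferences[0].kind}'
ReplicaSet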

Let’s remove one pod, and as you can see, a new pod is created immediately to ensure the replica count defined in the Deployment is met.

$ kubectl delete pod nginx-5b756cdf7-kbhbs
pod "nginx-5b756cdf7-kbhbs" deleted
$ kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-5b756cdf7-grrsr   0/2     ContainerCreating   0          8s
nginx-5b756cdf7-kbhbs   2/2     Terminating         0          3m27s
nginx-5b756cdf7-tnp4w   2/2     Running             0          3m27s
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5b756cdf7-grrsr   2/2     Running   0          48s
nginx-5b756cdf7-tnp4w   2/2     Running   0          4m7s
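
The same reconciliation applies when we change the desired state explicitly, for example by scaling the Deployment; the controller will create (or delete) pods until the count matches again:

$ kubectl scale deployment nginx --replicas=3
deployment.apps/nginx scaled
$ kubectl get pods
## scale back when done experimenting
$ kubectl scale deployment nginx --replicas=2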

OK, we have successfully added kube-controller-manager to our cluster. Basically, a controller is a loop that checks the cluster state through the apiserver and then takes the actions needed to move the current state toward the desired state.

We have now installed most of the core Kubernetes components in our environment; only kube-proxy is missing, and we will explore it in the next post.

Thanks for reading.

Check out the other posts of this series: Kubernetes 1.24+ components one by one series | by Elie | Medium
