kubelet + kube-apiserver + kubectl + scheduler

Elie
Sep 9, 2022


This post is part of the "Run Kubernetes components one by one" series.

Compared with the other posts, this will be a simple one, since adding kubectl and the Scheduler to our existing environment is very straightforward.

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

Let’s do it.

In the previous post, we used curl to send requests to the API Server, but curl isn't the best option for talking to a Kubernetes cluster; that is exactly the job kubectl was built for.

We can download kubectl from the same page as kubelet and the API Server: https://dl.k8s.io/v1.25.0/bin/linux/386/kubectl

$ cd /home/opc/k8s
$ wget https://dl.k8s.io/v1.25.0/bin/linux/386/kubectl
$ chmod +x ./kubectl
## Add the directory to the PATH environment variable so we can call kubectl directly later
$ grep PATH ~/.bashrc
PATH=$PATH:/home/opc/k8s/etcd/etcd-v3.4.20-linux-amd64:/home/opc/k8s

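As a quick sanity check, kubectl can report its own version without talking to any cluster; since we downloaded the v1.25.0 binary, that is what it should print:

$ kubectl version --client
## prints only the client version (v1.25.0 here); no cluster connection is needed
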
To make kubectl work, we need to tell it where the cluster is located. Sounds familiar, right? We mentioned previously that kubelet uses a kubeconfig file to register the node with the API Server; following the same pattern, that kubeconfig file can also be used by kubectl.

We can set the KUBECONFIG environment variable to point to the kubeconfig file; if the KUBECONFIG environment variable doesn't exist, kubectl falls back to the default kubeconfig file $HOME/.kube/config. Let's try both.

## use environment variable
$ export KUBECONFIG=/home/opc/k8s/configs/kubelet.kubeconfig
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 2/2 Running 0 106m
## After unsetting KUBECONFIG, kubectl no longer works
$ unset KUBECONFIG
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$
## Create the kubeconfig file in $HOME/.kube
$ mkdir -p $HOME/.kube
$ cp /home/opc/k8s/configs/kubelet.kubeconfig $HOME/.kube/config
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 2/2 Running 0 129m
$

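If you are curious about what kubectl is actually reading, the kubectl config subcommands print the active kubeconfig; the exact cluster name, user, and server address will be whatever was generated for kubelet in the earlier posts:

$ kubectl config view
## lists the clusters, users and contexts kubectl knows about (credentials are redacted)
$ kubectl config current-context
## shows which context kubectl will use by default
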
OK, kubectl is working. Now that kubectl has come into the picture, it feels like we are operating a real cluster. Great!!!

Time to install the Scheduler, which watches for newly created Pods that have no Node assigned. For every Pod it discovers, the Scheduler becomes responsible for finding the best Node for that Pod to run on.

Let’s delete the Pod we created before and remove the nodeName property from nginx.yaml.

$ kubectl delete pod nginx
pod "nginx" deleted
## removed nodeName from nginx.yaml
$ grep nodeName nginx.yaml
## Try to create the pod without the Scheduler running
$ kubectl apply -f nginx.yaml
pod/nginx created
## The pod will stay Pending forever, as there is no Scheduler to tell kubelet where to place it
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/2 Pending 0 15s

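We can also confirm that nothing has filled in the Pod's spec.nodeName; with no Scheduler running, the field simply stays empty:

$ kubectl get pod nginx -o jsonpath='{.spec.nodeName}'
## prints nothing, because no node has been assigned yet
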
OK, follow the same steps to download the Scheduler from https://dl.k8s.io/v1.25.0/bin/linux/amd64/kube-scheduler

$ cd /home/opc/k8s
$ wget https://dl.k8s.io/v1.25.0/bin/linux/amd64/kube-scheduler
$ chmod +x ./kube-scheduler

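A quick check that the binary runs before wiring it up:

$ ./kube-scheduler --version
## should report v1.25.0
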
And again, we need to pass a kubeconfig file to the Scheduler at startup; we can reuse the same one we created for kubelet.

## delete the pending pod first
$ kubectl delete pod nginx
pod "nginx" deleted
## start kube scheduler
$ ./kube-scheduler --kubeconfig=/home/opc/k8s/configs/kubelet.kubeconfig

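Note that kube-scheduler stays in the foreground, so either keep it in its own terminal or, if you prefer, push it to the background and watch a log file (the file name below is just an arbitrary choice):

$ nohup ./kube-scheduler --kubeconfig=/home/opc/k8s/configs/kubelet.kubeconfig > scheduler.log 2>&1 &
$ tail -f scheduler.log
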
Let’s create the pod again, without nodeName specified.

$ kubectl apply -f nginx.yaml
pod/nginx created
## The pod is still in Pending state; describe shows the node has a taint
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/2 Pending 0 15h
$ kubectl describe pod nginx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 19m (x163 over 15h) default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

But the pod is still in the Pending state, and the event shows the node has a taint configured. I am not very sure why it is there; anyway, let’s remove the taint.

Before that, if you would like to know more about the taint concept, you can take a look at this page. Basically, a taint allows a node to repel a set of pods and is configured at the node level; the corresponding setting at the pod level is tolerations, which allow the Scheduler to schedule pods onto nodes with matching taints.

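For completeness: instead of removing the taint, we could also have added a matching toleration to the Pod spec in nginx.yaml. A rough sketch of that alternative (not what we will do here) would look like this:

spec:
  tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoSchedule"

But since the node reports Ready, the taint should not be needed at all, so let’s simply remove it:
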
$ kubectl get node
NAME STATUS ROLES AGE VERSION
instance-20220803-1159 Ready <none> 43h v1.25.0
$ kubectl describe node instance-20220803-1159 | grep Taint
Taints: node.kubernetes.io/not-ready:NoSchedule
$ kubectl taint nodes instance-20220803-1159 node.kubernetes.io/not-ready:NoSchedule-
node/instance-20220803-1159 untainted
$ kubectl describe node instance-20220803-1159 | grep Taint
Taints: <none>
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 2/2 Running 0 15h
$ kubectl describe pod nginx
...
Normal Scheduled 56s default-scheduler Successfully assigned default/nginx to instance-20220803-1159

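The wide output of kubectl get pod also shows the assigned node in its own column:

$ kubectl get pod nginx -o wide
## the NODE column now shows instance-20220803-1159
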
The Pod successfully landed on the node instance-20220803-1159, so the Scheduler has now come into our picture as well.

Let’s recall what we have so far:

  1. kubelet
  2. API Server
  3. etcd
  4. kubectl
  5. Container runtime
  6. Scheduler

Based on the Kubernetes Components diagram, we are still missing some of the components (e.g. the Controller Manager and kube-proxy).

Let’s continue the journey!!!

Thanks for reading!!

Check out the other posts of this series: Kubernetes 1.24+ components one by one series | by Elie | Medium
