The `kubectl run` command

A little Kubernetes teardown.

In the following I assume you’re somewhat familiar with containers in general and Docker in particular. That is, you at least know what `docker run` does.


So, have you ever thought about what happens when you execute the following, rather innocent-looking command?

$ kubectl run nginx --image=nginx --replicas=1

Now, as one would expect, a container based on the nginx image gets launched:

$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-nvcnl   1/1       Running   0          13s

Let’s have a look at this little fella:

$ kubectl describe pod nginx-nvcnl
Name:                           nginx-nvcnl
Namespace:                      default
Image(s):                       nginx
Node:                           10.0.3.254/10.0.3.254
Start Time:                     Wed, 09 Dec 2015 10:34:18 +0000
Labels:                         run=nginx
Status:                         Running
Reason:
Message:
IP:                             172.17.0.8
Replication Controllers:        nginx (1/1 replicas created)
Containers:
  nginx:
    Container ID:       docker://c6ad6d6bb20ca7b4ae7e706c417d60a3f2b5fef8b0620f84d6ac478cafbe1776
    Image:              nginx
    Image ID:           docker://198a73cfd6864ec3d349cf8f146382cca9584a56c3b80f28b7318c9895fb0ae3
    State:              Running
      Started:          Wed, 09 Dec 2015 10:34:18 +0000
    Ready:              True
    Restart Count:      0
    Environment Variables:
Conditions:
  Type          Status
  Ready         True
No volumes.
Events:
  FirstSeen   LastSeen   Count   From                   SubobjectPath                       Reason      Message
  ─────────   ────────   ─────   ────                   ─────────────                       ──────      ───────
  8m          8m         1       {scheduler }                                               Scheduled   Successfully assigned nginx-nvcnl to 10.0.3.254
  8m          8m         1       {kubelet 10.0.3.254}   implicitly required container POD   Pulled      Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  8m          8m         1       {kubelet 10.0.3.254}   implicitly required container POD   Created     Created with docker id 25370982dd29
  8m          8m         1       {kubelet 10.0.3.254}   implicitly required container POD   Started     Started with docker id 25370982dd29
  8m          8m         1       {kubelet 10.0.3.254}   spec.containers{nginx}              Pulled      Container image "nginx" already present on machine
  8m          8m         1       {kubelet 10.0.3.254}   spec.containers{nginx}              Created     Created with docker id c6ad6d6bb20c
  8m          8m         1       {kubelet 10.0.3.254}   spec.containers{nginx}              Started     Started with docker id c6ad6d6bb20c
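By the way, if you only want a single field out of that wall of text, plain grep or awk does the job. A minimal sketch, with a few lines of the describe output inlined as sample data so the snippet is self-contained; on a live cluster you’d pipe the output of `kubectl describe pod nginx-nvcnl` instead:

```shell
# Sample lines from the `kubectl describe pod` output above:
describe_output='Name: nginx-nvcnl
Status: Running
IP: 172.17.0.8'

# Extract just the pod IP (second column of the line starting with "IP:"):
echo "$describe_output" | awk '/^IP:/ {print $2}'
# → 172.17.0.8
```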

There’s a lot going on here:

  • The scheduler picked a node to launch the container on (see the ‘Events’ section at the bottom): Successfully assigned nginx-nvcnl to 10.0.3.254
  • The pod got set up: gcr.io/google_containers/pause:0.8.0 is the infrastructure container that holds the networking namespace, for example
  • The kubelet found that the nginx Docker image was already present on the node (otherwise it would first have pulled it from the registry)
  • And finally, the nginx container was launched

In addition to the pod, kubectl run also gives you a sort of guard for the pod, for free:

$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS   AGE
nginx        nginx          nginx      run=nginx   1          10m

The name of the guard, technically called a replication controller, is `nginx` — which happens to be the first argument of the run command.

$ kubectl describe rc nginx
Name:           nginx
Namespace:      default
Image(s):       nginx
Selector:       run=nginx
Labels:         run=nginx
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Reason             Message
  ─────────   ────────   ─────   ────                        ─────────────   ──────             ───────
  10m         10m        1       {replication-controller }                   SuccessfulCreate   Created pod: nginx-nvcnl
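Note the Selector: run=nginx line: the RC doesn’t keep references to its pods, it simply considers every pod whose labels match its selector as one of its replicas. On a live cluster, `kubectl get pods -l run=nginx` would list exactly those pods. Here is a toy illustration of that label matching in plain shell, with a made-up pod inventory standing in for the API (no real cluster involved):

```shell
# Hypothetical pod inventory: "name labels" pairs
pods='nginx-nvcnl run=nginx
mysql-x1b2z app=db'

# Equivalent in spirit to: kubectl get pods -l run=nginx
echo "$pods" | awk '$2 == "run=nginx" {print $1}'
# → nginx-nvcnl
```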

But what if something bad happens to our little pod? Say, a careless or malicious user does the following:

$ kubectl delete pod nginx-nvcnl
pod "nginx-nvcnl" deleted
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-4kjnj   0/1       Running   0          11s

Well, that is interesting. The replication controller (or RC for short) was instructed to always keep one copy of the container around; that’s the --replicas=1 argument of the run command. Since we killed the running container (technically, the pod), the RC noticed and immediately spun up a replacement. Two things show this: first, the pod name changed (we killed nginx-nvcnl, and the new listing shows nginx-4kjnj instead); second, the age is 11 seconds, so the pod was launched just moments ago.

Another thing we get for free with the run command, really a property of the implicitly created RC, is that we can scale the whole thing. Say one Web server is not enough and you want three copies running. Nothing could be easier once you’ve executed the kubectl run command:

$ kubectl scale --replicas=3 rc nginx
replicationcontroller "nginx" scaled
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-4kjnj   1/1       Running   0          9m
nginx-ib0ou   1/1       Running   0          15s
nginx-zjwcr   1/1       Running   0          15s

The first pod (nginx-4kjnj) is apparently the original one, running for some 9 minutes already, while nginx-ib0ou and nginx-zjwcr are the two new pods.
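Under the hood, self-healing and scaling are the same mechanism: the RC runs a reconciliation loop that compares the desired replica count with the observed one and creates replacement pods to close the gap (scaling down works analogously, by killing surplus pods). Here is a toy sketch of that loop in plain shell; the variables stand in for API state and nothing real gets launched:

```shell
desired=1
pods=""                                 # observed pods, space-separated names

reconcile() {
  current=$(echo $pods | wc -w)         # how many replicas do we observe?
  while [ "$current" -lt "$desired" ]; do
    pods="$pods nginx-replica$current"  # "launch" a replacement pod
    current=$((current + 1))
  done
}

reconcile                               # initial run: brings up 1 pod
pods=""                                 # someone deletes the pod...
reconcile                               # ...the loop replaces it right away

desired=3                               # kubectl scale --replicas=3 rc nginx
reconcile                               # two more pods come up
echo $pods                              # three pods "running"
```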

Another way to retrace what happened is the events command. Use it, for example, like so to see the first four things Kubernetes reported it has done:

$ kubectl get events | head -5
FIRSTSEEN   LASTSEEN   COUNT   NAME          KIND                    SUBOBJECT                           REASON             SOURCE                      MESSAGE
53m         53m        1       nginx-4qyvp   Pod                     implicitly required container POD   Created            {kubelet 10.0.3.254}        Created with docker id 5cb84b4f2160
53m         53m        1       nginx-4qyvp   Pod                     implicitly required container POD   Started            {kubelet 10.0.3.254}        Started with docker id 5cb84b4f2160
53m         53m        1       nginx         ReplicationController                                       SuccessfulCreate   {replication-controller }   Created pod: nginx-4qyvp
53m         53m        1       nginx-4qyvp   Pod                                                         Scheduled          {scheduler }                Successfully assigned nginx-4qyvp to 10.0.3.254

I hope you now appreciate the power of kubectl run, as well as all the cool stuff you get along with it for no additional effort.

I often use the run command for testing or troubleshooting, while in more serious circumstances (where reproducibility, reusability, and accountability matter) I typically use a manifest to define the RC. In our case it would look something like the following, launched with kubectl create -f nginx.yaml and assuming the snippet below is stored in a file called nginx.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: webserver-rc
spec:
  replicas: 1
  selector:
    app: webserver
  template:
    metadata:
      labels:
        app: webserver
        status: serving
    spec:
      containers:
      - image: nginx:1.9.7
        name: nginx
        ports:
        - containerPort: 80

Functionally, you get the same thing as with the run command, but the manifest approach is better suited to working with a DVCS such as Git and is easier to reuse across teams and projects.
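A nice side effect of the manifest route: scaling stops being an imperative command and becomes an edit to the file. Assuming the snippet above is stored in nginx.yaml, you would bump the replica count and push the change back to the cluster, for example with kubectl replace -f nginx.yaml:

```yaml
# nginx.yaml (fragment): raise the desired replica count from 1 to 3
spec:
  replicas: 3
```

That way, the version history of the file doubles as a record of who scaled what, and when.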