Kubernetes: Run A Pod Per Node With Daemon Sets

Jonathan Campos
Google Cloud - Community
Aug 1, 2018 · 5 min read

My initial title for this article was just “Daemon Sets,” with the assumption that it would be enough to get the point across to anyone interested in reading. But I quickly thought back to when I first saw Daemon Sets in the Kubernetes documentation and remembered my own curiosity, and also my own ambivalence, toward the topic. However, in some situations Daemon Sets will get you out of a very specific bind. Let’s explore more.

So by now we know that Kubernetes is a mixture of Pods that run on Nodes. Easy enough. However, there is no promise of which Node a Pod will run on, or that Pods will be spaced uniformly across Nodes.

There is no promise of Nodes to Pods… until DaemonSets

If you have a situation where you need a Pod to be tied 1–1 with a Node (such as a monitoring or logging Pod), how can you guarantee that Pod-to-Node layout? Daemon Sets, that’s how. A Daemon Set creates a Pod for each Node in the Cluster. If a new Node is spun up, so is the Daemon Set’s Pod to run on that Node. If a Node is removed, so too is the Daemon Set’s Pod that belonged to that Node.

Let’s look at how to define a Daemon Set and see one in action.

If you haven’t gone through, or even read, the first part of this series, you might be lost or have questions about where this code is or what was done previously. Remember, this assumes you’re using GCP and GKE.

Creating A Kubernetes Daemon Set

If you have read through a few of my other posts (you should!), the following yaml file is going to seem very familiar. You’ll notice the .metadata section you’ve seen before, along with the .spec section. What makes a Daemon Set different is the .kind parameter. If it is set to DaemonSet, Kubernetes will automatically create the Pod described in .spec.template on each Node in your Kubernetes Cluster.

apiVersion: apps/v1
kind: DaemonSet # this is what makes it a Daemon Set
metadata:
  name: daemonset-pods # name of the Daemon Set
  labels: # labels on the Daemon Set object itself
    app: kubernetes-series
    tier: monitor
spec:
  selector:
    matchLabels: # Pods with matching labels are managed by this Daemon Set
      name: daemonset-pods
  template: # Pod template
    metadata:
      labels: # the Pod's labels; must match the selector above
        name: daemonset-pods
    spec:
      containers: # the container(s) in this Pod
        - name: daemon-container
          image: gcr.io/PROJECT_NAME/daemon-container-daemon:latest
          env: # environment variables for the container
            - name: GCLOUD_PROJECT
              value: PROJECT_NAME
          ports:
            - containerPort: 80
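
With the manifest in place you can deploy and verify it with kubectl directly. A minimal check, assuming you saved the file as daemonset.yaml and substituted your real PROJECT_NAME (the repo’s deploy script handles both for you):

$ kubectl apply -f daemonset.yaml # create the Daemon Set
$ kubectl get daemonsets # DESIRED/CURRENT should match your Node count
$ kubectl get pods -o wide # the NODE column shows one Pod per Node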

Kubernetes Daemon Sets In Action

To test the Daemon Set we will start by creating our Kubernetes Cluster, as we have in many previous articles. The following scripts, when run in your Google Cloud Shell, will create a Kubernetes Cluster and deploy the Daemon Set yaml file to it.

$ git clone https://github.com/jonbcampos/kubernetes-series.git
$ cd ~/kubernetes-series/daemon/scripts
$ sh startup.sh
$ sh deploy.sh
$ sh check-endpoint.sh endpoints
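
The exact commands live in the repo’s scripts, but they are thin wrappers around standard gcloud and kubectl calls. A rough sketch of what startup.sh and deploy.sh boil down to, assuming a cluster named kubernetes-series in the us-central1-a zone (both names are my assumptions; check the scripts for the real values):

$ gcloud container clusters create kubernetes-series \
      --zone us-central1-a \
      --num-nodes 3 # startup.sh: create the 3 Node Cluster
$ kubectl apply -f ../k8s/ # deploy.sh: apply the yaml, including the Daemon Set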

It is important to note that this is a 3 Node Cluster. That matters because, as soon as you run the deploy script, you can see in your GCP Kubernetes > Workloads view a Daemon Set named daemonset-pods with 3 Pods deployed, one per Node.

3 daemonset-pods deployed on 3 Nodes

Now, I created a pretty crude script to scale the Cluster by X Nodes. You can run it like so:

$ cd ~/kubernetes-series/daemon/scripts
$ sh scale.sh 3 # scale up by 3 nodes

This script adds a node-pool named my-pool and tells it to add X (3 in this case) Nodes to the Cluster. If you go back to your Workloads view you will see immediately that the new Nodes are spinning up… and so are the Daemon Set’s Pods.
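
Under the hood, adding Nodes via a new node-pool is a single gcloud call. A sketch of what scale.sh likely runs, assuming the same cluster name and zone as above (the script takes the Node count as its argument):

$ gcloud container node-pools create my-pool \
      --cluster kubernetes-series \
      --zone us-central1-a \
      --num-nodes 3 # the Daemon Set schedules a Pod on each new Node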

Nodes and Pods spinning up to 6

After a moment, when everything is ready, our Kubernetes Cluster is officially at 6 Nodes with 6 Daemon Set Pods, and still only the original 3 endpoints Pods.
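
You can confirm the same thing from the command line instead of the console:

$ kubectl get nodes # should now list 6 Nodes
$ kubectl get pods -o wide -l name=daemonset-pods # one Pod per Node, new Nodes included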

6/6 Nodes To Pods

Conclusion

It is always a wonderful thing to see how easily Kubernetes handles a problem while giving you the flexibility to solve problems specific to your applications. Now you know how to schedule one Pod per Node rather than relying on the scheduler’s default, unpredictable Pod-to-Node placement.

Teardown

Before you leave, make sure to clean up your project so you aren’t charged for the VMs running your cluster. Return to the Cloud Shell and run the teardown script. This will delete your cluster and the containers we’ve built.

$ cd ~/kubernetes-series/daemon/scripts
$ sh teardown.sh
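
If you would rather tear things down by hand, the script boils down to deleting the cluster (which removes both node-pools and the Daemon Set’s Pods with them) and the container image we pushed. A sketch, assuming the names used above:

$ gcloud container clusters delete kubernetes-series --zone us-central1-a
$ gcloud container images delete gcr.io/PROJECT_NAME/daemon-container-daemon:latest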

Jonathan Campos is an avid developer and fan of learning new things. I believe that we should always keep learning, growing, and failing. I am always a supporter of the development community and always willing to help. So if you have questions or comments on this story, please add them below. Connect with me on LinkedIn or Twitter and mention this story.
