
In this blog post, we will discuss Kubernetes DaemonSets. I assume you have a basic understanding of Kubernetes, kubectl, and Pods, and that you have a cluster with multiple nodes if you want to follow along with the demo. We will go over what a DaemonSet is used for, how to create one, and how to work with it, using a simple example.

What is a DaemonSet?

A DaemonSet makes sure that all (or some) Kubernetes nodes run a copy of a Pod. When a new node is added to the cluster, a Pod is added to it to match the rest of the nodes, and when a node is removed from the cluster, that Pod is garbage collected. Deleting a DaemonSet cleans up the Pods it created. In simple terms, that is all a DaemonSet is: as the name implies, it allows us to run a daemon on every node.
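You have probably already seen one in action: most clusters run kube-proxy as a DaemonSet in the kube-system namespace (the exact set of DaemonSets varies by distribution). You can list the DaemonSets in your cluster with:

$ kubectl get daemonsets --all-namespaces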

Why use a DaemonSet?

OK, now that we know what a DaemonSet is, what are some use cases, and why would you need one? Common examples include:

  • running a cluster storage daemon, such as glusterd or ceph, on each node.
  • running a log collection daemon, such as fluentd or logstash, on every node.
  • running a node monitoring daemon, such as Prometheus Node Exporter, collectd, or the Datadog agent, on every node.

The list above can be expanded to many other use cases, including more complex setups that use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and CPU requests for different hardware types, as in the sketch below.
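For example, here is a minimal sketch of one of two such DaemonSets, split by a node label; the hardware-type label, image name, and resource values are illustrative assumptions, not something from the demo later in this post:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor-highmem # hypothetical name
spec:
  selector:
    matchLabels:
      name: node-monitor-highmem
  template:
    metadata:
      labels:
        name: node-monitor-highmem
    spec:
      nodeSelector:
        hardware-type: highmem # assumed node label; a second DaemonSet would target a different label
      containers:
      - name: node-monitor
        image: example.com/node-monitor:1.0 # hypothetical image
        resources:
          requests:
            memory: 500Mi # the DaemonSet for smaller nodes would request less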

How are DaemonSets scheduled?

DaemonSets are scheduled either by the DaemonSet controller or by the default scheduler.

  • DaemonSet controller: Pods created by the DaemonSet controller have their target machine selected up front, because .spec.nodeName is specified when the Pod is created. With this kind of scheduling, the unschedulable field of a node is not respected, and Pods can be created even before the scheduler has started, which can help with cluster bootstrapping. As of Kubernetes v1.12+, this mode is disabled by default.
  • Default scheduler: the ScheduleDaemonSetPods feature lets the default scheduler place DaemonSet Pods instead of the DaemonSet controller. It works by adding a NodeAffinity term to the DaemonSet Pods rather than a .spec.nodeName term; if a node affinity already exists on a DaemonSet Pod, the default scheduler replaces it. The affinity term it adds looks like the snippet after this list.
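For reference, this is roughly the node affinity term that ScheduleDaemonSetPods adds to each DaemonSet Pod, per the Kubernetes documentation; target-host-name is a placeholder for the node chosen for that particular Pod:

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name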

How to work with DaemonSets?

Just like any other manifest in Kubernetes, the apiVersion, kind, and metadata fields are required. To show the other fields in the manifest, we will deploy an example using the fluentd-elasticsearch image, which we want running on every node. The idea is to have a daemon on each node collecting logs for us and shipping them to Elasticsearch.

demo.yaml

apiVersion: apps/v1 #required fields
kind: DaemonSet #required fields
metadata: #required fields
  name: fluentd-es-demo
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-es #this must match the label below
  template: #required fields
    metadata:
      labels:
        name: fluentd-es #this must match the selector above
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-es-example
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

There are certain things to keep in mind when using DaemonSets:

  • Once a DaemonSet is created, its .spec.selector cannot be changed; mutating it can orphan the Pods the DaemonSet already created.
  • You must specify a Pod selector that matches the labels of the .spec.template.
  • You should not normally create any Pods whose labels match this selector, whether directly, via another DaemonSet, or via another controller such as a ReplicaSet. Otherwise, the DaemonSet controller will think those Pods were created by it.

It is worth noting that you can run a DaemonSet on only some nodes, rather than all of them, by specifying a .spec.template.spec.nodeSelector; the DaemonSet will then deploy Pods only to nodes that match the selector.
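For example, here is a minimal sketch of the relevant fragment of the Pod template; the disktype: ssd label is an assumed example, not part of the demo manifest:

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd # Pods are scheduled only on nodes labeled disktype=ssd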

Now let’s run “kubectl create -f demo.yaml” to deploy the example.

$ kubectl create -f demo.yaml 
daemonset.apps "fluentd-es-demo" created

Let’s make sure it is running:

$ kubectl get daemonset
NAME              DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-es-demo   3         3         3         3            3           <none>          59s

Next, let’s run “kubectl get nodes” to see how many nodes we have and what they are named.

$ kubectl get node
NAME                 STATUS    ROLES     AGE       VERSION
node2                Ready     <none>    92d       v1.10.3
node1                Ready     <none>    92d       v1.10.3
node3                Ready     <none>    92d       v1.10.3

Now, to confirm, let’s make sure all the Pods are running and that there is one on every node.

$ kubectl get pod -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP         NODE
fluentd-es-demo-bfpf9   1/1       Running   0          1m        10.0.0.3   node3
fluentd-es-demo-h4w85   1/1       Running   0          1m        10.0.0.1   node1
fluentd-es-demo-xm2rl   1/1       Running   0          1m        10.0.0.2   node2

We can see that not only are our fluentd-es-demo Pods running, but there is a copy of one on every node. To delete the DaemonSet, simply run “kubectl delete daemonset fluentd-es-demo”. The delete command removes the DaemonSet together with the Pods associated with it. If you want to delete the DaemonSet without deleting the Pods, add the --cascade=false flag to kubectl and the Pods will be left running on the nodes, as shown below.
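Here is a quick sketch of both variants; note that orphaned Pods left behind by --cascade=false will be adopted by any new DaemonSet you create with a matching selector:

# delete the DaemonSet together with its Pods
$ kubectl delete daemonset fluentd-es-demo

# delete only the DaemonSet, leaving its Pods running on the nodes
$ kubectl delete daemonset fluentd-es-demo --cascade=false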

Last updated: 01/14/2019

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.

See an error or have a suggestion? Please let us know by emailing blogs@bmc.com.

About the author

Toye Idowu

Olatoye is a Certified Kubernetes Administrator and experienced DevOps/Platform engineering Consultant with a demonstrated history of working in the Information Technology and Services industry.