
Module 2.3: DaemonSets & StatefulSets

Hands-On Lab Available: K8s Cluster · intermediate · 40 min
Launch Lab ↗ (opens in Killercoda in a new tab)

Complexity: [MEDIUM] - Specialized workload patterns

Time to Complete: 40-50 minutes

Prerequisites: Module 2.1 (Pods), Module 2.2 (Deployments)


After this module, you will be able to:

  • Deploy DaemonSets for node-level services and StatefulSets for stateful applications
  • Explain how StatefulSet pod naming, PVC binding, and ordered deployment differ from Deployments
  • Configure DaemonSet tolerations to run on control plane nodes when needed
  • Troubleshoot StatefulSet issues (stuck PVC binding, ordered rollout failures, headless service DNS)

Deployments work great for stateless applications, but not everything is stateless. Some workloads have special requirements:

  • DaemonSets: When you need exactly one pod per node (logging, monitoring, network plugins)
  • StatefulSets: When pods need stable identities and persistent storage (databases, distributed systems)

The CKA exam tests your understanding of when to use each controller and how to troubleshoot them. Knowing the right tool for the job is a key admin skill.

The Specialist Teams Analogy

Think of your cluster as a hospital. Deployments are like general practitioners—you can have any number, they’re interchangeable, and patients don’t care which one they see. DaemonSets are like security guards—you need exactly one per entrance (node), no more, no less. StatefulSets are like surgeons—each has a unique identity, their own dedicated tools (storage), and patients specifically request “Dr. Smith” (stable network identity).



A DaemonSet ensures that all (or some) nodes run a copy of a pod.

┌────────────────────────────────────────────────────────────────┐
│ DaemonSet │
│ │
│ Node 1 Node 2 Node 3 │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ ┌─────────┐ │ │ ┌─────────┐ │ │ ┌─────────┐ │ │
│ │ │ DS Pod │ │ │ │ DS Pod │ │ │ │ DS Pod │ │ │
│ │ │(fluentd)│ │ │ │(fluentd)│ │ │ │(fluentd)│ │ │
│ │ └─────────┘ │ │ └─────────┘ │ │ └─────────┘ │ │
│ │ │ │ │ │ │ │
│ │ [App Pods] │ │ [App Pods] │ │ [App Pods] │ │
│ │ │ │ │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ When Node 4 joins → DaemonSet automatically creates pod │
│ When Node 2 leaves → Pod is terminated │
│ │
└────────────────────────────────────────────────────────────────┘
Use Case        | Example
Log collection  | Fluentd, Filebeat
Node monitoring | Node Exporter, Datadog agent
Network plugins | Calico, Cilium, Weave
Storage daemons | GlusterFS, Ceph
Security agents | Falco, Sysdig
fluentd-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.16
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

kubectl apply -f fluentd-daemonset.yaml

Pause and predict: You have a 5-node cluster and create a DaemonSet. Then a 6th node joins the cluster. What happens automatically? Now imagine you do the same with a Deployment set to 5 replicas — what happens when the 6th node joins?

Aspect        | DaemonSet                                               | Deployment
Pod count     | One per node (automatic)                                | Specified replicas
Scheduling    | Default scheduler via per-node affinity (since v1.12)   | Default scheduler
Node addition | Auto-creates pod                                        | No automatic action
Use case      | Node-level services                                     | Application workloads
# List DaemonSets
kubectl get daemonsets
kubectl get ds # Short form
# Describe DaemonSet
kubectl describe ds fluentd
# Check pods created by DaemonSet
kubectl get pods -l app=fluentd -o wide
# Delete DaemonSet
kubectl delete ds fluentd

Did You Know?

DaemonSets ignore most scheduling constraints by default. They even run on control plane nodes if there are no taints preventing it. Use nodeSelector or tolerations to control placement.
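If a DaemonSet pod is missing from a node, taints are the usual suspect. A quick way to check (sketch commands against a live cluster; substitute your own node and pod names):

```shell
# Show any taints on the node in question
kubectl describe node <node-name> | grep -A 3 Taints

# DaemonSet pods automatically receive tolerations for node-condition
# taints (not-ready, unreachable, disk-pressure, ...); inspect what a
# running DS pod actually tolerates:
kubectl get pod <ds-pod-name> -o jsonpath='{.spec.tolerations}'
```

Any node taint without a matching toleration in the pod spec explains the missing pod.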


Use nodeSelector to run only on certain nodes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd          # Only nodes with this label
      containers:
      - name: monitor
        image: busybox
        command: ["sleep", "infinity"]
# Label a node
kubectl label node worker-1 disk=ssd
# DaemonSet only runs on labeled nodes
kubectl get pods -l app=ssd-monitor -o wide

DaemonSets often need to run on tainted nodes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      tolerations:
      # Tolerate control-plane taint
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      # Or tolerate all taints (run everywhere)
      - operator: Exists
      containers:
      - name: monitor
        image: prom/node-exporter
Control how a DaemonSet rolls out changes with updateStrategy:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  updateStrategy:
    type: RollingUpdate        # Default
    rollingUpdate:
      maxUnavailable: 1        # Update one node at a time
  selector:
    matchLabels:
      app: fluentd
  template:
    # ...
Strategy      | Behavior
RollingUpdate | Gradually update pods, one node at a time
OnDelete      | Only update when a pod is manually deleted
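For cautious, operator-driven rollouts, here is a sketch of the OnDelete strategy (illustrative manifest reusing the fluentd example):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  updateStrategy:
    type: OnDelete             # New template applies only as pods are deleted
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.16
```

After changing the template, nothing rolls out until you delete each pod yourself (kubectl delete pod <ds-pod>), letting you update node by node on your own schedule.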

StatefulSets manage stateful applications with:

  • Stable, unique network identifiers
  • Stable, persistent storage
  • Ordered, graceful deployment and scaling
┌────────────────────────────────────────────────────────────────┐
│ StatefulSet │
│ │
│ Unlike Deployments, pods have stable identities: │
│ │
│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │
│ │ web-0 │ │ web-1 │ │ web-2 │ │
│ │ (always 0) │ │ (always 1) │ │ (always 2) │ │
│ │ │ │ │ │ │ │
│ │ PVC: data-0 │ │ PVC: data-1 │ │ PVC: data-2 │ │
│ │ DNS: web-0... │ │ DNS: web-1... │ │ DNS: web-2... │ │
│ └───────────────┘ └───────────────┘ └───────────────┘ │
│ │
│ If web-1 dies and restarts: │
│ - Still named web-1 (not web-3) │
│ - Reattaches to PVC data-1 │
│ - Same DNS name: web-1.nginx.default.svc.cluster.local │
│ │
└────────────────────────────────────────────────────────────────┘
Use Case            | Example
Databases           | PostgreSQL, MySQL, MongoDB
Distributed systems | Kafka, ZooKeeper, etcd
Search engines      | Elasticsearch
Message queues      | RabbitMQ

Pause and predict: If you delete pod web-1 from a StatefulSet, what name will the replacement pod get — web-1 or web-3? What happens to the PVC that was bound to web-1?

StatefulSets require a Headless Service for network identity:

# Headless Service (required)
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None              # This makes it headless
  selector:
    app: nginx
---
# StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx           # Must reference the headless service
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # Creates a PVC for each pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
# Pod DNS names follow pattern:
# <pod-name>.<service-name>.<namespace>.svc.cluster.local
# For StatefulSet "web" with headless service "nginx":
web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
web-2.nginx.default.svc.cluster.local
# Other pods can reach specific instances:
curl web-0.nginx
curl web-1.nginx
# Each pod gets its own PVC named:
# <volumeClaimTemplates.name>-<pod-name>
data-web-0
data-web-1
data-web-2
# When pod restarts, it reattaches to its specific PVC
# Data persists across pod restarts

Did You Know?

When you delete a StatefulSet, the PVCs are NOT automatically deleted. This is a safety feature—you keep your data. To clean up, manually delete the PVCs after deleting the StatefulSet.
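If you do want automatic cleanup, newer clusters support persistentVolumeClaimRetentionPolicy (beta and enabled by default since v1.27, GA in v1.32) — a sketch:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # Remove PVCs when the StatefulSet is deleted
    whenScaled: Retain    # Keep PVCs on scale-down (the default)
  # ... serviceName, selector, template, volumeClaimTemplates as before
```

The default for both fields is Retain, which preserves the keep-your-data safety behavior.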


Scaling Up (0 → 3):

  web-0 created and ready → web-1 created and ready → web-2 created

Scaling Down (3 → 1):

  web-2 terminated → web-1 terminated → web-0 remains

Each pod waits for the previous one to be Running and Ready.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  podManagementPolicy: OrderedReady   # Default - sequential
  # podManagementPolicy: Parallel     # All at once (like Deployment)
Policy       | Behavior
OrderedReady | Sequential creation/deletion (default)
Parallel     | All pods created/deleted simultaneously

Stop and think: You’re running a 3-replica StatefulSet for a database cluster. You want to test a new version on just one replica before rolling it out to all. How would you use the partition field to achieve a canary deployment? Which pod gets updated first — web-0 or web-2?

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # Only update pods with ordinal >= 2

Partition enables canary deployments:

  • With partition: 2, only web-2 gets updated
  • web-0 and web-1 keep the old version
  • Useful for testing updates on subset of pods
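Putting it together, a canary flow might look like this (sketch; assumes the web StatefulSet above and a live cluster):

```shell
# 1. Limit updates to ordinals >= 2
kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# 2. Change the image -- only web-2 is recreated
kubectl set image sts/web nginx=nginx:1.27

# 3. Confirm which pod runs which image
kubectl get pods -l app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# 4. Happy with the canary? Lower the partition to roll out everywhere
kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
```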
# List StatefulSets
kubectl get statefulsets
kubectl get sts # Short form
# Describe
kubectl describe sts web
# Scale
kubectl scale sts web --replicas=5
# Check pods (notice ordered names)
kubectl get pods -l app=nginx
# Check PVCs (one per pod)
kubectl get pvc
# Delete StatefulSet (PVCs remain!)
kubectl delete sts web
# Delete PVCs manually
kubectl delete pvc data-web-0 data-web-1 data-web-2

Aspect           | Deployment                           | StatefulSet
Pod names        | Random suffix (nginx-5d5dd5d5fb-xyz) | Ordinal index (web-0, web-1)
Network identity | None (use a Service)                 | Stable DNS name per pod
Storage          | Shared or none                       | Dedicated PVC per pod
Scaling order    | Any order                            | Sequential (ordered)
Rolling update   | Random order                         | Reverse ordinal order (highest first)
Use case         | Stateless apps                       | Stateful apps
┌────────────────────────────────────────────────────────────────┐
│ Choosing the Right Controller │
│ │
│ Does each pod need unique identity? │
│ │ │
│ ├── No ──► Does each node need one pod? │
│ │ │ │
│ │ ├── Yes ──► DaemonSet │
│ │ │ │
│ │ └── No ──► Deployment │
│ │ │
│ └── Yes ──► Does it need persistent storage? │
│ │ │
│ └── Yes/No ──► StatefulSet │
│ │
└────────────────────────────────────────────────────────────────┘

War Story: The Database Disaster

A team deployed PostgreSQL using a Deployment with a PVC. It worked—until the pod was rescheduled. The new pod got a different IP, replication broke, and the standby couldn’t find the primary. Switching to a StatefulSet with stable network identity fixed everything. Use the right tool!


A headless Service is a Service with clusterIP: None. Instead of load balancing through a single virtual IP, DNS returns the individual pod IPs.

# Regular Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-regular
spec:
  selector:
    app: nginx
  ports:
  - port: 80
# DNS: nginx-regular → ClusterIP (load balanced)
---
# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None              # Headless!
  selector:
    app: nginx
  ports:
  - port: 80
# DNS: nginx-headless → Returns all pod IPs
# DNS: web-0.nginx-headless → Specific pod IP
# Regular service - returns ClusterIP
nslookup nginx-regular
# Server: 10.96.0.10
# Address: 10.96.0.10#53
# Name: nginx-regular.default.svc.cluster.local
# Address: 10.96.100.50 (ClusterIP)
# Headless service - returns pod IPs
nslookup nginx-headless
# Server: 10.96.0.10
# Address: 10.96.0.10#53
# Name: nginx-headless.default.svc.cluster.local
# Address: 10.244.1.5 (Pod IP)
# Address: 10.244.2.6 (Pod IP)
# Address: 10.244.3.7 (Pod IP)

Mistake                                    | Problem                              | Solution
StatefulSet without headless Service       | Pods don't get stable DNS names      | Create a headless Service with a matching selector
Deleting StatefulSet expecting PVC cleanup | Data remains, storage quota consumed | Manually delete PVCs if the data is not needed
Using a Deployment for databases           | No stable identity, storage issues   | Use a StatefulSet for stateful workloads
DaemonSet on all nodes unexpectedly        | Runs on the control plane too        | Add appropriate tolerations/nodeSelector
Wrong serviceName in StatefulSet           | DNS resolution fails                 | Ensure serviceName matches the headless Service name
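To catch the last mistake quickly, compare the StatefulSet's serviceName with the Services that actually exist (sketch; assumes the web StatefulSet from earlier):

```shell
# The Service name the StatefulSet expects...
kubectl get sts web -o jsonpath='{.spec.serviceName}'

# ...must match an existing headless Service (CLUSTER-IP shows None)
kubectl get svc
```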

  1. Your monitoring team needs exactly one log collector pod on every node, including nodes added later. A colleague suggests using a Deployment with replicas set to the node count and pod anti-affinity. Why would a DaemonSet be a better choice, and what happens when a new node joins the cluster?

    Answer A DaemonSet is better because it automatically creates a pod on every new node that joins the cluster and removes pods from nodes that leave. With a Deployment and anti-affinity, you'd need to manually increase the replica count each time a node is added, and preferred anti-affinity only encourages spreading; even required anti-affinity still leaves the replica count for you to manage. DaemonSet pods also automatically receive tolerations for node-condition taints (not-ready, unreachable, disk pressure), which helps keep coverage even on nodes under pressure.
  2. You’re deploying a 3-node PostgreSQL cluster with primary-standby replication. The standby nodes need to connect to the primary by a stable DNS name, and each node needs its own persistent volume that survives pod restarts. Which controller do you use, and what additional resource is required? What happens if web-1 (a standby) crashes?

    Answer Use a StatefulSet with a headless Service (`clusterIP: None`). The headless Service is required because it provides stable DNS names like `web-0.postgres.default.svc.cluster.local` for each pod. The `volumeClaimTemplates` field ensures each pod gets its own PVC (e.g., `data-web-0`, `data-web-1`). When `web-1` crashes, the StatefulSet controller recreates it with the exact same name `web-1` (not `web-3`), and it reattaches to its original PVC `data-web-1`, preserving all data. The standby configuration pointing to `web-0.postgres` continues to work because the DNS name is stable.
  3. You deleted a StatefulSet with kubectl delete sts web, but your storage costs haven’t decreased. A colleague says the data should have been cleaned up automatically. What actually happened, and what must you do to reclaim the storage?

    Answer PVCs created by a StatefulSet's `volumeClaimTemplates` are NOT automatically deleted when the StatefulSet is deleted. This is an intentional safety feature to prevent accidental data loss -- database data is precious. The PVCs (e.g., `data-web-0`, `data-web-1`, `data-web-2`) still exist and are bound to their PersistentVolumes, consuming storage. You must manually delete them with `kubectl delete pvc data-web-0 data-web-1 data-web-2`. Always audit PVCs after deleting StatefulSets to avoid ongoing storage costs.
  4. You need to scale a StatefulSet from 3 replicas to 5. In what order are the new pods created? Then you scale back down to 2. In what order are pods terminated, and why does this ordering matter for distributed databases?

    Answer Scaling up: `web-3` is created first and must become Running and Ready before `web-4` is created. Scaling down: `web-4` is terminated first, then `web-3`, then `web-2`. This reverse-ordinal ordering matters for distributed databases because higher-numbered replicas are typically the newest members of the cluster. Removing them first ensures the most established members (which may hold leadership roles or have the most data) are the last to be removed. For example, in a database cluster, `web-0` is often the primary, and removing it last prevents unnecessary leader elections during scale-down.

Task: Create a DaemonSet and StatefulSet, understand their behaviors.

Steps:

  1. Create a DaemonSet:
cat > node-monitor-ds.yaml << 'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
      - name: monitor
        image: busybox
        command: ["sh", "-c", "while true; do echo $(hostname); sleep 60; done"]
        resources:
          limits:
            memory: 50Mi
            cpu: 50m
EOF
kubectl apply -f node-monitor-ds.yaml
  2. Verify one pod per node:

kubectl get pods -l app=node-monitor -o wide
kubectl get ds node-monitor
# DESIRED = CURRENT = READY = number of nodes
  3. Check logs from a specific node's pod:

kubectl logs -l app=node-monitor --all-containers
  4. Cleanup DaemonSet:

kubectl delete ds node-monitor
rm node-monitor-ds.yaml
  5. Create headless Service and StatefulSet:

cat > statefulset-demo.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl apply -f statefulset-demo.yaml
  6. Watch ordered creation:

kubectl get pods -l app=nginx -w
# web-0 Running, then web-1, then web-2
  7. Verify stable network identity:

# Create a test pod
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup web-0.nginx
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup web-1.nginx
  8. Scale down and observe order:

kubectl scale sts web --replicas=1
kubectl get pods -l app=nginx -w
# web-2 terminates, then web-1
  9. Scale back up:

kubectl scale sts web --replicas=3
kubectl get pods -l app=nginx -w
# web-1 created, then web-2
  10. Cleanup:

kubectl delete -f statefulset-demo.yaml
rm statefulset-demo.yaml

Success Criteria:

  • Can create DaemonSets
  • Understand one pod per node behavior
  • Can create StatefulSets with headless Services
  • Understand ordered scaling
  • Know when to use each controller

Drill 1: DaemonSet Creation (Target: 3 minutes)

# Create DaemonSet
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: busybox
        command: ["sleep", "infinity"]
EOF
# Verify
kubectl get ds log-collector
kubectl get pods -l app=log-collector -o wide
# Cleanup
kubectl delete ds log-collector

Drill 2: DaemonSet with nodeSelector (Target: 5 minutes)

# Label one node
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl label node $NODE disk=ssd
# Create DaemonSet with nodeSelector
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-only
spec:
  selector:
    matchLabels:
      app: ssd-only
  template:
    metadata:
      labels:
        app: ssd-only
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
EOF
# Verify - should only run on labeled node
kubectl get pods -l app=ssd-only -o wide
# Cleanup
kubectl delete ds ssd-only
kubectl label node $NODE disk-

Drill 3: StatefulSet Basic (Target: 5 minutes)

# Create headless service and StatefulSet
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: busybox
        command: ["sleep", "infinity"]
EOF
# Watch ordered creation
kubectl get pods -l app=db -w &
sleep 30
kill %1
# Verify names
kubectl get pods -l app=db
# Cleanup
kubectl delete sts db
kubectl delete svc db

Drill 4: StatefulSet DNS Test (Target: 5 minutes)

# Create StatefulSet with headless service
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# Wait for ready
kubectl wait --for=condition=ready pod/web-0 pod/web-1 --timeout=60s
# Test DNS resolution
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup nginx
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup web-0.nginx
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup web-1.nginx
# Cleanup
kubectl delete sts web
kubectl delete svc nginx

Drill 5: StatefulSet Scaling Order (Target: 3 minutes)

# Create StatefulSet
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: order-test
spec:
  clusterIP: None
  selector:
    app: order-test
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: order
spec:
  serviceName: order-test
  replicas: 1
  selector:
    matchLabels:
      app: order-test
  template:
    metadata:
      labels:
        app: order-test
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# Scale up and watch order
kubectl scale sts order --replicas=3
kubectl get pods -l app=order-test -w &
sleep 30
kill %1
# Scale down and watch reverse order
kubectl scale sts order --replicas=1
kubectl get pods -l app=order-test -w &
sleep 30
kill %1
# Cleanup
kubectl delete sts order
kubectl delete svc order-test

Drill 6: Troubleshooting - DaemonSet Not Running on Node

# Taint a node
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl taint node $NODE special=true:NoSchedule
# Create DaemonSet without toleration
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: no-toleration
spec:
  selector:
    matchLabels:
      app: no-toleration
  template:
    metadata:
      labels:
        app: no-toleration
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
EOF
# Check - won't run on tainted node
kubectl get pods -l app=no-toleration -o wide
kubectl get ds no-toleration
# YOUR TASK: Fix by adding toleration
# (Delete and recreate with toleration)
# Cleanup
kubectl delete ds no-toleration
kubectl taint node $NODE special-
Solution
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: with-toleration
spec:
  selector:
    matchLabels:
      app: with-toleration
  template:
    metadata:
      labels:
        app: with-toleration
    spec:
      tolerations:
      - key: special
        operator: Equal
        value: "true"
        effect: NoSchedule
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
EOF
kubectl get pods -l app=with-toleration -o wide
kubectl delete ds with-toleration

Drill 7: Challenge - Identify the Right Controller


For each scenario, identify whether to use Deployment, DaemonSet, or StatefulSet:

  1. Web application with 5 replicas
  2. Log collector on every node
  3. PostgreSQL database cluster
  4. REST API service
  5. Prometheus node exporter
  6. Kafka cluster
  7. nginx reverse proxy
Answers
  1. Deployment - Stateless web app
  2. DaemonSet - Need one per node
  3. StatefulSet - Needs stable identity and storage
  4. Deployment - Stateless REST API
  5. DaemonSet - Monitoring agent per node
  6. StatefulSet - Distributed system with stable identity
  7. Deployment - Stateless proxy (unless specific instance needed)

Module 2.4: Jobs & CronJobs - Batch workloads and scheduled tasks.