Deployments are how you run applications in production Kubernetes. They manage ReplicaSets, which manage Pods. Understanding Deployments means understanding rolling updates, rollbacks, scaling, and the entire lifecycle of your application.
The CKAD heavily tests Deployment operations:

- Create and scale Deployments
- Perform rolling updates
- Roll back to previous versions
- Pause and resume rollouts
- Check rollout status and history
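Those operations map directly onto a handful of kubectl commands. A quick sketch, assuming a deployment named `web` and nginx image tags as placeholders (requires a live cluster):

```shell
kubectl create deployment web --image=nginx:1.25 --replicas=3  # create
kubectl scale deployment web --replicas=5                      # scale
kubectl set image deployment/web nginx=nginx:1.26              # rolling update
kubectl rollout undo deployment/web                            # roll back
kubectl rollout pause deployment/web                           # pause
kubectl rollout resume deployment/web                          # resume
kubectl rollout status deployment/web                          # watch progress
kubectl rollout history deployment/web                         # revision history
```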
The Software Release Pipeline Analogy
A Deployment is like a release manager. When you want to ship a new version, the release manager (Deployment) creates a new production line (ReplicaSet) running the new code. It gradually moves traffic from the old line to the new one. If something goes wrong, it can quickly switch back to the old line. The workers (Pods) just follow instructions—the Deployment orchestrates everything.
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # Max pods over desired count during update
    maxUnavailable: 0  # Max pods unavailable during update
```
| Setting | Description | Example |
|---|---|---|
| `maxSurge` | Extra pods allowed during update | `1` or `25%` |
| `maxUnavailable` | Pods that can be down during update | `0` or `25%` |
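You can adjust these settings on a running Deployment without editing the full manifest. A hedged sketch using a hypothetical deployment named `web-app` (requires a live cluster):

```shell
# Set a 25% surge with zero unavailable pods via a strategic merge patch
kubectl patch deploy web-app -p \
  '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":0}}}}'
```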
StatefulSet MaxUnavailable
StatefulSets support maxUnavailable in their updateStrategy (alpha since K8s 1.24, behind the MaxUnavailableStatefulSet feature gate), enabling parallel pod updates instead of the default sequential one-at-a-time order. This can significantly speed up updates for large database clusters and other stateful workloads:
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 2  # Update 2 pods at a time instead of 1
```
Pause and predict: With maxSurge: 1 and maxUnavailable: 0 on a 4-replica Deployment, what’s the maximum number of pods that can exist during an update? What about the minimum number of available pods? Work it out before checking the strategy types below.
Stop and think: You need to change both the image AND the resource limits of a Deployment simultaneously. If you run two separate kubectl set commands, Kubernetes triggers two separate rollouts. How can you batch these into a single rollout?
What would happen if: You run kubectl set image deploy/web-app nginx=nginx:nonexistent-tag. The rollout starts but new pods can’t pull the image. Does Kubernetes automatically roll back, or does it just stall? What’s the safest recovery action?
kubectl rollout restart triggers a rolling restart without changing the image. It sets a `kubectl.kubernetes.io/restartedAt` annotation with the current timestamp on the pod template, causing pods to recreate. Great for picking up ConfigMap changes.
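For example, with a placeholder deployment named `web` (requires a live cluster):

```shell
kubectl rollout restart deployment/web
# Inspect the timestamp annotation added to the pod template
# (dots in the annotation key must be escaped in jsonpath):
kubectl get deployment web \
  -o jsonpath='{.spec.template.metadata.annotations.kubectl\.kubernetes\.io/restartedAt}'
```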
Deployments don’t delete old ReplicaSets immediately. They keep them (scaled to 0) for rollback capability. Control this with revisionHistoryLimit.
The --record flag is deprecated but still works. Since Kubernetes 1.22, the recommended approach is to set the `kubernetes.io/change-cause` annotation to track change causes.
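One way to record a change cause without --record, assuming a deployment named `web` and a hypothetical image bump (requires a live cluster):

```shell
kubectl set image deployment/web nginx=nginx:1.26
kubectl annotate deployment/web kubernetes.io/change-cause="upgrade nginx to 1.26"
kubectl rollout history deployment/web  # CHANGE-CAUSE column shows the note
```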
After a deployment update, your application is throwing errors. You check kubectl rollout history deploy/api-server and see revisions 1 through 4. Revision 2 was the last known-good version. How do you roll back specifically to revision 2, and what happens to the revision numbering afterward?
Answer
Run `kubectl rollout undo deploy/api-server --to-revision=2`. This doesn't "go back in time" -- it creates a NEW revision (revision 5) that uses the same pod template as revision 2. The old revision 2 disappears from history (since its ReplicaSet is now the active one again). This is important to understand: rollbacks don't rewrite history, they create new revisions. Always run `kubectl rollout status deploy/api-server` afterward to confirm the rollback completed successfully.
Your team runs a legacy application that writes to a local SQLite database file. Two versions of the app can’t access the database simultaneously without corruption. Which deployment strategy should you use, and what’s the trade-off?
Answer
Use `strategy: type: Recreate`. This kills all existing pods before creating new ones, ensuring only one version accesses the database at a time. The trade-off is downtime -- there's a gap between old pods terminating and new pods becoming ready. `RollingUpdate` would cause data corruption since both old and new pods would access SQLite concurrently. For production, consider migrating to a proper database (PostgreSQL, MySQL) that handles concurrent access, which lets you use `RollingUpdate` for zero-downtime deployments.
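A minimal strategy stanza for this approach (the rest of the Deployment spec is omitted):

```yaml
spec:
  strategy:
    type: Recreate  # all old pods terminate before any new pods start
```

Note that Recreate takes no rollingUpdate parameters; maxSurge and maxUnavailable only apply to RollingUpdate.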
You updated a ConfigMap that your Deployment’s pods consume as environment variables. Running kubectl get pods shows all pods are still running the old config. kubectl rollout restart isn’t working because your cluster RBAC restricts that command. What alternative approach triggers a pod recreation?
Answer
Use `kubectl set env deploy/NAME RESTART_TRIGGER=$(date +%s)` to add a dummy environment variable with a timestamp. Any change to the pod template triggers a rolling update, which recreates all pods with the fresh ConfigMap values. Alternatively, you could patch the deployment to add an annotation: `kubectl patch deploy NAME -p '{"spec":{"template":{"metadata":{"annotations":{"restart":"'$(date +%s)'"}}}}}'`. Both approaches force pod recreation without the `rollout restart` command.
You create a Deployment with replicas: 5, maxSurge: 2, and maxUnavailable: 0. During a rolling update, you notice the rollout stalls — new pods are stuck in Pending because the cluster has no capacity for extra pods. What went wrong with your strategy configuration?
Answer
With `maxSurge: 2` and `maxUnavailable: 0`, Kubernetes must create 2 extra pods (total 7) before it can terminate any old ones. If the cluster can't schedule 7 pods, the rollout deadlocks. The fix is either: (1) set `maxUnavailable: 1` so Kubernetes can remove an old pod first to make room; (2) reduce `maxSurge` to 1; or (3) ensure the cluster has capacity for `replicas + maxSurge` pods. The `maxUnavailable: 0` + insufficient capacity combination is a common rollout trap. Setting `progressDeadlineSeconds` helps detect this automatically.
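A sketch of a corrected configuration along the lines of fix (1), with an added progress deadline (values are illustrative):

```yaml
spec:
  replicas: 5
  progressDeadlineSeconds: 300  # mark the rollout as failed if it stalls this long
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1  # allow one old pod to drain, freeing capacity for a new one
```

With these values the rollout needs headroom for at most 6 pods, and can still make progress even at exactly 5 pods of capacity.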