
Module 2.2: Deployments & ReplicaSets

Hands-On Lab Available
K8s Cluster intermediate 45 min

Complexity: [MEDIUM] - Core exam topic

Time to Complete: 45-55 minutes

Prerequisites: Module 2.1 (Pods)


After this module, you will be able to:

  • Create Deployments with rolling update strategy and configure maxSurge/maxUnavailable
  • Perform rollouts, rollbacks, and history inspection under CKA time pressure
  • Diagnose a stuck rollout by checking ReplicaSet status, pod events, and resource availability
  • Explain the Deployment → ReplicaSet → Pod ownership chain and why old ReplicaSets are retained
  • Scale deployments and pause/resume rollouts to batch multiple changes into a single revision

In production, you almost never run standalone pods. You use Deployments.

Deployments are the most common workload resource. They handle:

  • Running multiple replicas of your app
  • Rolling updates with zero downtime
  • Automatic rollbacks when things go wrong
  • Scaling up and down

The CKA exam tests creating deployments, performing rolling updates, scaling, and rollbacks. These are fundamental skills you’ll use daily.

The Fleet Manager Analogy

Think of a Deployment like a fleet manager for a taxi company. The manager doesn’t drive taxis directly—they manage drivers (pods). If a driver calls in sick (pod crashes), the manager assigns a replacement. If demand increases (scale up), the manager hires more drivers. During a vehicle upgrade (rolling update), the manager swaps old cars for new ones gradually, ensuring customers always have rides available.



┌────────────────────────────────────────────────────────────┐
│                    Deployment Hierarchy                    │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │ Deployment                                           │  │
│  │  - Desired state (replicas, image, strategy)         │  │
│  │  - Manages ReplicaSets                               │  │
│  └───────────────────────────┬──────────────────────────┘  │
│                              │ creates & manages           │
│                              ▼                             │
│  ┌──────────────────────────────────────────────────────┐  │
│  │ ReplicaSet                                           │  │
│  │  - Ensures N replicas running                        │  │
│  │  - Creates/deletes pods to match desired count       │  │
│  └───────────────────────────┬──────────────────────────┘  │
│                              │ creates & manages           │
│                              ▼                             │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐        │
│  │  Pod 1  │  │  Pod 2  │  │  Pod 3  │  │  Pod N  │        │
│  └─────────┘  └─────────┘  └─────────┘  └─────────┘        │
│                                                            │
└────────────────────────────────────────────────────────────┘
| Feature                | ReplicaSet | Deployment |
| ---------------------- | ---------- | ---------- |
| Maintain replica count | ✓          | ✓          |
| Rolling updates        | ✗          | ✓          |
| Rollback               | ✗          | ✓          |
| Update history         | ✗          | ✓          |
| Pause/Resume           | ✗          | ✓          |

Rule: Always use Deployments. Never create ReplicaSets directly.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3          # Desired pod count
  selector:            # How to find pods to manage
    matchLabels:
      app: nginx
  template:            # Pod template
    metadata:
      labels:
        app: nginx     # Must match selector
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80

Critical: The spec.selector.matchLabels must match spec.template.metadata.labels. If they don’t match, the Deployment won’t manage the pods.
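For contrast, here is a deliberately broken manifest (illustrative example, not from the lab): the selector and the template labels disagree, so the API server rejects it at creation time with a "selector does not match template labels" validation error rather than creating an unmanageable Deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broken
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web          # selector looks for app=web pods...
  template:
    metadata:
      labels:
        app: frontend   # ...but the template creates app=frontend pods
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```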


Terminal window
# Create deployment
kubectl create deployment nginx --image=nginx
# Create with specific replicas
kubectl create deployment nginx --image=nginx --replicas=3
# Create with port
kubectl create deployment nginx --image=nginx --port=80
# Generate YAML (essential for exam!)
kubectl create deployment nginx --image=nginx --replicas=3 --dry-run=client -o yaml > deploy.yaml
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
Terminal window
kubectl apply -f nginx-deployment.yaml
Terminal window
# List deployments
kubectl get deployments
kubectl get deploy # Short form
# Detailed view
kubectl get deploy -o wide
# Describe deployment
kubectl describe deployment nginx
# Get deployment YAML
kubectl get deployment nginx -o yaml
# Check rollout status
kubectl rollout status deployment/nginx

Did You Know?

The kubectl rollout status command blocks until the rollout completes. It’s perfect for CI/CD pipelines—if the rollout fails, the command exits with a non-zero status.


When you create a Deployment:

  1. Deployment controller creates a ReplicaSet
  2. ReplicaSet controller creates pods
  3. ReplicaSet ensures desired replicas match actual
Terminal window
# Create a deployment
kubectl create deployment nginx --image=nginx --replicas=3
# See the ReplicaSet created
kubectl get replicasets
# NAME DESIRED CURRENT READY AGE
# nginx-5d5dd5d5fb 3 3 3 30s
# See pods with owner reference
kubectl get pods --show-labels

Pause and predict: After updating a Deployment’s image twice (v1 -> v2 -> v3), how many ReplicaSets will exist? What are their replica counts? Why does Kubernetes keep the old ones?

nginx-5d5dd5d5fb
  ^       ^
  |       └── Hash of the pod template
  |
  └── Deployment name

When you update the deployment, a new ReplicaSet is created with a different hash.
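The naming scheme can be sketched in a few lines of Python. This is illustrative only: the real controller computes an FNV-1 hash of the PodTemplateSpec; a truncated SHA-256 stands in for it here, and the helper names are invented for the sketch.

```python
import hashlib
import secrets

def rs_name(deployment: str, pod_template: str) -> str:
    # Stand-in for the controller's pod-template-hash (real K8s uses FNV-1)
    template_hash = hashlib.sha256(pod_template.encode()).hexdigest()[:10]
    return f"{deployment}-{template_hash}"

def pod_name(replicaset: str) -> str:
    # Pods get a short random suffix appended to their ReplicaSet's name
    return f"{replicaset}-{secrets.token_hex(3)[:5]}"

rs_v1 = rs_name("nginx", "image: nginx:1.25")
rs_v2 = rs_name("nginx", "image: nginx:1.26")

# Any change to the pod template produces a different hash, which is
# why updating a Deployment creates a brand-new ReplicaSet.
assert rs_v1 != rs_v2
assert rs_name("nginx", "image: nginx:1.25") == rs_v1  # same template, same name
print(rs_v1)
print(pod_name(rs_v1))
```

The key property is determinism: the same pod template always maps to the same ReplicaSet name, so a rollback can reuse the old ReplicaSet instead of creating another one.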

Terminal window
# Don't do this - let Deployment manage ReplicaSets
kubectl scale replicaset nginx-5d5dd5d5fb --replicas=5 # BAD
# Do this instead
kubectl scale deployment nginx --replicas=5 # GOOD

Terminal window
# Scale to specific replicas
kubectl scale deployment nginx --replicas=5
# Scale to zero (stop all pods)
kubectl scale deployment nginx --replicas=0
# Scale multiple deployments
kubectl scale deployment nginx webapp --replicas=3
Terminal window
# Edit deployment directly
kubectl edit deployment nginx
# Change spec.replicas and save
# Or patch
kubectl patch deployment nginx -p '{"spec":{"replicas":5}}'
Terminal window
# Watch pods scale
kubectl get pods -w
# Check deployment status
kubectl get deployment nginx
# NAME READY UP-TO-DATE AVAILABLE AGE
# nginx 5/5 5 5 10m
# Detailed status
kubectl rollout status deployment/nginx

Pause and predict: You have a Deployment with 4 replicas, maxSurge: 1, and maxUnavailable: 0. During a rolling update, what is the maximum number of pods running at any point? What happens if the new version fails its readiness probe?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  strategy:
    type: RollingUpdate    # Default strategy
    rollingUpdate:
      maxSurge: 1          # Max pods over desired during update
      maxUnavailable: 1    # Max pods unavailable during update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
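The arithmetic behind these two knobs can be sketched numerically. This is a rough model of how kubectl resolves the values (percentages resolve against the replica count: surge rounds up, unavailable rounds down), not controller code.

```python
import math

def rolling_update_bounds(replicas: int, max_surge, max_unavailable) -> dict:
    """Pod-count bounds during a rolling update.

    max_surge / max_unavailable may be absolute ints or percentage
    strings like "25%"; surge rounds up, unavailable rounds down.
    """
    def resolve(value, round_up: bool) -> int:
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return {
        "max_total_pods": replicas + surge,       # never more than this many pods
        "min_ready_pods": replicas - unavailable, # never fewer than this many ready
    }

# 4 replicas, maxSurge: 1, maxUnavailable: 0 -> at most 5 pods, never below 4 ready
print(rolling_update_bounds(4, 1, 0))
# The defaults: 25% surge and 25% unavailable
print(rolling_update_bounds(4, "25%", "25%"))
```

With maxUnavailable: 0 the controller can only make progress by surging, which is why a failing readiness probe on the new version freezes the rollout instead of degrading availability.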
┌────────────────────────────────────────────────────────────┐
│                       Rolling Update                       │
│               (maxSurge=1, maxUnavailable=1)               │
│                                                            │
│  Desired: 4 replicas                                       │
│                                                            │
│  Step 1: Start with old version                            │
│    [v1] [v1] [v1] [v1]                                     │
│                                                            │
│  Step 2: Create 1 new pod (maxSurge=1)                     │
│    [v1] [v1] [v1] [v1] [v2-creating]                       │
│                                                            │
│  Step 3: v2 ready, terminate 1 old (maxUnavailable=1)      │
│    [v1] [v1] [v1] [v2] [v1-terminating]                    │
│                                                            │
│  Step 4: Continue rolling                                  │
│    [v1] [v1] [v2] [v2] [v1-terminating]                    │
│                                                            │
│  Step 5: Continue rolling                                  │
│    [v1] [v2] [v2] [v2] [v1-terminating]                    │
│                                                            │
│  Step 6: Complete                                          │
│    [v2] [v2] [v2] [v2]                                     │
│                                                            │
└────────────────────────────────────────────────────────────┘
Terminal window
# Update image (triggers rolling update)
kubectl set image deployment/nginx nginx=nginx:1.26
# Update with record (saves command in history)
kubectl set image deployment/nginx nginx=nginx:1.26 --record
# Update environment variable
kubectl set env deployment/nginx ENV=production
# Update resources
kubectl set resources deployment/nginx -c nginx --limits=cpu=200m,memory=512Mi
# Edit deployment (any change to pod template triggers update)
kubectl edit deployment nginx
Terminal window
# Watch rollout progress
kubectl rollout status deployment/nginx
# Watch pods during update
kubectl get pods -w
# Watch ReplicaSets
kubectl get rs -w

Exam Tip

During the exam, use kubectl set image for quick updates. It’s faster than editing YAML. The --record flag saves the command in rollout history, but note it is deprecated; the modern equivalent is setting the kubernetes.io/change-cause annotation yourself.


Terminal window
# View history
kubectl rollout history deployment/nginx
# View specific revision
kubectl rollout history deployment/nginx --revision=2
Terminal window
# Rollback to previous version
kubectl rollout undo deployment/nginx
# Rollback to specific revision
kubectl rollout undo deployment/nginx --to-revision=2
# Verify rollback
kubectl rollout status deployment/nginx
kubectl get deployment nginx -o wide
┌────────────────────────────────────────────────────────────┐
│                      Rollback Process                      │
│                                                            │
│  Before Rollback:                                          │
│  ┌──────────────────────────────────────────────────────┐  │
│  │ ReplicaSet v1 (replicas: 0)  ← old version           │  │
│  │ ReplicaSet v2 (replicas: 4)  ← current               │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
│            kubectl rollout undo deployment/nginx           │
│                                                            │
│  After Rollback:                                           │
│  ┌──────────────────────────────────────────────────────┐  │
│  │ ReplicaSet v1 (replicas: 4)  ← restored              │  │
│  │ ReplicaSet v2 (replicas: 0)  ← scaled down           │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
│  Deployment keeps old ReplicaSets for rollback capability  │
│                                                            │
└────────────────────────────────────────────────────────────┘
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  revisionHistoryLimit: 10   # Keep 10 old ReplicaSets (default)
                             # Set to 0 to disable rollback capability
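The revision numbering after a rollback surprises people: undoing to revision N re-publishes that revision’s pod template as a brand-new latest revision, and the old entry N disappears from the history. A toy model of that bookkeeping (a simplified sketch, not the real controller):

```python
def rollout_undo(history: dict, to_revision: int) -> dict:
    """Model `kubectl rollout undo --to-revision=N` on a revision->template map."""
    template = history.pop(to_revision)          # old entry N is consumed...
    new_revision = max(history, default=0) + 1
    history[new_revision] = template             # ...and re-added as the latest
    return history

history = {1: "nginx:1.24", 2: "nginx:1.25", 3: "nginx:1.26"}
rollout_undo(history, 2)
print(sorted(history))   # [1, 3, 4] -- revision 2 now appears "missing"
print(history[4])        # nginx:1.25
```

This is why a real `kubectl rollout history` can show gaps like 1, 2, 4, 5: somewhere along the way, someone rolled back to revision 3.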

War Story: The Accidental Production Outage

A team deployed a broken image to production. Panic ensued. The engineer who knew about kubectl rollout undo saved the day in seconds. The engineer who didn’t spent 20 minutes trying to figure out the previous image tag. Know your rollback commands!


Pause a deployment to:

  • Make multiple changes without triggering multiple rollouts
  • Batch updates together
  • Debug without new pods being created
Terminal window
# Pause deployment
kubectl rollout pause deployment/nginx
# Make multiple changes (no rollout triggered)
kubectl set image deployment/nginx nginx=nginx:1.26
kubectl set resources deployment/nginx -c nginx --limits=cpu=200m
kubectl set env deployment/nginx ENV=production
# Resume - triggers single rollout with all changes
kubectl rollout resume deployment/nginx
# Watch the rollout
kubectl rollout status deployment/nginx

Stop and think: You need to update a legacy application that writes to a shared file on a PersistentVolume. Running two versions simultaneously would corrupt the file. Would a RollingUpdate strategy work here? What strategy should you use instead, and what is the trade-off?

Use Recreate when:

  • Application can’t run multiple versions simultaneously
  • Database schema incompatibility between versions
  • Limited resources (can’t run extra pods)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  strategy:
    type: Recreate   # All pods deleted, then new pods created
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: db
          image: postgres:15
| Aspect         | RollingUpdate                  | Recreate                        |
| -------------- | ------------------------------ | ------------------------------- |
| Downtime       | Zero (if configured correctly) | Yes                             |
| Resource usage | Higher during update           | Same                            |
| Complexity     | Higher                         | Simple                          |
| Use case       | Stateless apps                 | Stateful, incompatible versions |

Terminal window
# View conditions
kubectl get deployment nginx -o jsonpath='{.status.conditions[*].type}'
# Detailed conditions
kubectl describe deployment nginx | grep -A10 Conditions
| Condition      | Meaning                    |
| -------------- | -------------------------- |
| Available      | Minimum replicas available |
| Progressing    | Rollout in progress        |
| ReplicaFailure | Failed to create pods      |
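As a sketch of how you might act on these conditions programmatically, here is a small health check over the `status.conditions` list as returned by `kubectl get deploy -o json`. The field names match the API shape, but this is an illustrative helper, not client-go.

```python
def rollout_healthy(status: dict) -> tuple:
    """Judge a Deployment's health from its status.conditions list."""
    conditions = {c["type"]: c for c in status.get("conditions", [])}
    if conditions.get("ReplicaFailure", {}).get("status") == "True":
        return (False, "pod creation failing")
    progressing = conditions.get("Progressing", {})
    # ProgressDeadlineExceeded is the signal that a rollout is stuck
    if progressing.get("reason") == "ProgressDeadlineExceeded":
        return (False, "rollout stuck past progress deadline")
    if conditions.get("Available", {}).get("status") != "True":
        return (False, "below minimum availability")
    return (True, "ok")

stuck = {"conditions": [
    {"type": "Available", "status": "False"},
    {"type": "Progressing", "status": "False",
     "reason": "ProgressDeadlineExceeded"},
]}
print(rollout_healthy(stuck))  # (False, 'rollout stuck past progress deadline')
```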

Stop and think: If a Deployment is stuck in a Progressing state but never becomes Available, where is the first place you should look to understand why the new pods aren’t starting?

When a rollout gets stuck, it’s usually because the new pods are failing to start or become ready. The RollingUpdate strategy pauses to prevent a full outage. Here is a concrete workflow to diagnose and recover from a stuck rollout.

Step 1: Deploy a broken update

Terminal window
# Update to an image tag that doesn't exist
kubectl set image deployment/nginx nginx=nginx:broken-tag
# The rollout will hang
kubectl rollout status deployment/nginx
# Output: Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...

(Press Ctrl+C to exit the hanging rollout status)

Step 2: Inspect the Deployment Start at the top level to see the status and conditions.

Terminal window
kubectl describe deployment nginx

Look at the Conditions and Events at the bottom of the output. You will likely see that the deployment is Progressing but lacking minimum availability.

Step 3: Check the ReplicaSets The Deployment creates a new ReplicaSet for the update. Let’s find it.

Terminal window
kubectl get replicasets -l app=nginx

You will see the old ReplicaSet with desired/current pods matching your previous state, and a new ReplicaSet with DESIRED 1, CURRENT 1, but READY 0.

Step 4: Inspect the Pod Events Find the specific pod that is failing to start.

Terminal window
kubectl get pods -l app=nginx
# Look for the pod in ImagePullBackOff or CrashLoopBackOff status

Describe the failing pod or check cluster events to identify the exact error.

Terminal window
# Describe the specific failing pod
kubectl describe pod <failing-pod-name>
# Or check recent events in the namespace
kubectl get events --sort-by='.metadata.creationTimestamp' | tail -n 10

In this scenario, you will see a Failed to pull image "nginx:broken-tag" event revealing the root cause.

Step 5: Safely Rollback Now that you have identified the root cause (a bad image tag), cancel the stuck rollout and restore the previous state.

Terminal window
kubectl rollout undo deployment/nginx
kubectl rollout status deployment/nginx

  • Deployments are declarative: You specify desired state, Kubernetes figures out how to get there.

  • ReplicaSets are immutable: When you update a Deployment, a new ReplicaSet is created. The old one is kept for rollback.

  • Default strategy is RollingUpdate with maxSurge: 25% and maxUnavailable: 25%.

  • --record is deprecated in newer versions but still works. The modern alternative is to set the kubernetes.io/change-cause annotation on the Deployment yourself; rollout history displays it in the CHANGE-CAUSE column.


| Mistake                       | Problem                         | Solution                                                     |
| ----------------------------- | ------------------------------- | ------------------------------------------------------------ |
| Labels don’t match selector   | Deployment doesn’t manage pods  | Ensure selector.matchLabels matches template.metadata.labels |
| Missing resource limits       | Pods can starve other workloads | Always set requests and limits                               |
| Rolling back without checking | May restore broken version      | Check rollout history --revision=N first                     |
| Using latest tag              | Rollout may not trigger         | Use specific version tags                                    |
| Not verifying rollout         | Assuming success                | Always run rollout status                                    |

  1. Your team pushed image api:v2.1 to a Deployment running api:v2.0 with 4 replicas. Five minutes later, users report 500 errors from about half their requests. You check and see 2 pods running v2.0 and 2 running v2.1 (one of which is in CrashLoopBackOff). What happened during the rolling update, and what should you do right now?

    Answer The rolling update is stuck because the v2.1 pods are crashing, so the rollout cannot proceed -- the Deployment controller waits for new pods to become Ready before terminating more old pods. (The 500 errors come from the v2.1 pod that passed its readiness check before starting to fail.) This is actually the RollingUpdate strategy protecting you from a full outage. You should immediately run `kubectl rollout undo deployment/api` to roll back to v2.0, which restores all 4 replicas to the working version. Then investigate the v2.1 crash using `kubectl logs --previous` and `kubectl describe pod` before attempting the update again.
  2. An engineer on your team wants to rollback a Deployment but isn’t sure which revision to target. When they run kubectl rollout history, they see revisions 1, 2, 4, and 5 (revision 3 is missing). Explain why revision 3 is gone and how to safely rollback to the version that was running two releases ago.

    Answer Revision 3 is "missing" because someone previously rolled back to revision 3: when you undo to a revision, its pod template is re-published as a brand-new latest revision and the old entry is removed from history, so revision 3's template now lives under a higher number. To find the right target, use `kubectl rollout history deployment/nginx --revision=N` to inspect each remaining revision's pod template (image, env vars, resources). Once you identify the correct revision, run `kubectl rollout undo deployment/nginx --to-revision=N`. Always inspect before rolling back to avoid restoring a known-bad version.
  3. Your application writes to a shared database. During a RollingUpdate, both the old and new versions run simultaneously. A colleague suggests using Recreate strategy instead to avoid running two versions at once. What are the trade-offs, and is there a better approach that avoids downtime AND version conflicts?

    Answer The Recreate strategy terminates all old pods before creating new ones, which avoids running two versions simultaneously but causes complete downtime during the transition. For database-backed applications, a better approach is to make your application backward-compatible: design database migrations to work with both old and new code (e.g., add new columns but don't remove old ones until the next release). This lets you safely use RollingUpdate with zero downtime. If backward compatibility is truly impossible, Recreate is the right choice, but you should accept the downtime window and communicate it to users.
  4. During an on-call shift, you need to update a production Deployment’s image, resource limits, and environment variables. You’re worried about triggering three separate rollouts, which would churn pods unnecessarily. How do you batch all changes into a single rollout? Write the exact commands.

    Answer Use the pause/resume pattern to batch all changes into one atomic rollout: ```bash kubectl rollout pause deployment/nginx kubectl set image deployment/nginx nginx=nginx:1.26 kubectl set resources deployment/nginx -c nginx --limits=cpu=200m kubectl set env deployment/nginx ENV=production kubectl rollout resume deployment/nginx ``` While paused, the Deployment records all changes but does not create new pods. When you resume, a single rolling update applies all three changes at once. This is especially important in production to minimize pod churn and keep the rollout history clean (one revision instead of three).

Task: Complete deployment lifecycle—create, scale, update, rollback.

Steps:

  1. Create a deployment:
Terminal window
kubectl create deployment webapp --image=nginx:1.24 --replicas=3
kubectl rollout status deployment/webapp
  2. Verify deployment and ReplicaSet:
Terminal window
kubectl get deployment webapp
kubectl get replicaset
kubectl get pods -l app=webapp
  3. Scale the deployment:
Terminal window
kubectl scale deployment webapp --replicas=5
kubectl get pods -w # Watch pods scale up
  4. Update image (rolling update):
Terminal window
kubectl set image deployment/webapp nginx=nginx:1.25 --record
kubectl rollout status deployment/webapp
  5. Check rollout history:
Terminal window
kubectl rollout history deployment/webapp
kubectl get replicaset # Notice two ReplicaSets now
  6. Deploy a “bad” version:
Terminal window
kubectl set image deployment/webapp nginx=nginx:broken --record
kubectl rollout status deployment/webapp # Will hang or fail
kubectl get pods # Some in ImagePullBackOff
  7. Rollback to previous version:
Terminal window
kubectl rollout undo deployment/webapp
kubectl rollout status deployment/webapp
kubectl get pods # Back to healthy state
  8. Check history and rollback to specific revision:
Terminal window
kubectl rollout history deployment/webapp
kubectl rollout undo deployment/webapp --to-revision=1
kubectl rollout status deployment/webapp
  9. Cleanup:
Terminal window
kubectl delete deployment webapp

Success Criteria:

  • Can create deployments imperatively and declaratively
  • Understand Deployment → ReplicaSet → Pod hierarchy
  • Can scale deployments
  • Can perform rolling updates
  • Can rollback to previous versions
  • Understand rollout history

Drill 1: Deployment Creation Speed Test (Target: 2 minutes)

Terminal window
# Create deployment
kubectl create deployment nginx --image=nginx:1.25 --replicas=3
# Verify
kubectl rollout status deployment/nginx
kubectl get deploy nginx
kubectl get rs
kubectl get pods -l app=nginx
# Cleanup
kubectl delete deployment nginx

Drill 2: Rolling Update (Target: 3 minutes)

Terminal window
# Create deployment
kubectl create deployment web --image=nginx:1.24 --replicas=4
# Wait for ready
kubectl rollout status deployment/web
# Update image
kubectl set image deployment/web nginx=nginx:1.25
# Watch the rollout
kubectl rollout status deployment/web
# Verify new image
kubectl get deployment web -o jsonpath='{.spec.template.spec.containers[0].image}'
# Cleanup
kubectl delete deployment web
Drill 3: Update and Rollback

Terminal window
# Create deployment
kubectl create deployment app --image=nginx:1.24 --replicas=3
kubectl rollout status deployment/app
# Update 1
kubectl set image deployment/app nginx=nginx:1.25 --record
kubectl rollout status deployment/app
# Update 2 (bad version)
kubectl set image deployment/app nginx=nginx:bad --record
# Don't wait - it will fail
# Check history
kubectl rollout history deployment/app
# Rollback
kubectl rollout undo deployment/app
kubectl rollout status deployment/app
# Verify rolled back
kubectl get deployment app -o jsonpath='{.spec.template.spec.containers[0].image}'
# Should be nginx:1.25
# Cleanup
kubectl delete deployment app
Drill 4: Scaling

Terminal window
# Create deployment
kubectl create deployment scale-test --image=nginx --replicas=2
# Scale up
kubectl scale deployment scale-test --replicas=5
kubectl get pods -l app=scale-test
# Scale down
kubectl scale deployment scale-test --replicas=1
kubectl get pods -l app=scale-test
# Scale to zero
kubectl scale deployment scale-test --replicas=0
kubectl get pods -l app=scale-test # No pods
# Scale back up
kubectl scale deployment scale-test --replicas=3
# Cleanup
kubectl delete deployment scale-test

Drill 5: Pause and Resume (Target: 3 minutes)

Terminal window
# Create deployment
kubectl create deployment paused --image=nginx:1.24 --replicas=2
kubectl rollout status deployment/paused
# Pause
kubectl rollout pause deployment/paused
# Make multiple changes (no rollout triggered)
kubectl set image deployment/paused nginx=nginx:1.25
kubectl set env deployment/paused ENV=production
kubectl set resources deployment/paused -c nginx --requests=cpu=100m
# Check - still old image
kubectl get deployment paused -o jsonpath='{.spec.template.spec.containers[0].image}'
# Resume - single rollout
kubectl rollout resume deployment/paused
kubectl rollout status deployment/paused
# Verify all changes applied
kubectl get deployment paused -o yaml | grep -E "image:|ENV|cpu"
# Cleanup
kubectl delete deployment paused

Drill 6: Recreate Strategy (Target: 3 minutes)

Terminal window
# Create deployment with Recreate strategy
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.24
EOF
kubectl rollout status deployment/recreate-demo
# Update - watch all pods terminate then new ones create
kubectl set image deployment/recreate-demo nginx=nginx:1.25
# Watch pods (all old terminate, then all new create)
kubectl get pods -w -l app=recreate-demo
# Cleanup
kubectl delete deployment recreate-demo

Drill 7: YAML Generation and Modification (Target: 5 minutes)

Terminal window
# Generate YAML
kubectl create deployment myapp --image=nginx:1.25 --replicas=3 --dry-run=client -o yaml > myapp.yaml
# View generated YAML
cat myapp.yaml
# Add resource limits by editing the file: under the container spec, add
#   resources:
#     requests: {cpu: 100m, memory: 128Mi}
#     limits: {cpu: 200m, memory: 256Mi}
vi myapp.yaml
# Apply the deployment
kubectl apply -f myapp.yaml
# Update via edit
kubectl edit deployment myapp
# Change replicas to 5, save
# Verify
kubectl get deployment myapp
# Cleanup
kubectl delete -f myapp.yaml
rm myapp.yaml

Without looking at solutions, complete this workflow in under 5 minutes:

  1. Create deployment lifecycle-test with nginx:1.24, 3 replicas
  2. Scale to 5 replicas
  3. Update to nginx:1.25
  4. Check rollout history
  5. Update to nginx:1.26
  6. Rollback to nginx:1.24 (revision 1)
  7. Delete deployment
Terminal window
# YOUR TASK: Complete the workflow
Solution
Terminal window
# 1. Create
kubectl create deployment lifecycle-test --image=nginx:1.24 --replicas=3
kubectl rollout status deployment/lifecycle-test
# 2. Scale
kubectl scale deployment lifecycle-test --replicas=5
# 3. Update to 1.25
kubectl set image deployment/lifecycle-test nginx=nginx:1.25 --record
kubectl rollout status deployment/lifecycle-test
# 4. Check history
kubectl rollout history deployment/lifecycle-test
# 5. Update to 1.26
kubectl set image deployment/lifecycle-test nginx=nginx:1.26 --record
kubectl rollout status deployment/lifecycle-test
# 6. Rollback to revision 1
kubectl rollout undo deployment/lifecycle-test --to-revision=1
kubectl rollout status deployment/lifecycle-test
# Verify it's 1.24
kubectl get deployment lifecycle-test -o jsonpath='{.spec.template.spec.containers[0].image}'
# 7. Delete
kubectl delete deployment lifecycle-test

Module 2.3: DaemonSets & StatefulSets - Specialized workload controllers.