Module 2.2: Deployments & ReplicaSets
Complexity: MEDIUM (core exam topic)
Time to Complete: 45-55 minutes
Prerequisites: Module 2.1 (Pods)
What You’ll Be Able to Do
After this module, you will be able to:
- Create Deployments with rolling update strategy and configure maxSurge/maxUnavailable
- Perform rollouts, rollbacks, and history inspection under CKA time pressure
- Diagnose a stuck rollout by checking ReplicaSet status, pod events, and resource availability
- Explain the Deployment → ReplicaSet → Pod ownership chain and why old ReplicaSets are retained
Why This Module Matters
In production, you never run standalone pods. You use Deployments.
Deployments are the most common workload resource. They handle:
- Running multiple replicas of your app
- Rolling updates with zero downtime
- Automatic rollbacks when things go wrong
- Scaling up and down
The CKA exam tests creating deployments, performing rolling updates, scaling, and rollbacks. These are fundamental skills you’ll use daily.
The Fleet Manager Analogy
Think of a Deployment like a fleet manager for a taxi company. The manager doesn’t drive taxis directly—they manage drivers (pods). If a driver calls in sick (pod crashes), the manager assigns a replacement. If demand increases (scale up), the manager hires more drivers. During a vehicle upgrade (rolling update), the manager swaps old cars for new ones gradually, ensuring customers always have rides available.
What You’ll Learn
By the end of this module, you'll be able to:
- Create and manage Deployments
- Understand how ReplicaSets work
- Perform rolling updates and rollbacks
- Scale applications horizontally
- Pause and resume deployments
Part 1: Deployment Fundamentals
1.1 The Deployment Hierarchy

```
Deployment
  - Desired state (replicas, image, strategy)
  - Manages ReplicaSets
        │ creates & manages
        ▼
ReplicaSet
  - Ensures N replicas are running
  - Creates/deletes pods to match the desired count
        │ creates & manages
        ▼
Pod 1   Pod 2   Pod 3   ...   Pod N
```

1.2 Why Not Just ReplicaSets?
| Feature | ReplicaSet | Deployment |
|---|---|---|
| Maintain replica count | ✅ | ✅ |
| Rolling updates | ❌ | ✅ |
| Rollback | ❌ | ✅ |
| Update history | ❌ | ✅ |
| Pause/Resume | ❌ | ✅ |
Rule: Always use Deployments. Never create ReplicaSets directly.
1.3 Deployment Spec
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3        # Desired pod count
  selector:          # How to find pods to manage
    matchLabels:
      app: nginx
  template:          # Pod template
    metadata:
      labels:
        app: nginx   # Must match selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Critical: `spec.selector.matchLabels` must match `spec.template.metadata.labels`. In `apps/v1`, the API server validates this at creation time and rejects a Deployment whose selector does not match its template labels.
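As a quick negative example (a hypothetical manifest, not from this module's exercises), a spec whose labels disagree is rejected by the API server with a `selector does not match template labels` error:

```yaml
# Rejected at creation: selector does not match template labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bad-labels
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web            # selector looks for app=web...
  template:
    metadata:
      labels:
        app: website      # ...but the template sets app=website
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```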
Part 2: Creating Deployments
2.1 Imperative Commands (Fast for Exam)

```shell
# Create a deployment
kubectl create deployment nginx --image=nginx

# Create with a specific replica count
kubectl create deployment nginx --image=nginx --replicas=3

# Create with a container port
kubectl create deployment nginx --image=nginx --port=80

# Generate YAML (essential for the exam!)
kubectl create deployment nginx --image=nginx --replicas=3 \
  --dry-run=client -o yaml > deploy.yaml
```

2.2 From YAML
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
```

```shell
kubectl apply -f nginx-deployment.yaml
```

2.3 Viewing Deployments
```shell
# List deployments
kubectl get deployments
kubectl get deploy              # Short form

# Detailed view
kubectl get deploy -o wide

# Describe a deployment
kubectl describe deployment nginx

# Get deployment YAML
kubectl get deployment nginx -o yaml

# Check rollout status
kubectl rollout status deployment/nginx
```

Did You Know?

The `kubectl rollout status` command blocks until the rollout completes. It's perfect for CI/CD pipelines: if the rollout fails, the command exits with a non-zero status.
Part 3: ReplicaSets Under the Hood
3.1 How ReplicaSets Work

When you create a Deployment:
- Deployment controller creates a ReplicaSet
- ReplicaSet controller creates pods
- ReplicaSet ensures desired replicas match actual
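That last step is a reconciliation loop: compare the desired count with the observed count and act on the difference. A toy shell sketch of the decision (not the real controller, which is a Go control loop watching the API server):

```shell
#!/bin/sh
# Toy sketch of the ReplicaSet reconciliation decision:
# compare desired vs. actual replicas and act on the difference.
desired=3
actual=1

diff=$(( desired - actual ))
if [ "$diff" -gt 0 ]; then
  echo "create $diff pod(s)"     # scale up
elif [ "$diff" -lt 0 ]; then
  echo "delete $(( -diff )) pod(s)"  # scale down
else
  echo "in sync"
fi
```

With `desired=3` and `actual=1` this prints `create 2 pod(s)`, which is exactly what happens when a pod crashes: the observed count drops and the controller creates a replacement.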
```shell
# Create a deployment
kubectl create deployment nginx --image=nginx --replicas=3

# See the ReplicaSet it created
kubectl get replicasets
# NAME               DESIRED   CURRENT   READY   AGE
# nginx-5d5dd5d5fb   3         3         3       30s

# See the pods and their labels
kubectl get pods --show-labels
```

Pause and predict: After updating a Deployment's image twice (v1 -> v2 -> v3), how many ReplicaSets will exist? What are their replica counts? Why does Kubernetes keep the old ones?
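Labels hint at ownership, but the authoritative link is the pod's `ownerReferences` field, which points at the ReplicaSet that created it. A sketch of what you'd see in a pod's metadata (the name, hash, and uid here are placeholder values):

```yaml
# Excerpt from `kubectl get pod <pod-name> -o yaml` (uid is a placeholder)
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: nginx-5d5dd5d5fb
    uid: 00000000-0000-0000-0000-000000000000
    controller: true
    blockOwnerDeletion: true
```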
3.2 ReplicaSet Naming
```
nginx-5d5dd5d5fb
^     ^
|     |
|     └── Hash of the pod template
└── Deployment name
```

When you update the deployment, a new ReplicaSet is created with a different hash.
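The same hash appears in pod names as `<deployment>-<pod-template-hash>-<random suffix>`, and as the `pod-template-hash` label. A small shell sketch pulling the pieces out of an example pod name (the hash and suffix below are made-up values):

```shell
#!/bin/sh
# Example pod name: <deployment>-<pod-template-hash>-<random suffix>
# (hash and suffix below are made-up values)
pod_name="nginx-5d5dd5d5fb-x7x9q"

# Strip the trailing random suffix -> the owning ReplicaSet's name
rs_name="${pod_name%-*}"

# Strip everything up to the last dash -> the pod-template hash
hash="${rs_name##*-}"

echo "replicaset=$rs_name hash=$hash"
```

This is why you can always trace a pod back to its ReplicaSet (and the ReplicaSet back to its Deployment) just by reading the name.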
3.3 Don’t Manage ReplicaSets Directly
```shell
# Don't do this - let the Deployment manage its ReplicaSets
kubectl scale replicaset nginx-5d5dd5d5fb --replicas=5   # BAD

# Do this instead
kubectl scale deployment nginx --replicas=5              # GOOD
```

Part 4: Scaling
4.1 Manual Scaling

```shell
# Scale to a specific replica count
kubectl scale deployment nginx --replicas=5

# Scale to zero (stop all pods)
kubectl scale deployment nginx --replicas=0

# Scale multiple deployments at once
kubectl scale deployment nginx webapp --replicas=3
```

4.2 Editing a Deployment
```shell
# Edit the deployment directly
kubectl edit deployment nginx
# Change spec.replicas and save

# Or patch it
kubectl patch deployment nginx -p '{"spec":{"replicas":5}}'
```

4.3 Verifying Scale
```shell
# Watch pods scale
kubectl get pods -w

# Check deployment status
kubectl get deployment nginx
# NAME    READY   UP-TO-DATE   AVAILABLE   AGE
# nginx   5/5     5            5           10m

# Detailed status
kubectl rollout status deployment/nginx
```

Part 5: Rolling Updates
Pause and predict: You have a Deployment with 4 replicas, `maxSurge: 1`, and `maxUnavailable: 0`. During a rolling update, what is the maximum number of pods running at any point? What happens if the new version fails its readiness probe?
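When `maxSurge` and `maxUnavailable` are given as percentages (the defaults are 25%/25%), Kubernetes converts them to absolute pod counts before the update: `maxSurge` rounds up and `maxUnavailable` rounds down. A shell sketch of that arithmetic for a 4-replica Deployment:

```shell
#!/bin/sh
# Convert percentage-based surge/unavailable values to absolute counts.
# Kubernetes rounds maxSurge up and maxUnavailable down.
replicas=4
surge_pct=25
unavail_pct=25

max_surge=$(( (replicas * surge_pct + 99) / 100 ))   # ceiling
max_unavail=$(( replicas * unavail_pct / 100 ))      # floor

echo "maxSurge=$max_surge maxUnavailable=$max_unavail"
```

With 4 replicas this yields `maxSurge=1 maxUnavailable=1`: at most 5 pods exist during the update, and no fewer than 3 are available.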
5.1 Update Strategy
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  strategy:
    type: RollingUpdate    # Default strategy
    rollingUpdate:
      maxSurge: 1          # Max pods over desired during an update
      maxUnavailable: 1    # Max pods unavailable during an update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

5.2 Rolling Update Visualization

```
Rolling update (maxSurge=1, maxUnavailable=1), desired: 4 replicas

Step 1: Start with the old version
  [v1] [v1] [v1] [v1]

Step 2: Create 1 new pod (maxSurge=1)
  [v1] [v1] [v1] [v1] [v2-creating]

Step 3: v2 ready; terminate 1 old pod (maxUnavailable=1)
  [v1] [v1] [v1] [v2] [v1-terminating]

Step 4: Continue rolling
  [v1] [v1] [v2] [v2] [v1-terminating]

Step 5: Continue rolling
  [v1] [v2] [v2] [v2] [v1-terminating]

Step 6: Complete
  [v2] [v2] [v2] [v2]
```

5.3 Triggering Updates
```shell
# Update the image (triggers a rolling update)
kubectl set image deployment/nginx nginx=nginx:1.26

# Update with --record (saves the command in rollout history)
kubectl set image deployment/nginx nginx=nginx:1.26 --record

# Update an environment variable
kubectl set env deployment/nginx ENV=production

# Update resources
kubectl set resources deployment/nginx -c nginx --limits=cpu=200m,memory=512Mi

# Edit the deployment (any change to the pod template triggers an update)
kubectl edit deployment nginx
```

5.4 Watching Updates

```shell
# Watch rollout progress
kubectl rollout status deployment/nginx

# Watch pods during the update
kubectl get pods -w

# Watch ReplicaSets
kubectl get rs -w
```

Exam Tip

During the exam, use `kubectl set image` for quick updates. It's faster than editing YAML. Add `--record` to save the command in the rollout history.
Part 6: Rollbacks
6.1 View Rollout History

```shell
# View history
kubectl rollout history deployment/nginx

# View a specific revision
kubectl rollout history deployment/nginx --revision=2
```

6.2 Performing a Rollback

```shell
# Roll back to the previous version
kubectl rollout undo deployment/nginx

# Roll back to a specific revision
kubectl rollout undo deployment/nginx --to-revision=2

# Verify the rollback
kubectl rollout status deployment/nginx
kubectl get deployment nginx -o wide
```

6.3 How Rollback Works

```
Before rollback:
  ReplicaSet v1 (replicas: 0)   ← old version
  ReplicaSet v2 (replicas: 4)   ← current

kubectl rollout undo deployment/nginx

After rollback:
  ReplicaSet v1 (replicas: 4)   ← restored
  ReplicaSet v2 (replicas: 0)   ← scaled down
```

The Deployment keeps old ReplicaSets precisely so it can roll back to them.

6.4 Controlling History

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  revisionHistoryLimit: 10   # Keep 10 old ReplicaSets (the default)
                             # Set to 0 to disable rollback capability
```

War Story: The Accidental Production Outage

A team deployed a broken image to production. Panic ensued. The engineer who knew about `kubectl rollout undo` saved the day in seconds. The engineer who didn't spent 20 minutes trying to figure out the previous image tag. Know your rollback commands!
Part 7: Pause and Resume
7.1 Why Pause?

Pause a deployment to:
- Make multiple changes without triggering multiple rollouts
- Batch updates together
- Debug without new pods being created

7.2 Using Pause/Resume

```shell
# Pause the deployment
kubectl rollout pause deployment/nginx

# Make multiple changes (no rollout is triggered)
kubectl set image deployment/nginx nginx=nginx:1.26
kubectl set resources deployment/nginx -c nginx --limits=cpu=200m
kubectl set env deployment/nginx ENV=production

# Resume - triggers a single rollout with all changes
kubectl rollout resume deployment/nginx

# Watch the rollout
kubectl rollout status deployment/nginx
```

Part 8: Recreate Strategy
Stop and think: You need to update a legacy application that writes to a shared file on a PersistentVolume. Running two versions simultaneously would corrupt the file. Would a RollingUpdate strategy work here? What strategy should you use instead, and what is the trade-off?
8.1 When to Use Recreate
Use Recreate when:
- Application can’t run multiple versions simultaneously
- Database schema incompatibility between versions
- Limited resources (can’t run extra pods)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  strategy:
    type: Recreate   # All pods are deleted, then new pods are created
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: db
        image: postgres:15
```

8.2 Recreate vs RollingUpdate

| Aspect | RollingUpdate | Recreate |
|---|---|---|
| Downtime | Zero (if configured correctly) | Yes |
| Resource usage | Higher during update | Same |
| Complexity | Higher | Simple |
| Use case | Stateless apps | Stateful, incompatible versions |
Part 9: Deployment Conditions
9.1 Checking Conditions

```shell
# View condition types
kubectl get deployment nginx -o jsonpath='{.status.conditions[*].type}'

# Detailed conditions
kubectl describe deployment nginx | grep -A10 Conditions
```

9.2 Common Conditions
| Condition | Meaning |
|---|---|
| `Available` | Minimum replicas are available |
| `Progressing` | Rollout in progress |
| `ReplicaFailure` | Failed to create pods |
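In the object's status, these appear as condition entries with a reason and message. A sketch of what a healthy Deployment's `status.conditions` typically looks like (timestamps omitted; the ReplicaSet name is a placeholder):

```yaml
# Excerpt from `kubectl get deployment nginx -o yaml`
status:
  conditions:
  - type: Available
    status: "True"
    reason: MinimumReplicasAvailable
    message: Deployment has minimum availability.
  - type: Progressing
    status: "True"
    reason: NewReplicaSetAvailable
    message: ReplicaSet "nginx-5d5dd5d5fb" has successfully progressed.
```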
9.3 Diagnosing Stuck Rollouts
Stop and think: If a Deployment is stuck in a `Progressing` state but never becomes `Available`, where is the first place you should look to understand why the new pods aren't starting?
When a rollout gets stuck, it’s usually because the new pods are failing to start or become ready. The RollingUpdate strategy pauses to prevent a full outage. Here is a concrete workflow to diagnose and recover from a stuck rollout.
Step 1: Deploy a broken update
```shell
# Update to an image tag that doesn't exist
kubectl set image deployment/nginx nginx=nginx:broken-tag

# The rollout will hang
kubectl rollout status deployment/nginx
# Output: Waiting for deployment "nginx" rollout to finish:
#         1 out of 3 new replicas have been updated...
# (Press Ctrl+C to exit the hanging rollout status)
```
Step 2: Inspect the Deployment

Start at the top level to see the status and conditions.

```shell
kubectl describe deployment nginx
```

Look at the Conditions and Events at the bottom of the output. You will likely see that the deployment is Progressing but lacks minimum availability.
Step 3: Check the ReplicaSets

The Deployment creates a new ReplicaSet for the update. Find it:

```shell
kubectl get replicasets -l app=nginx
```

You will see the old ReplicaSet with desired/current pods matching your previous state, and a new ReplicaSet with DESIRED 1, CURRENT 1, but READY 0.
Step 4: Inspect the Pod Events

Find the specific pod that is failing to start.

```shell
kubectl get pods -l app=nginx
# Look for a pod in ImagePullBackOff or CrashLoopBackOff status
```

Describe the failing pod or check cluster events to identify the exact error.

```shell
# Describe the specific failing pod
kubectl describe pod <failing-pod-name>

# Or check recent events in the namespace
kubectl get events --sort-by='.metadata.creationTimestamp' | tail -n 10
```

In this scenario, you will see a `Failed to pull image "nginx:broken-tag"` event revealing the root cause.
Step 5: Safely Roll Back

Now that you have identified the root cause (a bad image tag), cancel the stuck rollout and restore the previous state.

```shell
kubectl rollout undo deployment/nginx
kubectl rollout status deployment/nginx
```

Did You Know?
- Deployments are declarative: you specify the desired state, and Kubernetes figures out how to get there.
- ReplicaSet pod templates are immutable: when you update a Deployment, a new ReplicaSet is created, and the old one is kept for rollback.
- The default strategy is RollingUpdate with `maxSurge: 25%` and `maxUnavailable: 25%`.
- `--record` is deprecated in newer kubectl versions but still works; you can set the `kubernetes.io/change-cause` annotation manually to track changes instead.
Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Labels don't match selector | Deployment fails validation / doesn't manage pods | Ensure `selector.matchLabels` matches `template.metadata.labels` |
| Missing resource limits | Pods can starve other workloads | Always set requests and limits |
| Rolling back without checking | May restore a broken version | Check `rollout history --revision=N` first |
| Using the `latest` tag | Rollout may not trigger | Use specific version tags |
| Not verifying the rollout | Assuming success | Always run `rollout status` |
Review Scenarios

- Your team pushed image `api:v2.1` to a Deployment running `api:v2.0` with 4 replicas. Five minutes later, users report 500 errors from about half their requests. You check and see 2 pods running v2.0 and 2 running v2.1 (one of which is in CrashLoopBackOff). What happened during the rolling update, and what should you do right now?

  Answer: The rolling update is stuck because the v2.1 pods are crashing and failing their readiness probes, so the rollout cannot proceed: the Deployment controller waits for new pods to become Ready before terminating more old pods. This is the RollingUpdate strategy protecting you from a full outage. Immediately run `kubectl rollout undo deployment/api` to roll back to v2.0, which restores all 4 replicas to the working version. Then investigate the v2.1 crash using `kubectl logs` and `kubectl describe pod` before attempting the update again.
- An engineer on your team wants to roll back a Deployment but isn't sure which revision to target. When they run `kubectl rollout history`, they see revisions 1, 2, 4, and 5 (revision 3 is missing). Explain why revision 3 is gone and how to safely roll back to the version that was running two releases ago.

  Answer: When you roll back to a revision, that revision's pod template is re-recorded under the next revision number and the old entry disappears, leaving a gap. Here, someone most likely rolled back to revision 3 while the Deployment was at revision 4: revision 3's template became revision 5, and revision 3 vanished from the list. To find the right target, use `kubectl rollout history deployment/nginx --revision=N` to inspect each revision's pod template (image, env vars, resources). Once you identify the correct revision, run `kubectl rollout undo deployment/nginx --to-revision=N`. Always inspect before rolling back to avoid restoring a known-bad version.
- Your application writes to a shared database. During a RollingUpdate, both the old and new versions run simultaneously. A colleague suggests using the Recreate strategy instead to avoid running two versions at once. What are the trade-offs, and is there a better approach that avoids downtime AND version conflicts?

  Answer: The Recreate strategy terminates all old pods before creating new ones, which avoids running two versions simultaneously but causes complete downtime during the transition. For database-backed applications, a better approach is to make your application backward-compatible: design database migrations to work with both old and new code (e.g., add new columns but don't remove old ones until the next release). This lets you safely use RollingUpdate with zero downtime. If backward compatibility is truly impossible, Recreate is the right choice, but you should accept the downtime window and communicate it to users.
- During an on-call shift, you need to update a production Deployment's image, resource limits, and environment variables. You're worried about triggering three separate rollouts, which would churn pods unnecessarily. How do you batch all changes into a single rollout? Write the exact commands.

  Answer: Use the pause/resume pattern to batch all changes into one atomic rollout:

  ```bash
  kubectl rollout pause deployment/nginx
  kubectl set image deployment/nginx nginx=nginx:1.26
  kubectl set resources deployment/nginx -c nginx --limits=cpu=200m
  kubectl set env deployment/nginx ENV=production
  kubectl rollout resume deployment/nginx
  ```

  While paused, the Deployment records all changes but does not create new pods. When you resume, a single rolling update applies all three changes at once. This is especially important in production to minimize pod churn and keep the rollout history clean (one revision instead of three).
Hands-On Exercise
Task: Complete the deployment lifecycle: create, scale, update, and roll back.
Steps:
1. Create a deployment:

   ```shell
   kubectl create deployment webapp --image=nginx:1.24 --replicas=3
   kubectl rollout status deployment/webapp
   ```

2. Verify the deployment and ReplicaSet:

   ```shell
   kubectl get deployment webapp
   kubectl get replicaset
   kubectl get pods -l app=webapp
   ```

3. Scale the deployment:

   ```shell
   kubectl scale deployment webapp --replicas=5
   kubectl get pods -w   # Watch pods scale up
   ```

4. Update the image (rolling update):

   ```shell
   kubectl set image deployment/webapp nginx=nginx:1.25 --record
   kubectl rollout status deployment/webapp
   ```

5. Check the rollout history:

   ```shell
   kubectl rollout history deployment/webapp
   kubectl get replicaset   # Notice there are two ReplicaSets now
   ```

6. Deploy a "bad" version:

   ```shell
   kubectl set image deployment/webapp nginx=nginx:broken --record
   kubectl rollout status deployment/webapp   # Will hang or fail
   kubectl get pods                           # Some in ImagePullBackOff
   ```

7. Roll back to the previous version:

   ```shell
   kubectl rollout undo deployment/webapp
   kubectl rollout status deployment/webapp
   kubectl get pods   # Back to a healthy state
   ```

8. Check history and roll back to a specific revision:

   ```shell
   kubectl rollout history deployment/webapp
   kubectl rollout undo deployment/webapp --to-revision=1
   kubectl rollout status deployment/webapp
   ```

9. Clean up:

   ```shell
   kubectl delete deployment webapp
   ```

Success Criteria:
- Can create deployments imperatively and declaratively
- Understand Deployment → ReplicaSet → Pod hierarchy
- Can scale deployments
- Can perform rolling updates
- Can rollback to previous versions
- Understand rollout history
Practice Drills
Drill 1: Deployment Creation Speed Test (Target: 2 minutes)
```shell
# Create the deployment
kubectl create deployment nginx --image=nginx:1.25 --replicas=3

# Verify
kubectl rollout status deployment/nginx
kubectl get deploy nginx
kubectl get rs
kubectl get pods -l app=nginx

# Clean up
kubectl delete deployment nginx
```

Drill 2: Rolling Update (Target: 3 minutes)
```shell
# Create the deployment
kubectl create deployment web --image=nginx:1.24 --replicas=4

# Wait for it to be ready
kubectl rollout status deployment/web

# Update the image
kubectl set image deployment/web nginx=nginx:1.25

# Watch the rollout
kubectl rollout status deployment/web

# Verify the new image
kubectl get deployment web -o jsonpath='{.spec.template.spec.containers[0].image}'

# Clean up
kubectl delete deployment web
```

Drill 3: Rollback (Target: 3 minutes)
```shell
# Create the deployment
kubectl create deployment app --image=nginx:1.24 --replicas=3
kubectl rollout status deployment/app

# Update 1
kubectl set image deployment/app nginx=nginx:1.25 --record
kubectl rollout status deployment/app

# Update 2 (bad version)
kubectl set image deployment/app nginx=nginx:bad --record
# Don't wait - it will fail

# Check history
kubectl rollout history deployment/app

# Roll back
kubectl rollout undo deployment/app
kubectl rollout status deployment/app

# Verify it rolled back
kubectl get deployment app -o jsonpath='{.spec.template.spec.containers[0].image}'
# Should be nginx:1.25

# Clean up
kubectl delete deployment app
```

Drill 4: Scaling (Target: 2 minutes)
```shell
# Create the deployment
kubectl create deployment scale-test --image=nginx --replicas=2

# Scale up
kubectl scale deployment scale-test --replicas=5
kubectl get pods -l app=scale-test

# Scale down
kubectl scale deployment scale-test --replicas=1
kubectl get pods -l app=scale-test

# Scale to zero
kubectl scale deployment scale-test --replicas=0
kubectl get pods -l app=scale-test   # No pods

# Scale back up
kubectl scale deployment scale-test --replicas=3

# Clean up
kubectl delete deployment scale-test
```

Drill 5: Pause and Resume (Target: 3 minutes)
```shell
# Create the deployment
kubectl create deployment paused --image=nginx:1.24 --replicas=2
kubectl rollout status deployment/paused

# Pause
kubectl rollout pause deployment/paused

# Make multiple changes (no rollout is triggered)
kubectl set image deployment/paused nginx=nginx:1.25
kubectl set env deployment/paused ENV=production
kubectl set resources deployment/paused -c nginx --requests=cpu=100m

# Check - still the old image
kubectl get deployment paused -o jsonpath='{.spec.template.spec.containers[0].image}'

# Resume - a single rollout
kubectl rollout resume deployment/paused
kubectl rollout status deployment/paused

# Verify all changes applied
kubectl get deployment paused -o yaml | grep -E "image:|ENV|cpu"

# Clean up
kubectl delete deployment paused
```

Drill 6: Recreate Strategy (Target: 3 minutes)
```shell
# Create a deployment with the Recreate strategy
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
EOF

kubectl rollout status deployment/recreate-demo

# Update - all old pods terminate, then new ones are created
kubectl set image deployment/recreate-demo nginx=nginx:1.25

# Watch the pods (all old terminate, then all new create)
kubectl get pods -w -l app=recreate-demo

# Clean up
kubectl delete deployment recreate-demo
```

Drill 7: YAML Generation and Modification (Target: 5 minutes)
```shell
# Generate YAML
kubectl create deployment myapp --image=nginx:1.25 --replicas=3 \
  --dry-run=client -o yaml > myapp.yaml

# View the generated YAML
cat myapp.yaml

# Edit the file to add resource limits under the container, e.g.:
#   resources:
#     requests:
#       cpu: 100m
#       memory: 128Mi
vi myapp.yaml

# Apply the deployment
kubectl apply -f myapp.yaml

# Update via edit
kubectl edit deployment myapp
# Change replicas to 5 and save

# Verify
kubectl get deployment myapp

# Clean up
kubectl delete -f myapp.yaml
rm myapp.yaml
```

Drill 8: Challenge - Complete Lifecycle
Without looking at the solutions, complete this workflow in under 5 minutes:

1. Create deployment `lifecycle-test` with nginx:1.24 and 3 replicas
2. Scale to 5 replicas
3. Update to nginx:1.25
4. Check the rollout history
5. Update to nginx:1.26
6. Roll back to nginx:1.24 (revision 1)
7. Delete the deployment

```shell
# YOUR TASK: Complete the workflow
```

Solution
```shell
# 1. Create
kubectl create deployment lifecycle-test --image=nginx:1.24 --replicas=3
kubectl rollout status deployment/lifecycle-test

# 2. Scale
kubectl scale deployment lifecycle-test --replicas=5

# 3. Update to 1.25
kubectl set image deployment/lifecycle-test nginx=nginx:1.25 --record
kubectl rollout status deployment/lifecycle-test

# 4. Check history
kubectl rollout history deployment/lifecycle-test

# 5. Update to 1.26
kubectl set image deployment/lifecycle-test nginx=nginx:1.26 --record
kubectl rollout status deployment/lifecycle-test

# 6. Roll back to revision 1
kubectl rollout undo deployment/lifecycle-test --to-revision=1
kubectl rollout status deployment/lifecycle-test

# Verify it's 1.24
kubectl get deployment lifecycle-test -o jsonpath='{.spec.template.spec.containers[0].image}'

# 7. Delete
kubectl delete deployment lifecycle-test
```

Next Module
Module 2.3: DaemonSets & StatefulSets - Specialized workload controllers.