Module 2.5: Resource Management
Complexity: MEDIUM - Critical for production workloads
Time to Complete: 40-50 minutes
Prerequisites: Module 2.1 (Pods), Module 2.2 (Deployments)
What You’ll Be Able to Do
After this module, you will be able to:
- Configure resource requests and limits for CPU and memory and explain how they affect scheduling
- Implement LimitRanges and ResourceQuotas for namespace-level governance
- Diagnose resource-related failures (OOMKilled, CPU throttling, Pending due to insufficient resources)
- Design a resource strategy that balances cluster utilization with application reliability
Why This Module Matters
In production, containers compete for resources. Without proper configuration:
- A single pod can starve others
- Nodes become overcommitted
- Applications crash randomly
- Debugging becomes a nightmare
Resource management is essential for cluster stability. The CKA exam tests your understanding of requests, limits, QoS classes, and how they affect scheduling.
The Hotel Room Analogy
Think of a Kubernetes node like a hotel. Requests are like room reservations—guaranteed capacity you’ve booked. Limits are maximum occupancy rules—you can’t exceed them. Without reservations (requests), guests fight for rooms. Without limits, one party takes over the entire hotel. Good resource management ensures everyone gets what they need.
What You’ll Learn
By the end of this module, you’ll be able to:
- Configure CPU and memory requests and limits
- Understand how requests affect scheduling
- Understand how limits enforce boundaries
- Work with QoS classes
- Use LimitRanges and ResourceQuotas
- Resize pod resources in-place (K8s 1.35+)
Did You Know?
- In-Place Pod Resize is now GA: As of Kubernetes 1.35, you can change CPU and memory requests/limits on running pods without restarting them. This feature was alpha since K8s 1.27 and took 3 years to stabilize. Use kubectl patch to resize a running pod with no downtime required.
Part 1: Requests and Limits
1.1 The Basics

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # Minimum guaranteed resources
        memory: "128Mi"
        cpu: "100m"
      limits:            # Maximum allowed resources
        memory: "256Mi"
        cpu: "500m"
```

1.2 Requests vs Limits
| Aspect | Requests | Limits |
|---|---|---|
| Purpose | Scheduling guarantee | Hard cap |
| When used | Scheduler deciding placement | Container runtime enforcement |
| Underutilized | Other pods can use slack | N/A |
| Exceeded | N/A | Container killed (memory) or throttled (CPU) |
```text
Requests vs Limits

Memory: 128Mi request, 256Mi limit

0         128Mi      256Mi                          Node Memory
├─────────┼──────────┼───────────────────────────────────►
│ Reserved│ Can grow │ OOMKilled if exceeded
│ (guaran-│ into this│
│  teed)  │ space    │

CPU: 100m request, 500m limit

0         100m       500m                           Node CPU
├─────────┼──────────┼───────────────────────────────────►
│ Reserved│ Can burst│ Throttled (not killed)
│         │ up to    │
```

1.3 Resource Units
CPU (measured in cores):
| Value | Meaning |
|---|---|
| 1 | 1 CPU core |
| 1000m | 1 CPU core (millicores) |
| 100m | 0.1 CPU core (100 millicores) |
| 500m | 0.5 CPU core |
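Millicore conversion is just division by 1000; a throwaway shell check (illustrative only, not a kubectl feature):

```shell
# millicores to cores: cores = millicores / 1000
for m in 1000 500 100; do
  awk -v m="$m" 'BEGIN { printf "%dm = %g core(s)\n", m, m / 1000 }'
done
# 1000m = 1 core(s)
# 500m = 0.5 core(s)
# 100m = 0.1 core(s)
```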
Memory (measured in bytes):
| Value | Meaning |
|---|---|
| 128Mi | 128 mebibytes (128 × 1024² bytes) |
| 1Gi | 1 gibibyte |
| 256M | 256 megabytes (256 × 1000² bytes) |
Gotcha: Mi (mebibyte) ≠ M (megabyte). 128Mi = 134,217,728 bytes; 128M = 128,000,000 bytes. Use Mi for consistency.
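You can verify the two unit systems with shell arithmetic:

```shell
# Mi is binary (1024-based); M is decimal (1000-based)
echo "128Mi = $((128 * 1024 * 1024)) bytes"   # 134217728
echo "128M  = $((128 * 1000 * 1000)) bytes"   # 128000000
```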
Part 2: How Requests Affect Scheduling
2.1 Scheduling Decision
The scheduler places pods on nodes with sufficient allocatable resources:
```bash
# Check node allocatable resources
kubectl describe node <node-name> | grep -A6 "Allocatable"
# Allocatable:
#   cpu:     2
#   memory:  4Gi
#   pods:    110
```

```text
Scheduling Decision

Node Capacity:      4Gi memory
Already Requested:  3Gi
Available:          1Gi

Pod A requests 2Gi memory
  → Cannot schedule (2Gi > 1Gi available)
  → Pod stays Pending

Pod B requests 500Mi memory
  → Can schedule (500Mi < 1Gi available)
  → Pod placed on node
```

2.2 Pending Pods Due to Resources
```bash
# Create pod with huge request
kubectl run big-pod --image=nginx --requests="memory=100Gi"
```
```bash
# Check status
kubectl get pod big-pod
# NAME      READY   STATUS    RESTARTS   AGE
# big-pod   0/1     Pending   0          10s
```
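Why big-pod stays Pending comes down to a subtraction and a comparison. A shell sketch with illustrative numbers (taken from the 2.1 diagram, not from a live cluster):

```shell
# Scheduler arithmetic (illustrative numbers)
allocatable_mi=4096        # node allocatable memory (4Gi)
requested_mi=3072          # sum of requests from pods already on the node (3Gi)
pod_request_mi=102400      # big-pod asks for 100Gi

available_mi=$((allocatable_mi - requested_mi))
if [ "$pod_request_mi" -gt "$available_mi" ]; then
  echo "FailedScheduling: Insufficient memory (${pod_request_mi}Mi > ${available_mi}Mi)"
fi
```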
```bash
# Check why
kubectl describe pod big-pod | grep -A5 "Events"
# Warning  FailedScheduling  Insufficient memory
```

2.3 Resource Pressure
```bash
# Check node resource pressure
kubectl describe node <node-name> | grep -A10 "Conditions"
# MemoryPressure   False   KubeletHasSufficientMemory
# DiskPressure     False   KubeletHasNoDiskPressure
# PIDPressure      False   KubeletHasSufficientPID
```

Part 3: How Limits Are Enforced
Pause and predict: Two containers are running on the same node. Container A exceeds its CPU limit. Container B exceeds its memory limit. One of them gets killed; the other just slows down. Which is which, and why does Kubernetes treat CPU and memory differently?
3.1 CPU Limits (Throttling)
When a container exceeds CPU limits:
- CPU usage is throttled
- Process slows down but continues
- No container termination
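Under the hood, the container runtime enforces CPU limits via the kernel's CFS bandwidth controller: the limit becomes a quota of CPU time per scheduling period. Assuming the default 100ms period, the arithmetic looks like:

```shell
# CFS bandwidth math: cpu limit -> time quota per period
period_us=100000        # default CFS period: 100ms
limit_millicores=500    # cpu limit of 500m

quota_us=$((period_us * limit_millicores / 1000))
echo "At most ${quota_us}us of CPU time per ${period_us}us period"   # 50000us per 100000us
```

A container that wants more CPU than its quota simply waits until the next period, which is why exceeding the CPU limit slows the process down instead of killing it.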
```bash
# A container trying to use 2 CPUs with a 500m limit
# gets throttled to 500m worth of CPU time
```

3.2 Memory Limits (OOMKilled)
When a container exceeds memory limits:
- Container is killed (OOMKilled)
- Pod may restart based on restartPolicy
- You see OOMKilled in pod status
```bash
# Check for OOMKilled
kubectl describe pod <pod-name> | grep -A5 "Last State"
# Last State:  Terminated
#   Reason:    OOMKilled
#   Exit Code: 137
```
```bash
# Check events
kubectl get events --field-selector reason=OOMKilling
```
```yaml
# Pod that will be OOMKilled
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog
spec:
  containers:
  - name: memory-hog
    image: polinux/stress
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]
    resources:
      limits:
        memory: "100Mi"  # Limit is less than the 200M stress allocates
```

War Story: The Silent Memory Leak
A team’s pod kept restarting randomly. No application errors in logs. They finally checked kubectl describe pod and saw OOMKilled. The app had a memory leak that slowly consumed memory until it hit the limit. Without the limit, it would have crashed the entire node. Limits saved the cluster, and the describe output revealed the problem.
Part 4: QoS Classes
Pause and predict: A pod has requests: {cpu: 100m, memory: 128Mi} and limits: {memory: 256Mi} (no CPU limit). What QoS class will it get — Guaranteed, Burstable, or BestEffort? What happens if it tries to use 300Mi of memory?
4.1 The Three QoS Classes
Kubernetes assigns QoS classes based on resource configuration:
| QoS Class | Condition | Eviction Priority |
|---|---|---|
| Guaranteed | requests = limits (CPU and memory) for all containers | Evicted last |
| Burstable | At least one request or limit set, but not Guaranteed | Middle |
| BestEffort | No requests or limits | Evicted first |
4.2 Guaranteed
All containers must have requests = limits for both CPU and memory:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"  # Same as request
        cpu: "100m"      # Same as request
```

```bash
# Check QoS class
kubectl get pod guaranteed-pod -o jsonpath='{.status.qosClass}'
# Guaranteed
```

4.3 Burstable
At least one request or limit, but not Guaranteed:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"
      limits:
        memory: "256Mi"  # Different from request
```

4.4 BestEffort
No resource specifications:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
  - name: app
    image: nginx
    # No resources section
```

4.5 QoS and Eviction
When a node runs low on resources, kubelet evicts pods:
Eviction Order (first to last):
1. BestEffort pods
2. Burstable pods exceeding their requests
3. Burstable pods below their requests
4. Guaranteed pods (last resort)

Did You Know?
If you only set limits, Kubernetes automatically sets requests equal to those limits. So a container with limits: {cpu: 100m, memory: 128Mi} and no requests is Guaranteed, not Burstable. Note that setting a limit for only one resource (e.g. memory alone) still yields Burstable; Guaranteed requires equal requests and limits for both CPU and memory.
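A minimal sketch of that defaulting rule: limits set for both CPU and memory, no requests, yields a Guaranteed pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-only
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:            # requests are defaulted to these values
        cpu: "100m"
        memory: "128Mi"
```

Checking `kubectl get pod limits-only -o jsonpath='{.status.qosClass}'` should report Guaranteed, since the defaulted requests equal the limits for both resources.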
Part 5: LimitRanges
5.1 What Is a LimitRange?
LimitRange sets default/min/max resource constraints per namespace:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-limits
  namespace: development
spec:
  limits:
  - type: Container
    default:           # Default limits if not specified
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:    # Default requests if not specified
      cpu: "100m"
      memory: "128Mi"
    min:               # Minimum allowed
      cpu: "50m"
      memory: "64Mi"
    max:               # Maximum allowed
      cpu: "1"
      memory: "1Gi"
```

5.2 LimitRange Effects
```bash
# Apply LimitRange to namespace
kubectl apply -f limitrange.yaml
```
```bash
# Now create a pod without resources
kubectl run test --image=nginx -n development

# Check - default resources were applied!
kubectl get pod test -n development -o yaml | grep -A10 resources
```

5.3 LimitRange Types
| Type | Applies To |
|---|---|
| Container | Individual containers |
| Pod | Sum of all containers in pod |
| PersistentVolumeClaim | PVC storage requests |
Part 6: ResourceQuotas
Stop and think: You create a ResourceQuota in a namespace with pods: 10 and requests.cpu: 4. A developer tries to create a pod without specifying any resource requests. Will it succeed? Why or why not?
6.1 What Is a ResourceQuota?
ResourceQuota limits total resources consumed in a namespace:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"             # Total CPU requests
    requests.memory: "8Gi"        # Total memory requests
    limits.cpu: "8"               # Total CPU limits
    limits.memory: "16Gi"         # Total memory limits
    pods: "10"                    # Total number of pods
    persistentvolumeclaims: "5"   # Total PVCs
```

6.2 Checking Quota Usage
```bash
# View quota
kubectl get resourcequota -n development
```
```bash
# Detailed view
kubectl describe resourcequota compute-quota -n development
# Name:          compute-quota
# Resource       Used   Hard
# --------       ----   ----
# limits.cpu     2      8
# limits.memory  4Gi    16Gi
# pods           5      10
```

6.3 Quota Enforcement
```bash
# If quota exceeded
kubectl run new-pod --image=nginx -n development
# Error: exceeded quota: compute-quota, requested: pods=1, used: pods=10, limited: pods=10
```

Exam Tip
If a ResourceQuota covering compute resources (CPU/memory) is set in a namespace, pods MUST specify the corresponding resource requests/limits (or have LimitRange defaults). Otherwise, pod creation fails.
Part 7: Practical Resource Configuration
7.1 Choosing Values
1. Profile your application: run it locally or in a test environment to measure actual usage.
2. Set requests slightly above average usage: this ensures the pod gets scheduled with realistic capacity.
3. Set limits to handle bursts: allow headroom for spikes while protecting the node.

7.2 Common Patterns
| Application Type | Request | Limit | Ratio |
|---|---|---|---|
| Web server | 100m CPU, 128Mi | 500m CPU, 512Mi | 1:5, 1:4 |
| Background worker | 200m CPU, 256Mi | 1 CPU, 1Gi | 1:5, 1:4 |
| Database | 500m CPU, 1Gi | 2 CPU, 4Gi | 1:4, 1:4 |
| Cache | 100m CPU, 512Mi | 200m CPU, 1Gi | 1:2, 1:2 |
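As an illustration, the web-server row above translates into a resources block like this (the values are the table's generic suggestions, not measurements from any particular app):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        cpu: "100m"      # scheduling guarantee
        memory: "128Mi"
      limits:
        cpu: "500m"      # 1:5 burst headroom
        memory: "512Mi"  # 1:4 burst headroom
```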
7.3 Commands for Resource Setting
```bash
# Create with resources
kubectl run nginx --image=nginx \
  --requests="cpu=100m,memory=128Mi" \
  --limits="cpu=500m,memory=256Mi"
```
```bash
# Update existing deployment
kubectl set resources deployment/nginx \
  -c nginx \
  --requests="cpu=100m,memory=128Mi" \
  --limits="cpu=500m,memory=256Mi"
```
```bash
# Check resource usage (requires metrics-server)
kubectl top pods
kubectl top nodes
```

Part 8: Monitoring Resources
Section titled “Part 8: Monitoring Resources”8.1 kubectl top (Requires metrics-server)
```bash
# Check node resource usage
kubectl top nodes
# NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# node1   250m         12%    1200Mi          60%
```
```bash
# Check pod resource usage
kubectl top pods
kubectl top pods -n kube-system
kubectl top pod nginx --containers
```

8.2 Describe Commands
```bash
# Node capacity and allocatable
kubectl describe node <node-name> | grep -A10 "Capacity"
kubectl describe node <node-name> | grep -A10 "Allocatable"
```
```bash
# Node resource usage summary
kubectl describe node <node-name> | grep -A10 "Allocated resources"
```

Part 9: In-Place Pod Resource Resize (K8s 1.35+)
Starting with Kubernetes 1.35, you can resize CPU and memory on running pods without restarting them. This graduated to GA after 3 years of development.
9.1 Resize a Running Pod
```bash
# Check current resources
kubectl get pod nginx -o jsonpath='{.spec.containers[0].resources}'
```
```bash
# Resize CPU and memory without restart
kubectl patch pod nginx --subresource resize --patch '{
  "spec": {
    "containers": [{
      "name": "nginx",
      "resources": {
        "requests": {"cpu": "200m", "memory": "256Mi"},
        "limits":   {"cpu": "500m", "memory": "512Mi"}
      }
    }]
  }
}'
```
```bash
# Verify the resize was applied
kubectl get pod nginx -o jsonpath='{.status.resize}'
# Expected: "" (empty means resize completed)
# If "InProgress": resize is being applied
# If "Infeasible": node doesn't have enough resources
```

9.2 Resize Policy
Containers can specify a resizePolicy to control whether resizes require a restart:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # CPU changes apply live
    - resourceName: memory
      restartPolicy: RestartContainer  # Memory changes restart the container
```

When to Use In-Place Resize
- Vertical scaling without downtime: Scale up during traffic spikes, scale down after
- Right-sizing: Adjust resources based on observed usage without redeploying
- Cost optimization: Reduce over-provisioned resources on running workloads
For automated resizing, use the Vertical Pod Autoscaler (VPA) which can now leverage in-place resize.
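A minimal VPA object is sketched below; this assumes the VPA components are installed in your cluster, and the set of supported update modes depends on your VPA version:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:             # which workload VPA should manage
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  updatePolicy:
    updateMode: "Auto"   # let VPA apply its recommendations
```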
Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| No requests set | Scheduling ignores resource needs | Always set requests |
| Limits too low | Frequent OOMKills | Profile app and set appropriate limits |
| Requests = Limits (always) | No burst capacity | Allow buffer between request and limit |
| Using M instead of Mi | Slightly off values | Use Mi and Gi consistently |
| No LimitRange in shared namespaces | Runaway pods | Set namespace defaults |
- A developer’s pod keeps restarting with exit code 137. They’ve checked the application logs but see no errors — the process just stops. kubectl describe pod shows Last State: Terminated, Reason: OOMKilled. The container’s memory limit is 256Mi. What is happening, and what are two ways to fix it?

Answer
Exit code 137 means the process was killed by SIGKILL, and the OOMKilled reason confirms the container exceeded its 256Mi memory limit. The Linux kernel's OOM killer terminated the process because the container's memory usage surpassed what cgroups allow. There are no application-level error logs because the kill happens at the OS level, not within the application. Two fixes: (1) increase the memory limit to accommodate actual usage (profile the app first with `kubectl top pod`), or (2) fix the memory leak in the application if usage grows unboundedly. A third option on Kubernetes 1.35+ is in-place pod resize to increase the limit without restarting.
- Your team’s Node.js application responds slowly during peak hours but kubectl top pod shows CPU usage at only 50m while the limit is 200m. However, the developer insists the app is CPU-bound. How can the CPU be throttled when usage appears to be well below the limit?

Answer
CPU throttling can occur even when average usage appears low. `kubectl top` shows average CPU over a measurement window, but throttling happens per 100ms CFS time slice. The app might burst to 200m+ for brief moments (handling a request) and get throttled during those spikes, even though the average over the sampling period looks like 50m. This is a well-known issue with CPU limits: they penalize bursty workloads. Solutions: increase the CPU limit to allow higher bursts, remove the CPU limit entirely (some teams do this, relying only on requests for scheduling), or investigate whether the app is single-threaded and bottlenecking on one core.
- You have three pods on the same node: Pod A (Guaranteed, using exactly its 512Mi request), Pod B (Burstable, using 800Mi against a 256Mi request), and Pod C (BestEffort, using 200Mi). The node enters memory pressure. In what order will the kubelet evict these pods, and why?
Answer
The kubelet evicts in QoS order: BestEffort first, then Burstable pods exceeding their requests, then Guaranteed pods. So Pod C (BestEffort, 200Mi, no guarantees) is evicted first. If pressure persists, Pod B (Burstable, using 800Mi against a 256Mi request, 3x over its reservation) is evicted next. Pod A (Guaranteed, using exactly its request) is evicted last and only if the node is still critically low after evicting the other two. This ordering exists because BestEffort pods made no resource commitment, and Burstable pods exceeding their requests are "borrowing" capacity they didn't reserve.
- A new team joins your cluster and starts deploying pods without resource requests, consuming all available node resources. Other teams’ pods start getting evicted. Design a namespace-level governance strategy using LimitRange and ResourceQuota to prevent this from happening again.
Answer
Create both a LimitRange and ResourceQuota in the team's namespace. The LimitRange sets default requests/limits so pods without explicit resources still get sensible values (e.g., `defaultRequest: {cpu: 100m, memory: 128Mi}`, `default: {cpu: 500m, memory: 256Mi}`). Set `min` and `max` to prevent absurdly large or tiny pods. The ResourceQuota caps the total namespace consumption (e.g., `requests.cpu: 4`, `requests.memory: 8Gi`, `pods: 20`). With both in place, pods without resource specs get defaults from LimitRange, and total consumption is bounded by ResourceQuota. Note: when a ResourceQuota with compute constraints exists, all pods MUST have resource requests; the LimitRange defaults ensure this requirement is met automatically.
Hands-On Exercise
Task: Configure resources, test limits, understand QoS.
Steps:
- Create pod with resources:
```bash
kubectl run resource-test --image=nginx \
  --requests="cpu=100m,memory=128Mi" \
  --limits="cpu=200m,memory=256Mi"
```
```bash
kubectl get pod resource-test -o jsonpath='{.status.qosClass}'
# Burstable (because requests ≠ limits)
```

- Create Guaranteed pod:
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "100m"
EOF
```
```bash
kubectl get pod guaranteed -o jsonpath='{.status.qosClass}'
# Guaranteed
```

- Create BestEffort pod:
```bash
kubectl run besteffort --image=nginx
kubectl get pod besteffort -o jsonpath='{.status.qosClass}'
# BestEffort
```

- Create LimitRange:
```bash
kubectl create namespace limits-test
```
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: limits-test
spec:
  limits:
  - type: Container
    default:
      cpu: "200m"
      memory: "128Mi"
    defaultRequest:
      cpu: "100m"
      memory: "64Mi"
EOF
```
```bash
# Create pod without resources
kubectl run test-defaults --image=nginx -n limits-test

# Check - defaults applied!
kubectl get pod test-defaults -n limits-test -o yaml | grep -A8 resources
```

- Test ResourceQuota:
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: limits-test
spec:
  hard:
    pods: "2"
EOF
```
```bash
# Create pods until quota exceeded
# Note: test-defaults already counts toward the 2-pod quota
kubectl run pod1 --image=nginx -n limits-test
kubectl run pod2 --image=nginx -n limits-test  # May already fail
kubectl run pod3 --image=nginx -n limits-test  # Should fail
```
```bash
kubectl describe resourcequota pod-quota -n limits-test
```

- Cleanup:
```bash
kubectl delete pod resource-test guaranteed besteffort
kubectl delete namespace limits-test
```

Success Criteria:
- Can set requests and limits
- Understand difference between CPU and memory enforcement
- Can identify QoS classes
- Can create LimitRanges
- Can create ResourceQuotas
Practice Drills
Drill 1: Resource Creation (Target: 2 minutes)
```bash
# Create pod with resources
kubectl run web --image=nginx \
  --requests="cpu=100m,memory=128Mi" \
  --limits="cpu=500m,memory=512Mi"
```
```bash
# Verify
kubectl get pod web -o jsonpath='{.spec.containers[0].resources}'

# Check QoS
kubectl get pod web -o jsonpath='{.status.qosClass}'

# Cleanup
kubectl delete pod web
```

Drill 2: QoS Class Identification (Target: 3 minutes)
```bash
# Create three pods with different QoS classes

# Guaranteed
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
      limits:
        cpu: "100m"
        memory: "100Mi"
EOF
```
```bash
# Burstable
kubectl run qos-burstable --image=nginx --requests="cpu=100m"

# BestEffort
kubectl run qos-besteffort --image=nginx

# Check all QoS classes
for pod in qos-guaranteed qos-burstable qos-besteffort; do
  echo "$pod: $(kubectl get pod $pod -o jsonpath='{.status.qosClass}')"
done
```
```bash
# Cleanup
kubectl delete pod qos-guaranteed qos-burstable qos-besteffort
```

Drill 3: LimitRange (Target: 5 minutes)
```bash
kubectl create namespace lr-test
```
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit
  namespace: lr-test
spec:
  limits:
  - type: Container
    default:
      memory: "256Mi"
    defaultRequest:
      memory: "128Mi"
    min:
      memory: "64Mi"
    max:
      memory: "1Gi"
EOF
```
```bash
# Test default application
kubectl run default-test --image=nginx -n lr-test
kubectl get pod default-test -n lr-test -o jsonpath='{.spec.containers[0].resources}'
```
```bash
# Test exceeding max
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: too-big
  namespace: lr-test
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: "2Gi"
EOF
# Should fail: exceeds max
```
```bash
# Cleanup
kubectl delete namespace lr-test
```

Drill 4: ResourceQuota (Target: 5 minutes)
```bash
kubectl create namespace quota-test
```
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: quota-test
spec:
  hard:
    requests.cpu: "1"
    requests.memory: "1Gi"
    limits.cpu: "2"
    limits.memory: "2Gi"
    pods: "3"
EOF
```
```bash
# Check quota
kubectl describe resourcequota compute-quota -n quota-test
```
```bash
# Create pods (need requests AND limits because the quota covers both)
kubectl run pod1 --image=nginx -n quota-test --requests="cpu=200m,memory=256Mi" --limits="cpu=400m,memory=512Mi"
kubectl run pod2 --image=nginx -n quota-test --requests="cpu=200m,memory=256Mi" --limits="cpu=400m,memory=512Mi"
kubectl run pod3 --image=nginx -n quota-test --requests="cpu=200m,memory=256Mi" --limits="cpu=400m,memory=512Mi"
```
```bash
# Try to exceed
kubectl run pod4 --image=nginx -n quota-test --requests="cpu=200m,memory=256Mi" --limits="cpu=400m,memory=512Mi"
# Should fail: quota exceeded (pods: 3)
```
```bash
# Check quota usage
kubectl describe resourcequota compute-quota -n quota-test
```
```bash
# Cleanup
kubectl delete namespace quota-test
```

Drill 5: Resource Troubleshooting (Target: 5 minutes)
```bash
# Create pod with insufficient resources
kubectl run pending-pod --image=nginx --requests="cpu=100,memory=100Gi"
```
```bash
# Check why it's pending
kubectl get pod pending-pod
kubectl describe pod pending-pod | grep -A5 "Events"
```
```bash
# Fix by reducing requests
kubectl delete pod pending-pod
kubectl run pending-pod --image=nginx --requests="cpu=100m,memory=128Mi"
```
```bash
# Verify running
kubectl get pod pending-pod
```
```bash
# Cleanup
kubectl delete pod pending-pod
```

Drill 6: Update Resources (Target: 3 minutes)
```bash
# Create deployment
kubectl create deployment resource-update --image=nginx --replicas=2
```
```bash
# Add resources
kubectl set resources deployment/resource-update \
  --requests="cpu=100m,memory=128Mi" \
  --limits="cpu=200m,memory=256Mi"
```
```bash
# Verify (pods will restart)
kubectl get pods -l app=resource-update -w &
sleep 10
kill %1 2>/dev/null
```
```bash
kubectl describe deployment resource-update | grep -A10 "Resources"
```
```bash
# Cleanup
kubectl delete deployment resource-update
```

Drill 7: Challenge - Complete Resource Setup
Create a namespace with:
- LimitRange: default 200m CPU, 256Mi memory; max 1 CPU, 1Gi memory
- ResourceQuota: max 4 pods, 2 CPU total requests, 4Gi total memory requests
- Deploy a 2-replica deployment with appropriate resources
```bash
# YOUR TASK: Complete this setup
```

Solution
```bash
kubectl create namespace challenge
```
```bash
# LimitRange
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: challenge
spec:
  limits:
  - type: Container
    default:
      cpu: "200m"
      memory: "256Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    max:
      cpu: "1"
      memory: "1Gi"
EOF
```
```bash
# ResourceQuota
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
  namespace: challenge
spec:
  hard:
    pods: "4"
    requests.cpu: "2"
    requests.memory: "4Gi"
EOF
```
```bash
# Deployment
kubectl create deployment app --image=nginx --replicas=2 -n challenge
```
```bash
# Verify
kubectl get all -n challenge
kubectl describe resourcequota quota -n challenge
```
```bash
# Cleanup
kubectl delete namespace challenge
```

Next Module
Module 2.6: Scheduling - Node selection, affinity, taints, and tolerations.