Module 1.4: Kustomize - Template-Free Configuration
Complexity: [MEDIUM] - Essential exam skill for 2025
Time to Complete: 35-45 minutes
Prerequisites: Module 0.1 (working cluster), basic YAML knowledge
What You’ll Be Able to Do
After this module, you will be able to:
- Build Kustomize overlays for multi-environment deployments (dev, staging, production)
- Apply patches, name prefixes, labels, and resource transformations without modifying base manifests
- Compare Kustomize vs Helm and choose the right tool for different scenarios
- Debug Kustomize output by rendering manifests with `kubectl kustomize` before applying
Why This Module Matters
Kustomize is new to the CKA 2025 curriculum. You will be tested on it.
Kustomize solves a common problem: you have the same application deployed to dev, staging, and production, but each environment needs slightly different configuration—different replicas, different resource limits, different image tags.
Without Kustomize, you’d either:
- Maintain separate YAML files for each environment (duplication nightmare)
- Use templates with placeholders (adds complexity)
Kustomize takes a different approach: overlay and patch. Start with a base, layer environment-specific changes on top. No templating. Pure YAML. Built into kubectl.
The Transparent Film Analogy
Think of Kustomize like transparent film overlays on a projector. Your base slide shows the application structure. For production, you overlay a film that adds “replicas: 10”. For dev, you overlay a film that changes the image tag. Each overlay modifies the base without duplicating it. Stack as many overlays as you need.
What You’ll Learn
By the end of this module, you’ll be able to:
- Create Kustomize bases and overlays
- Patch resources without modifying originals
- Use common transformations (labels, namespaces, prefixes)
- Generate ConfigMaps and Secrets from files
- Apply Kustomize configurations with kubectl
Part 1: Kustomize Concepts
1.1 Core Terminology
| Term | Definition |
|---|---|
| Base | Original, reusable resource definitions |
| Overlay | Environment-specific customizations |
| Patch | Partial YAML that modifies a resource |
| kustomization.yaml | Manifest that defines what to include and transform |
1.2 Directory Structure
```
myapp/
├── base/                    # Shared, reusable definitions
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
│
└── overlays/                # Environment-specific
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patch-replicas.yaml
    │
    ├── staging/
    │   ├── kustomization.yaml
    │   └── patch-resources.yaml
    │
    └── production/
        ├── kustomization.yaml
        ├── patch-replicas.yaml
        └── patch-resources.yaml
```
1.3 How Kustomize Works
```
┌────────────────────────────────────────────────────────────────┐
│                        Kustomize Flow                          │
│                                                                │
│   Base Resources              Overlay Patches                  │
│   ┌─────────────────┐         ┌─────────────────┐              │
│   │ deployment.yaml │         │ patch-prod.yaml │              │
│   │ replicas: 1     │    +    │ replicas: 10    │              │
│   │ image: v1       │         │ image: v2       │              │
│   └─────────────────┘         └─────────────────┘              │
│            │                           │                       │
│            └──────────────┬────────────┘                       │
│                           │                                    │
│                           ▼                                    │
│                    ┌─────────────┐                             │
│                    │  Kustomize  │                             │
│                    │   (merge)   │                             │
│                    └──────┬──────┘                             │
│                           │                                    │
│                           ▼                                    │
│                      Final Output                              │
│                    ┌─────────────────┐                         │
│                    │ deployment.yaml │                         │
│                    │ replicas: 10    │                         │
│                    │ image: v2       │                         │
│                    └─────────────────┘                         │
│                                                                │
└────────────────────────────────────────────────────────────────┘
```
Did You Know?
Kustomize has been built into kubectl since v1.14. You don’t need to install anything extra—just use `kubectl apply -k` or `kubectl kustomize`. This is why it’s a CKA exam favorite: it works out of the box.
Part 2: Creating a Base
2.1 The kustomization.yaml File
Every Kustomize directory needs a kustomization.yaml:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
```
2.2 Base Resources
```yaml
# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
```

```yaml
# base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
```
2.3 Preview Base Output
```bash
# See what the base produces
kubectl kustomize base/

# Or using kustomize directly
kustomize build base/
```
Part 3: Creating Overlays
Pause and predict: You have a base Deployment with `replicas: 1` and two overlays — dev and prod. If you apply the dev overlay, does the base file change? What happens if another team member applies the prod overlay at the same time from their machine?
3.1 Simple Overlay
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base            # Reference the base

namePrefix: dev-          # Prefix all resource names
namespace: development    # Put everything in this namespace

commonLabels:
  environment: dev        # Add this label to all resources
```
3.2 Production Overlay with Patches
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namePrefix: prod-
namespace: production

commonLabels:
  environment: production

patches:
  - path: patch-replicas.yaml
  - path: patch-resources.yaml
```

```yaml
# patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp     # Must match the base resource name
spec:
  replicas: 10    # Override replicas
```

```yaml
# patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
```
3.3 Preview and Apply Overlays
```bash
# Preview production overlay
kubectl kustomize overlays/production/

# Apply to cluster
kubectl apply -k overlays/production/

# Apply dev overlay
kubectl apply -k overlays/dev/
```
Part 4: Common Transformers
4.1 namePrefix and nameSuffix
```yaml
namePrefix: prod-
nameSuffix: -v2

# Result: deployment "myapp" becomes "prod-myapp-v2"
```
4.2 namespace
```yaml
namespace: production

# All resources get namespace: production
```
4.3 commonLabels
```yaml
commonLabels:
  app.kubernetes.io/name: myapp
  app.kubernetes.io/env: production

# Added to ALL resources (metadata.labels AND selector)
```
4.4 commonAnnotations
```yaml
commonAnnotations:
  team: platform
  oncall: platform@example.com

# Added to all resources' metadata.annotations
```
4.5 images
Change image names/tags without patching:
```yaml
images:
  - name: nginx                 # Original image name
    newName: my-registry/nginx
    newTag: "2.0"

# Changes all nginx images to my-registry/nginx:2.0
```
Part 5: Patching Strategies
5.1 Strategic Merge Patch (Default)
Merges your patch with the base:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: sidecar     # Added to existing containers
          image: busybox
          command: ["sleep", "infinity"]
```
What would happen if: Your strategic merge patch references a container name `myapp` but the base Deployment has a container named `app`. Will the patch fail, silently add a new container, or do something else?
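A hedged sketch of the likely outcome, written in Python rather than Kustomize's actual Go code: Kubernetes strategic merge treats container lists as maps keyed on each entry's `name`, so a patch whose container name has no match is appended as a new container instead of failing.

```python
# Illustrative model of strategic-merge-patch semantics for container
# lists -- NOT Kustomize's real implementation (that lives in Go).
# Kubernetes merges the containers list keyed on each entry's "name".

def merge_containers(base, patch):
    """Merge patch containers into base containers, keyed by 'name'."""
    merged = {c["name"]: dict(c) for c in base}
    for entry in patch:
        if entry["name"] in merged:
            # Matching name: patch fields override the base container.
            merged[entry["name"]].update(entry)
        else:
            # No matching name: the entry is appended as a NEW container.
            merged[entry["name"]] = dict(entry)
    return list(merged.values())

base = [{"name": "app", "image": "nginx:1.25"}]
patch = [{"name": "myapp", "image": "nginx:1.26"}]  # name doesn't match!

result = merge_containers(base, patch)
print([c["name"] for c in result])  # ['app', 'myapp']
```

So the patch neither fails nor updates the existing container: the rendered Deployment silently gains a second container, which is why a container-name mismatch is such a common source of confusing `kubectl kustomize` output.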
5.2 JSON 6902 Patch
More precise control using JSON Patch syntax:
```yaml
patches:
  - target:
      kind: Deployment
      name: myapp
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
      - op: add
        path: /metadata/annotations/patched
        value: "true"
```
5.3 Patch Targeting
Target specific resources:
```yaml
patches:
  - path: patch-replicas.yaml
    target:
      kind: Deployment
      name: myapp
```
Target by label:
```yaml
patches:
  - path: patch-memory.yaml
    target:
      kind: Deployment
      labelSelector: "tier=frontend"
```
Part 6: Generators
6.1 ConfigMap Generator
Generate ConfigMaps from files or literals:
```yaml
configMapGenerator:
  - name: app-config
    literals:
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
    files:
      - config.properties

# Creates ConfigMap with hashed name suffix
# e.g., app-config-8h2k9d
```
6.2 Secret Generator
```yaml
secretGenerator:
  - name: db-credentials
    literals:
      - username=admin
      - password=secret123
    type: Opaque

# Creates Secret with hashed name suffix
```
Stop and think: If you update a ConfigMap that’s already mounted in running pods, the pods won’t automatically restart to pick up changes. How does Kustomize’s ConfigMap generator solve this problem without requiring a manual pod restart?
6.3 Why Hashed Names?
```
app-config-8h2k9d
           ^^^^^^ content hash
```
When ConfigMap content changes, the hash changes, which changes the name. Because Kustomize also rewrites every reference to the ConfigMap (such as the one in a Deployment's pod template), the renamed ConfigMap changes the pod template and triggers a rolling update of the pods that use it.
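The mechanism can be sketched in a few lines of Python. This is purely illustrative: Kustomize's real hash is computed differently (in Go, over the serialized ConfigMap object); SHA-256 here just demonstrates the principle of a content-derived suffix.

```python
# Illustrative sketch of why a content-derived suffix forces a rollout.
# NOT Kustomize's actual algorithm -- just the same idea.
import hashlib

def hashed_name(name: str, data: dict) -> str:
    """Derive a short, deterministic suffix from the ConfigMap's content."""
    serialized = "".join(f"{k}={v};" for k, v in sorted(data.items()))
    suffix = hashlib.sha256(serialized.encode()).hexdigest()[:6]
    return f"{name}-{suffix}"

old = hashed_name("app-config", {"LOG_LEVEL": "info"})
new = hashed_name("app-config", {"LOG_LEVEL": "debug"})

# Different content -> different rendered name. Every manifest that
# references the ConfigMap is rewritten to the new name, so pod
# templates change and Deployments roll automatically.
print(old != new)  # True
```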
6.4 Disabling Hash Suffixes
```yaml
configMapGenerator:
  - name: app-config
    literals:
      - KEY=value

generatorOptions:
  disableNameSuffixHash: true
```
Part 7: Real-World Example
7.1 Full Directory Structure
```
webapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── config/
│       └── nginx.conf
│
└── overlays/
    ├── dev/
    │   └── kustomization.yaml
    └── prod/
        ├── kustomization.yaml
        ├── patch-replicas.yaml
        └── secrets/
            └── db-password.txt
```
7.2 Base kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml

configMapGenerator:
  - name: nginx-config
    files:
      - config/nginx.conf
```
7.3 Production Overlay
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namespace: production
namePrefix: prod-

commonLabels:
  environment: production

images:
  - name: nginx
    newTag: "1.25-alpine"

patches:
  - path: patch-replicas.yaml

secretGenerator:
  - name: db-credentials
    files:
      - password=secrets/db-password.txt
```
Part 8: kubectl Integration
8.1 Essential Commands
```bash
# Preview kustomization output
kubectl kustomize <directory>

# Apply kustomization to cluster
kubectl apply -k <directory>

# Delete resources from kustomization
kubectl delete -k <directory>

# Diff against current cluster state
kubectl diff -k <directory>
```
8.2 Exam-Ready Commands
```bash
# Quick apply for exam
kubectl apply -k overlays/production/

# Verify what was created
kubectl get all -n production

# If you need to debug
kubectl kustomize overlays/production/ | kubectl apply --dry-run=client -f -
```
Part 9: Kustomize vs Helm
| Aspect | Kustomize | Helm |
|---|---|---|
| Approach | Overlay/patch | Template |
| Learning curve | Lower | Higher |
| Pure YAML | Yes | No (Go templates) |
| Package sharing | Directories | Charts |
| Rollback | Not built-in | Built-in |
| Best for | Config variants | Complex apps |
Use Kustomize when: You have your own manifests and need environment variations.
Use Helm when: You’re installing third-party applications or need templating.
Exam Tip
The CKA exam may ask you to use either Helm or Kustomize. Know both. For quick environment customization, Kustomize is faster to set up.
Did You Know?
- Kustomize was a separate tool before being merged into kubectl. You can still install standalone `kustomize` for additional features.
- Argo CD and Flux (GitOps tools) natively understand Kustomize. Your overlay structure becomes your deployment strategy.
- You can combine Helm and Kustomize. Generate manifests from Helm, then customize with Kustomize overlays.
Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Wrong path to base | "resource not found" | Use relative paths like ../../base |
| Forgetting kustomization.yaml | kubectl errors | Every directory needs one |
| Patch name mismatch | Patch not applied | Patch metadata.name must match base |
| Missing namespace | Resources in wrong ns | Add namespace: to overlay |
| commonLabels breaking selectors | Selector mismatch | Test carefully, labels affect selectors |
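To make the last row concrete: `commonLabels` is injected into selectors as well as labels, and a Deployment's `spec.selector` is immutable after creation. A sketch of the rendered output, with resource names borrowed from the earlier examples (the exact rendering is an assumption, not exam-provided output):

```yaml
# Overlay declares:
#   commonLabels:
#     environment: production
#
# Rendered Deployment -- note the selector gained the label too:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-myapp
  labels:
    app: myapp
    environment: production
spec:
  selector:
    matchLabels:
      app: myapp
      environment: production   # immutable once applied!
  template:
    metadata:
      labels:
        app: myapp
        environment: production
```

If you later change `environment` to a different value and re-apply, the API server rejects the selector update; you would have to delete and recreate the Deployment. Choose `commonLabels` values that never change.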
- Your team has the same web application deployed to dev, staging, and production. A new developer copies the base Deployment YAML into three separate files and edits each one. What problem does this create, and how would you restructure it using Kustomize?
Answer
Copying creates a duplication nightmare. When the base Deployment needs a change (new health check, updated security context), you must remember to update all three copies — and inevitably one gets missed, causing environment drift. With Kustomize, you create a single `base/` directory with the shared Deployment, then create `overlays/dev/`, `overlays/staging/`, and `overlays/production/` directories. Each overlay has its own `kustomization.yaml` that references `../../base` and applies only the differences (replica count, image tag, resource limits, namespace). Changes to the base automatically propagate to all environments, and each overlay only contains what's different.
- During the CKA exam, you’re told to deploy an application using Kustomize to the `staging` namespace with a name prefix of `stg-`. You run `kubectl apply -k overlays/staging/` but get an error: “resource not found.” The base directory exists with valid YAML. What’s the most likely cause?
Answer
The most likely cause is a wrong relative path in the overlay's `kustomization.yaml`. The `resources` field must correctly reference the base directory relative to the overlay's location. If your overlay is at `overlays/staging/kustomization.yaml`, the base reference should be `../../base`, not `../base` or `./base`. Run `kubectl kustomize overlays/staging/` to see the error details before applying — this renders the output without applying, making it easier to debug path issues. Also check that the base directory has its own `kustomization.yaml` file listing its resources, and that the overlay's `kustomization.yaml` has the correct `apiVersion` and `kind` fields.
- You update an application’s config file and re-apply your Kustomize overlay. The ConfigMap is updated, but existing pods are still using the old configuration. However, your colleague’s team using the same setup gets automatic pod restarts. What’s different about their Kustomize configuration?
Answer
Your colleague is using `configMapGenerator` in their `kustomization.yaml`, which appends a content-based hash suffix to the ConfigMap name (e.g., `app-config-8h2k9d`). When the config content changes, the hash changes, the ConfigMap name changes, and the Deployment's reference to it changes — triggering a rolling update. You're probably using a static ConfigMap listed under `resources`, which keeps the same name even when content changes. Kubernetes doesn't automatically restart pods when a mounted ConfigMap's content changes in-place. To get automatic restarts, switch to `configMapGenerator`. If you need to keep the static name for other reasons, you can use `generatorOptions: disableNameSuffixHash: true`, but then you lose the auto-restart behavior.
- A production incident requires you to urgently change the image tag from `v2.1` to `v2.0` across all environments. With Helm, you’d run `helm rollback`. What’s the equivalent approach with Kustomize, and what limitation does this reveal?
Answer
Kustomize has no built-in rollback mechanism. You'd need to change the `images` transformer in your overlay's `kustomization.yaml` back to `newTag: "v2.0"` and re-apply with `kubectl apply -k overlays/production/`. Alternatively, if you're using Git (which you should be), you'd `git revert` or `git checkout` the previous commit and re-apply. This reveals a key limitation of Kustomize vs Helm: Kustomize doesn't track release history or versions. It's a rendering engine, not a release manager. The common solution is to pair Kustomize with a GitOps tool like Argo CD or Flux, which tracks Git history as the release history and can revert by syncing to a previous commit.
Hands-On Exercise
Task: Create a Kustomize structure for a web application with dev and prod overlays.
Steps:
- Create directory structure:
```bash
mkdir -p webapp/base webapp/overlays/dev webapp/overlays/prod
```
- Create base deployment:
```bash
cat > webapp/base/deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:1.25
          ports:
            - containerPort: 80
EOF
```
- Create base service:
```bash
cat > webapp/base/service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
    - port: 80
EOF
```
- Create base kustomization:
```bash
cat > webapp/base/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
EOF
```
- Create dev overlay:
```bash
cat > webapp/overlays/dev/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namePrefix: dev-
namespace: development
commonLabels:
  environment: dev
EOF
```
- Create prod overlay with patch:
```bash
cat > webapp/overlays/prod/patch-replicas.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 5
EOF

cat > webapp/overlays/prod/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namePrefix: prod-
namespace: production
commonLabels:
  environment: production
images:
  - name: nginx
    newTag: "1.25-alpine"
patches:
  - path: patch-replicas.yaml
EOF
```
- Preview and compare:
```bash
echo "=== DEV ===" && kubectl kustomize webapp/overlays/dev/
echo "=== PROD ===" && kubectl kustomize webapp/overlays/prod/
```
- Apply dev overlay:
```bash
kubectl create namespace development
kubectl apply -k webapp/overlays/dev/
kubectl get all -n development
```
- Apply prod overlay:
```bash
kubectl create namespace production
kubectl apply -k webapp/overlays/prod/
kubectl get all -n production
```
Success Criteria:
- Understand base vs overlay structure
- Can create kustomization.yaml files
- Can use namePrefix, namespace, commonLabels
- Can create and apply patches
- Can preview output with `kubectl kustomize`
Cleanup:
```bash
kubectl delete -k webapp/overlays/dev/
kubectl delete -k webapp/overlays/prod/
kubectl delete namespace development production
rm -rf webapp/
```
Practice Drills
Drill 1: Kustomize vs kubectl apply (Target: 2 minutes)
Understand the difference:
```bash
# Create base
mkdir -p drill1/base
cat << 'EOF' > drill1/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
EOF

cat << 'EOF' > drill1/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
EOF

# Preview vs apply
kubectl kustomize drill1/base/   # Just preview
kubectl apply -k drill1/base/    # Actually apply
kubectl get deploy nginx
kubectl delete -k drill1/base/
rm -rf drill1
```
Drill 2: Namespace Transformation (Target: 3 minutes)
```bash
mkdir -p drill2/base drill2/overlays/dev
cat << 'EOF' > drill2/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: nginx
EOF

cat << 'EOF' > drill2/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
EOF

cat << 'EOF' > drill2/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev-namespace
namePrefix: dev-
EOF

# Preview - see the transformations
kubectl kustomize drill2/overlays/dev/

# Apply
kubectl create namespace dev-namespace
kubectl apply -k drill2/overlays/dev/
kubectl get deploy -n dev-namespace   # Shows dev-app

# Cleanup
kubectl delete -k drill2/overlays/dev/
kubectl delete namespace dev-namespace
rm -rf drill2
```
Drill 3: Image Transformation (Target: 3 minutes)
```bash
mkdir -p drill3
cat << 'EOF' > drill3/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19
EOF

cat << 'EOF' > drill3/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: nginx
    newTag: "1.25"
EOF

# Preview - notice image changed to nginx:1.25
kubectl kustomize drill3/

# Apply and verify
kubectl apply -k drill3/
kubectl get deploy web -o jsonpath='{.spec.template.spec.containers[0].image}'
# Output: nginx:1.25

# Cleanup
kubectl delete -k drill3/
rm -rf drill3
```
Drill 4: Troubleshooting - Broken Kustomization (Target: 5 minutes)
```bash
# Create broken kustomization
mkdir -p drill4
cat << 'EOF' > drill4/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # File doesn't exist!
  - service.yaml      # File doesn't exist!
commonLabels:
  app: myapp
EOF

# Try to build - will fail
kubectl kustomize drill4/

# YOUR TASK: Fix by creating the missing files
```
Solution
```bash
cat << 'EOF' > drill4/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: nginx
EOF

cat << 'EOF' > drill4/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
EOF

# Now it works
kubectl kustomize drill4/
rm -rf drill4
```
Drill 5: Strategic Merge Patch (Target: 5 minutes)
```bash
mkdir -p drill5/base drill5/overlay
cat << 'EOF' > drill5/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: nginx
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
EOF

cat << 'EOF' > drill5/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
EOF

# Create patch to increase resources for production
cat << 'EOF' > drill5/overlay/patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
EOF

cat << 'EOF' > drill5/overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: patch-resources.yaml
EOF

# Preview the result
kubectl kustomize drill5/overlay/
rm -rf drill5
```
Drill 6: ConfigMap Generator (Target: 3 minutes)
```bash
mkdir -p drill6
cat << 'EOF' > drill6/app.properties
DATABASE_URL=postgres://localhost:5432/mydb
LOG_LEVEL=info
FEATURE_FLAG=enabled
EOF

cat << 'EOF' > drill6/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: app-config
    files:
      - app.properties
    literals:
      - EXTRA_KEY=extra-value
EOF

# Preview - notice ConfigMap with hash suffix
kubectl kustomize drill6/
rm -rf drill6
```
Drill 7: Challenge - Multi-Environment Setup
Create a complete Kustomize structure for 3 environments without looking at solutions:
Requirements:
- Base: nginx deployment, service
- Dev: 1 replica, namespace `dev`, image `nginx:1.24`
- Staging: 2 replicas, namespace `staging`, image `nginx:1.25`
- Prod: 5 replicas, namespace `production`, image `nginx:1.25`, add resource limits
```bash
mkdir -p challenge/{base,overlays/{dev,staging,prod}}
# YOUR TASK: Create all kustomization.yaml and resource files
```
Solution Structure
```
challenge/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    │   └── kustomization.yaml
    ├── staging/
    │   └── kustomization.yaml
    └── prod/
        ├── kustomization.yaml
        └── patch-resources.yaml
```
Test each: `kubectl kustomize challenge/overlays/dev/`
Next Module
Module 1.5: CRDs & Operators - Extending Kubernetes with Custom Resource Definitions.