
Module 1.4: Kustomize - Template-Free Configuration

Hands-On Lab Available: K8s Cluster · intermediate · 35 min

Complexity: [MEDIUM] - Essential exam skill for 2025

Time to Complete: 35-45 minutes

Prerequisites: Module 0.1 (working cluster), basic YAML knowledge


After this module, you will be able to:

  • Build Kustomize overlays for multi-environment deployments (dev, staging, production)
  • Apply patches, name prefixes, labels, and resource transformations without modifying base manifests
  • Compare Kustomize vs Helm and choose the right tool for different scenarios
  • Debug Kustomize output by rendering manifests with kubectl kustomize before applying

Kustomize is new to the CKA 2025 curriculum. You will be tested on it.

Kustomize solves a common problem: you have the same application deployed to dev, staging, and production, but each environment needs slightly different configuration—different replicas, different resource limits, different image tags.

Without Kustomize, you’d either:

  1. Maintain separate YAML files for each environment (duplication nightmare)
  2. Use templates with placeholders (adds complexity)

Kustomize takes a different approach: overlay and patch. Start with a base, layer environment-specific changes on top. No templating. Pure YAML. Built into kubectl.

The Transparent Film Analogy

Think of Kustomize like transparent film overlays on a projector. Your base slide shows the application structure. For production, you overlay a film that adds “replicas: 10”. For dev, you overlay a film that changes the image tag. Each overlay modifies the base without duplicating it. Stack as many overlays as you need.


By the end of this module, you’ll be able to:

  • Create Kustomize bases and overlays
  • Patch resources without modifying originals
  • Use common transformations (labels, namespaces, prefixes)
  • Generate ConfigMaps and Secrets from files
  • Apply Kustomize configurations with kubectl

Term                 Definition
-------------------  ----------------------------------------------------
Base                 Original, reusable resource definitions
Overlay              Environment-specific customizations
Patch                Partial YAML that modifies a resource
kustomization.yaml   Manifest that defines what to include and transform
myapp/
├── base/                      # Shared, reusable definitions
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/                  # Environment-specific
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patch-replicas.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   └── patch-resources.yaml
    └── production/
        ├── kustomization.yaml
        ├── patch-replicas.yaml
        └── patch-resources.yaml
┌────────────────────────────────────────────────────────┐
│                     Kustomize Flow                     │
│                                                        │
│   Base Resources            Overlay Patches            │
│  ┌─────────────────┐       ┌─────────────────┐         │
│  │ deployment.yaml │       │ patch-prod.yaml │         │
│  │ replicas: 1     │   +   │ replicas: 10    │         │
│  │ image: v1       │       │ image: v2       │         │
│  └─────────────────┘       └─────────────────┘         │
│           │                          │                 │
│           └────────────┬─────────────┘                 │
│                        │                               │
│                        ▼                               │
│                 ┌─────────────┐                        │
│                 │  Kustomize  │                        │
│                 │   (merge)   │                        │
│                 └──────┬──────┘                        │
│                        │                               │
│                        ▼                               │
│                  Final Output                          │
│                ┌─────────────────┐                     │
│                │ deployment.yaml │                     │
│                │ replicas: 10    │                     │
│                │ image: v2       │                     │
│                └─────────────────┘                     │
│                                                        │
└────────────────────────────────────────────────────────┘

Did You Know?

Kustomize has been built into kubectl since v1.14. You don’t need to install anything extra—just use kubectl apply -k or kubectl kustomize. This is why it’s a CKA exam favorite: it works out of the box.


Every Kustomize directory needs a kustomization.yaml:

base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
- configmap.yaml

base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"

base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
Terminal window
# See what the base produces
kubectl kustomize base/
# Or using kustomize directly
kustomize build base/

Pause and predict: You have a base Deployment with replicas: 1 and two overlays — dev and prod. If you apply the dev overlay, does the base file change? What happens if another team member applies the prod overlay at the same time from their machine?

overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base              # Reference the base
namePrefix: dev-          # Prefix all resource names
namespace: development    # Put everything in this namespace
commonLabels:
  environment: dev        # Add this label to all resources

overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namePrefix: prod-
namespace: production
commonLabels:
  environment: production
patches:
- path: patch-replicas.yaml
- path: patch-resources.yaml

overlays/production/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp      # Must match the base resource name
spec:
  replicas: 10     # Override replicas

overlays/production/patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
Terminal window
# Preview production overlay
kubectl kustomize overlays/production/
# Apply to cluster
kubectl apply -k overlays/production/
# Apply dev overlay
kubectl apply -k overlays/dev/

kustomization.yaml
namePrefix: prod-
nameSuffix: -v2
# Result: deployment "myapp" becomes "prod-myapp-v2"
kustomization.yaml
namespace: production
# All resources get namespace: production
kustomization.yaml
commonLabels:
  app.kubernetes.io/name: myapp
  app.kubernetes.io/env: production
# Added to ALL resources (metadata.labels AND selector)
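To see why the selector behavior matters, here is roughly what the commonLabels above would do to a Deployment whose base selector is app: myapp. This is a sketch of rendered output, not exact Kustomize output:

```yaml
# Sketch: commonLabels is merged into metadata.labels, the pod
# template labels, AND spec.selector.matchLabels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/env: production
spec:
  selector:
    matchLabels:
      app: myapp
      app.kubernetes.io/name: myapp   # selector gains the common labels
      app.kubernetes.io/env: production
  template:
    metadata:
      labels:
        app: myapp
        app.kubernetes.io/name: myapp
        app.kubernetes.io/env: production
```

Because selectors are immutable on a live Deployment, adding or changing commonLabels after the first apply can be rejected by the API server.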
kustomization.yaml
commonAnnotations:
  team: platform
  oncall: platform@example.com
# Added to all resources' metadata.annotations

Change image names/tags without patching:

kustomization.yaml
images:
- name: nginx                  # Original image name
  newName: my-registry/nginx
  newTag: "2.0"
# Changes all nginx images to my-registry/nginx:2.0

Merges your patch with the base:

patches/add-sidecar.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: sidecar          # Added to existing containers
        image: busybox
        command: ["sleep", "infinity"]
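Rendered against the base Deployment from earlier, the merge keeps the existing container and appends the new one. Roughly, as a sketch of the relevant fragment:

```yaml
# Sketch: containers is a list merged by the "name" key, so
# "sidecar" is appended and the original "myapp" container is untouched
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: nginx:1.25
      - name: sidecar
        image: busybox
        command: ["sleep", "infinity"]
```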

What would happen if: Your strategic merge patch references a container name myapp but the base Deployment has a container named app. Will the patch fail, silently add a new container, or do something else?

More precise control using JSON Patch syntax:

kustomization.yaml
patches:
- target:
    kind: Deployment
    name: myapp
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 5
    - op: add
      path: /metadata/annotations/patched
      value: "true"
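Rendered, those two operations change only the targeted fields, roughly as sketched below. One caveat: an RFC 6902 add requires the parent map to exist, so the annotation op can error if the target has no metadata.annotations yet.

```yaml
# Sketch of the patched fields (everything else unchanged)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    patched: "true"
spec:
  replicas: 5
```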

Target specific resources:

kustomization.yaml
patches:
- path: patch-replicas.yaml
  target:
    kind: Deployment
    name: myapp

Target by label:

patches:
- path: patch-memory.yaml
  target:
    kind: Deployment
    labelSelector: "tier=frontend"

Generate ConfigMaps from files or literals:

kustomization.yaml
configMapGenerator:
- name: app-config
  literals:
  - DATABASE_HOST=postgres
  - DATABASE_PORT=5432
  files:
  - config.properties
# Creates ConfigMap with hashed name suffix
# e.g., app-config-8h2k9d

kustomization.yaml
secretGenerator:
- name: db-credentials
  literals:
  - username=admin
  - password=secret123
  type: Opaque
# Creates Secret with hashed name suffix

Stop and think: If you update a ConfigMap that’s already mounted in running pods, the pods won’t automatically restart to pick up changes. How does Kustomize’s ConfigMap generator solve this problem without requiring a manual pod restart?

app-config-8h2k9d
           ^^^^^^
           content hash

When ConfigMap content changes, the hash changes, which changes the name. This triggers a rolling update of pods using the ConfigMap—they detect the new reference automatically.
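The mechanism can be sketched in plain shell. This is not Kustomize's actual hash algorithm, just an illustration of the content-addressing principle; the variable names are made up:

```shell
# Illustration only: name a "ConfigMap" after a hash of its content.
config_v1='LOG_LEVEL=info'
config_v2='LOG_LEVEL=debug'

hash_v1=$(printf '%s' "$config_v1" | sha256sum | cut -c1-6)
hash_v2=$(printf '%s' "$config_v2" | sha256sum | cut -c1-6)

echo "app-config-$hash_v1"   # suffix varies with content
echo "app-config-$hash_v2"

# Different content -> different hash -> different name.
# A Deployment referencing the new name gets a new pod template,
# which is exactly what triggers the rolling update.
[ "$hash_v1" != "$hash_v2" ] && echo "names differ"
```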

kustomization.yaml
configMapGenerator:
- name: app-config
  literals:
  - KEY=value
generatorOptions:
  disableNameSuffixHash: true

webapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── config/
│       └── nginx.conf
└── overlays/
    ├── dev/
    │   └── kustomization.yaml
    └── prod/
        ├── kustomization.yaml
        ├── patch-replicas.yaml
        └── secrets/
            └── db-password.txt
base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: nginx-config
  files:
  - config/nginx.conf

overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namespace: production
namePrefix: prod-
commonLabels:
  environment: production
images:
- name: nginx
  newTag: "1.25-alpine"
patches:
- path: patch-replicas.yaml
secretGenerator:
- name: db-credentials
  files:
  - password=secrets/db-password.txt

Terminal window
# Preview kustomization output
kubectl kustomize <directory>
# Apply kustomization to cluster
kubectl apply -k <directory>
# Delete resources from kustomization
kubectl delete -k <directory>
# Diff against current cluster state
kubectl diff -k <directory>
Terminal window
# Quick apply for exam
kubectl apply -k overlays/production/
# Verify what was created
kubectl get all -n production
# If you need to debug
kubectl kustomize overlays/production/ | kubectl apply --dry-run=client -f -

Aspect           Kustomize        Helm
---------------  ---------------  -----------------
Approach         Overlay/patch    Template
Learning curve   Lower            Higher
Pure YAML        Yes              No (Go templates)
Package sharing  Directories      Charts
Rollback         Not built-in     Built-in
Best for         Config variants  Complex apps

Use Kustomize when: You have your own manifests and need environment variations.

Use Helm when: You’re installing third-party applications or need templating.

Exam Tip

The CKA exam may ask you to use either Helm or Kustomize. Know both. For quick environment customization, Kustomize is faster to set up.


  • Kustomize was a separate tool before being merged into kubectl. You can still install standalone kustomize for additional features.

  • Argo CD and Flux (GitOps tools) natively understand Kustomize. Your overlay structure becomes your deployment strategy.

  • You can combine Helm and Kustomize. Generate manifests from Helm, then customize with Kustomize overlays.


Mistake                          Problem                 Solution
-------------------------------  ----------------------  ---------------------------------------
Wrong path to base               "resource not found"    Use relative paths like ../../base
Forgetting kustomization.yaml    kubectl errors          Every directory needs one
Patch name mismatch              Patch not applied       Patch metadata.name must match base
Missing namespace                Resources in wrong ns   Add namespace: to overlay
commonLabels breaking selectors  Selector mismatch       Test carefully; labels affect selectors

  1. Your team has the same web application deployed to dev, staging, and production. A new developer copies the base Deployment YAML into three separate files and edits each one. What problem does this create, and how would you restructure it using Kustomize?

    Answer: Copying creates a duplication nightmare. When the base Deployment needs a change (new health check, updated security context), you must remember to update all three copies — and inevitably one gets missed, causing environment drift. With Kustomize, you create a single `base/` directory with the shared Deployment, then create `overlays/dev/`, `overlays/staging/`, and `overlays/production/` directories. Each overlay has its own `kustomization.yaml` that references `../../base` and applies only the differences (replica count, image tag, resource limits, namespace). Changes to the base automatically propagate to all environments, and each overlay only contains what's different.
  2. During the CKA exam, you’re told to deploy an application using Kustomize to the staging namespace with a name prefix of stg-. You run kubectl apply -k overlays/staging/ but get an error: “resource not found.” The base directory exists with valid YAML. What’s the most likely cause?

    Answer: The most likely cause is a wrong relative path in the overlay's `kustomization.yaml`. The `resources` field must correctly reference the base directory relative to the overlay's location. If your overlay is at `overlays/staging/kustomization.yaml`, the base reference should be `../../base`, not `../base` or `./base`. Run `kubectl kustomize overlays/staging/` to see the error details before applying — this renders the output without applying, making it easier to debug path issues. Also check that the base directory has its own `kustomization.yaml` file listing its resources, and that the overlay's `kustomization.yaml` has the correct `apiVersion` and `kind` fields.
  3. You update an application’s config file and re-apply your Kustomize overlay. The ConfigMap is updated, but existing pods are still using the old configuration. However, your colleague’s team using the same setup gets automatic pod restarts. What’s different about their Kustomize configuration?

    Answer: Your colleague is using `configMapGenerator` in their `kustomization.yaml`, which appends a content-based hash suffix to the ConfigMap name (e.g., `app-config-8h2k9d`). When the config content changes, the hash changes, the ConfigMap name changes, and the Deployment's reference to it changes — triggering a rolling update. You're probably using a static ConfigMap listed under `resources`, which keeps the same name even when content changes. Kubernetes doesn't automatically restart pods when a mounted ConfigMap's content changes in-place. To get automatic restarts, switch to `configMapGenerator`. If you need to keep the static name for other reasons, you can use `generatorOptions: disableNameSuffixHash: true`, but then you lose the auto-restart behavior.
  4. A production incident requires you to urgently change the image tag from v2.1 to v2.0 across all environments. With Helm, you’d run helm rollback. What’s the equivalent approach with Kustomize, and what limitation does this reveal?

    Answer: Kustomize has no built-in rollback mechanism. You'd need to change the `images` transformer in your overlay's `kustomization.yaml` back to `newTag: "v2.0"` and re-apply with `kubectl apply -k overlays/production/`. Alternatively, if you're using Git (which you should be), you'd `git revert` or `git checkout` the previous commit and re-apply. This reveals a key limitation of Kustomize vs Helm: Kustomize doesn't track release history or versions. It's a rendering engine, not a release manager. The common solution is to pair Kustomize with a GitOps tool like Argo CD or Flux, which tracks Git history as the release history and can revert by syncing to a previous commit.

Task: Create a Kustomize structure for a web application with dev and prod overlays.

Steps:

  1. Create directory structure:
Terminal window
mkdir -p webapp/base webapp/overlays/dev webapp/overlays/prod
  2. Create base deployment:
Terminal window
cat > webapp/base/deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF
  3. Create base service:
Terminal window
cat > webapp/base/service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
EOF
  4. Create base kustomization:
Terminal window
cat > webapp/base/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
EOF
  5. Create dev overlay:
Terminal window
cat > webapp/overlays/dev/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namePrefix: dev-
namespace: development
commonLabels:
  environment: dev
EOF
  6. Create prod overlay with patch:
Terminal window
cat > webapp/overlays/prod/patch-replicas.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 5
EOF
cat > webapp/overlays/prod/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namePrefix: prod-
namespace: production
commonLabels:
  environment: production
images:
- name: nginx
  newTag: "1.25-alpine"
patches:
- path: patch-replicas.yaml
EOF
  7. Preview and compare:
Terminal window
echo "=== DEV ===" && kubectl kustomize webapp/overlays/dev/
echo "=== PROD ===" && kubectl kustomize webapp/overlays/prod/
  8. Apply dev overlay:
Terminal window
kubectl create namespace development
kubectl apply -k webapp/overlays/dev/
kubectl get all -n development
  9. Apply prod overlay:
Terminal window
kubectl create namespace production
kubectl apply -k webapp/overlays/prod/
kubectl get all -n production

Success Criteria:

  • Understand base vs overlay structure
  • Can create kustomization.yaml files
  • Can use namePrefix, namespace, commonLabels
  • Can create and apply patches
  • Can preview output with kubectl kustomize

Cleanup:

Terminal window
kubectl delete -k webapp/overlays/dev/
kubectl delete -k webapp/overlays/prod/
kubectl delete namespace development production
rm -rf webapp/

Drill 1: Kustomize vs kubectl apply (Target: 2 minutes)


Understand the difference:

Terminal window
# Create base
mkdir -p drill1/base
cat << 'EOF' > drill1/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
cat << 'EOF' > drill1/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
EOF
# Preview vs apply
kubectl kustomize drill1/base/ # Just preview
kubectl apply -k drill1/base/ # Actually apply
kubectl get deploy nginx
kubectl delete -k drill1/base/
rm -rf drill1

Drill 2: Namespace Transformation (Target: 3 minutes)

Terminal window
mkdir -p drill2/base drill2/overlays/dev
cat << 'EOF' > drill2/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx
EOF
cat << 'EOF' > drill2/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
EOF
cat << 'EOF' > drill2/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namespace: dev-namespace
namePrefix: dev-
EOF
# Preview - see the transformations
kubectl kustomize drill2/overlays/dev/
# Apply
kubectl create namespace dev-namespace
kubectl apply -k drill2/overlays/dev/
kubectl get deploy -n dev-namespace # Shows dev-app
# Cleanup
kubectl delete -k drill2/overlays/dev/
kubectl delete namespace dev-namespace
rm -rf drill2

Drill 3: Image Transformation (Target: 3 minutes)

Terminal window
mkdir -p drill3
cat << 'EOF' > drill3/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.19
EOF
cat << 'EOF' > drill3/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: nginx
  newTag: "1.25"
EOF
# Preview - notice image changed to nginx:1.25
kubectl kustomize drill3/
# Apply and verify
kubectl apply -k drill3/
kubectl get deploy web -o jsonpath='{.spec.template.spec.containers[0].image}'
# Output: nginx:1.25
# Cleanup
kubectl delete -k drill3/
rm -rf drill3

Drill 4: Troubleshooting - Broken Kustomization (Target: 5 minutes)

Terminal window
# Create broken kustomization
mkdir -p drill4
cat << 'EOF' > drill4/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml   # File doesn't exist!
- service.yaml      # File doesn't exist!
commonLabels:
  app: myapp
EOF
# Try to build - will fail
kubectl kustomize drill4/
# YOUR TASK: Fix by creating the missing files
Solution
Terminal window
cat << 'EOF' > drill4/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx
EOF
cat << 'EOF' > drill4/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
EOF
# Now it works
kubectl kustomize drill4/
rm -rf drill4

Drill 5: Strategic Merge Patch (Target: 5 minutes)

Terminal window
mkdir -p drill5/base drill5/overlay
cat << 'EOF' > drill5/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
EOF
cat << 'EOF' > drill5/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
EOF
# Create patch to increase resources for production
cat << 'EOF' > drill5/overlay/patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
EOF
cat << 'EOF' > drill5/overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- path: patch-resources.yaml
EOF
# Preview the result
kubectl kustomize drill5/overlay/
rm -rf drill5

Drill 6: ConfigMap Generator (Target: 3 minutes)

Terminal window
mkdir -p drill6
cat << 'EOF' > drill6/app.properties
DATABASE_URL=postgres://localhost:5432/mydb
LOG_LEVEL=info
FEATURE_FLAG=enabled
EOF
cat << 'EOF' > drill6/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: app-config
  files:
  - app.properties
  literals:
  - EXTRA_KEY=extra-value
EOF
# Preview - notice ConfigMap with hash suffix
kubectl kustomize drill6/
rm -rf drill6

Drill 7: Challenge - Multi-Environment Setup


Create a complete Kustomize structure for 3 environments without looking at solutions:

Requirements:

  • Base: nginx deployment, service
  • Dev: 1 replica, namespace dev, image nginx:1.24
  • Staging: 2 replicas, namespace staging, image nginx:1.25
  • Prod: 5 replicas, namespace production, image nginx:1.25, add resource limits
Terminal window
mkdir -p challenge/{base,overlays/{dev,staging,prod}}
# YOUR TASK: Create all kustomization.yaml and resource files
Solution Structure
challenge/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    │   └── kustomization.yaml
    ├── staging/
    │   └── kustomization.yaml
    └── prod/
        ├── kustomization.yaml
        └── patch-resources.yaml

Test each: kubectl kustomize challenge/overlays/dev/


Module 1.5: CRDs & Operators - Extending Kubernetes with Custom Resource Definitions.