Module 2.4: Helm & Kustomize
Toolkit Track | Complexity: MEDIUM | Time: 35-40 min
The DevOps lead scrolled through the pull request with growing horror. Someone had copy-pasted the production Kubernetes manifests to create a staging environment, changing “prod” to “staging” in 47 different places. When she checked the git history, she found that the same 15 YAML files had been duplicated across dev, staging, QA, and production—with drift between them causing mysterious bugs for months. “This is why we can’t deploy on Fridays,” she muttered. Three weeks later, after migrating to Helm charts with Kustomize overlays, their deployment cadence went from once a week to 12 deployments per day, and configuration drift incidents dropped to zero. The VP of Engineering later calculated the wasted developer hours: $420,000 per year in debugging time caused by copy-paste YAML.
Prerequisites
Before starting this module:
- Module 2.1: ArgoCD or Module 2.3: Flux
- Basic Kubernetes YAML knowledge
- Understanding of templating concepts
What You’ll Be Able to Do
After completing this module, you will be able to:
- Configure Helm charts with values files for multi-environment deployments (dev, staging, production)
- Implement Kustomize overlays to patch Kubernetes manifests without modifying base configurations
- Integrate Helm and Kustomize together for template rendering with environment-specific patches
- Evaluate when to use Helm charts versus Kustomize overlays based on team and project requirements
Why This Module Matters
Raw Kubernetes YAML doesn’t scale. When you have 50 services, each with development, staging, and production variants, you need a way to manage configuration. Helm and Kustomize are the two dominant solutions—and they work together beautifully.
Helm packages applications as charts built from templates. Kustomize layers environment-specific modifications over plain YAML without any templating. Understanding both—and when to use each—is essential for Kubernetes operations.
Did You Know?
- Helm v3 removed Tiller entirely—Helm v2’s server-side component was a security concern; now Helm is purely client-side
- Kustomize is built into kubectl—since v1.14, you can use kubectl apply -k without installing anything
- The name “Helm” follows the Kubernetes nautical theme—a helm steers a ship, Helm steers your deployments
- Kustomize was created by Google for internal use—they needed a template-free way to customize configurations
Helm vs Kustomize
```text
┌─────────────────────────────────────────────────────────────────┐
│                       HELM vs KUSTOMIZE                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  HELM                            KUSTOMIZE                      │
│  ────                            ─────────                      │
│                                                                 │
│  Model: Packaging                Model: Patching                │
│  • Chart = package               • Base + overlays              │
│  • Templates + values            • No templates                 │
│  • Releases tracked              • Pure YAML                    │
│                                                                 │
│  Good for:                       Good for:                      │
│  • Third-party apps              • Your own apps                │
│  • Complex applications          • Environment variants         │
│  • Version management            • Last-mile customization      │
│  • Sharing across teams          • Patching Helm output         │
│                                                                 │
│  Template syntax:                Patch syntax:                  │
│  {{ .Values.replicas }}          - op: replace                  │
│                                    path: /spec/replicas         │
│                                    value: 3                     │
│                                                                 │
│  BEST PRACTICE: Use together!                                   │
│  Helm for packages → Kustomize for environment-specific patches │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

A Note on Jsonnet
Beyond Helm and Kustomize, Jsonnet is a data templating language that some teams use to generate Kubernetes manifests. Grafana Labs uses Jsonnet extensively for their Kubernetes deployments, and you will find it referenced in the CGOA exam. Jsonnet treats configuration as programmable data rather than text templates — you write functions and objects that evaluate to JSON/YAML.
In practice, Jsonnet has a smaller community than Helm or Kustomize, and most organizations choose one of the two dominant tools. However, if you encounter a project using Jsonnet (or its Kubernetes-specific library Tanka), understand that it solves the same problem — reducing YAML duplication — with a different paradigm: a full programming language for configuration rather than templates (Helm) or patches (Kustomize).
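A minimal sketch of the paradigm, with hypothetical names (Jsonnet evaluates to JSON, which Kubernetes accepts):

```jsonnet
// app.jsonnet — hypothetical illustration; `jsonnet app.jsonnet` emits JSON
local env = 'production';

{
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'my-app-' + env },
  spec: {
    // Real language constructs replace template conditionals
    replicas: if env == 'production' then 5 else 1,
  },
}
```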
Helm Fundamentals
Chart Structure
```text
my-app/
├── Chart.yaml           # Metadata
├── values.yaml          # Default values
├── charts/              # Dependencies
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── _helpers.tpl     # Template helpers
│   ├── NOTES.txt        # Post-install notes
│   └── tests/
│       └── test-connection.yaml
└── README.md
```

Chart.yaml
```yaml
apiVersion: v2
name: my-app
description: A Helm chart for my application
type: application        # or "library"
version: 1.0.0           # Chart version
appVersion: "2.3.1"      # App version

keywords:
  - app
  - web

home: https://github.com/org/my-app
sources:
  - https://github.com/org/my-app

maintainers:
  - name: Platform Team
    email: platform@example.com

dependencies:
  - name: postgresql
    version: "12.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
```

values.yaml
```yaml
# values.yaml - defaults
replicaCount: 1

image:
  repository: myapp
  tag: "latest"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: nginx
  hosts:
    - host: myapp.local
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 64Mi

postgresql:
  enabled: true
  auth:
    database: myapp
```

Template Syntax
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          {{- if .Values.resources }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- end }}
          {{- if .Values.env }}
          env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          {{- end }}
```

Template Helpers
```
{{/*
Expand the name of the chart.
*/}}
{{- define "my-app.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "my-app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "my-app.labels" -}}
helm.sh/chart: {{ include "my-app.chart" . }}
{{ include "my-app.selectorLabels" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "my-app.selectorLabels" -}}
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```
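Note that the labels helper above calls a `my-app.chart` helper that isn’t shown here; the definition `helm create` scaffolds for it looks like this:

```
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "my-app.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
```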
Helm Commands

```bash
# Create new chart
helm create my-app

# Lint chart
helm lint my-app/

# Template locally (dry-run)
helm template my-release my-app/ -f values-prod.yaml

# Install
helm install my-release my-app/ \
  --namespace production \
  --create-namespace \
  -f values-prod.yaml

# Upgrade
helm upgrade my-release my-app/ \
  --namespace production \
  -f values-prod.yaml

# Rollback
helm rollback my-release 1 --namespace production

# List releases
helm list --all-namespaces

# Get release values
helm get values my-release --namespace production

# Uninstall
helm uninstall my-release --namespace production

# Package chart
helm package my-app/

# Push to OCI registry
helm push my-app-1.0.0.tgz oci://ghcr.io/org/charts
```
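Charts pushed this way can also be installed straight from the registry (OCI support is generally available since Helm 3.8):

```bash
# Install directly from an OCI registry
helm install my-release oci://ghcr.io/org/charts/my-app --version 1.0.0
```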
Helm Dependencies

```bash
# Update dependencies
helm dependency update my-app/

# Build dependencies
helm dependency build my-app/
```

```yaml
# Chart.yaml
dependencies:
  - name: postgresql
    version: "12.1.0"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
    tags:
      - database

  - name: redis
    version: "17.x"
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
```

```yaml
# values.yaml
postgresql:
  enabled: true
  primary:
    persistence:
      size: 10Gi

redis:
  enabled: false
```

Kustomize Fundamentals
Directory Structure
```text
my-app/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/
    ├── development/
    │   ├── kustomization.yaml
    │   └── replica-patch.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   └── namespace.yaml
    └── production/
        ├── kustomization.yaml
        ├── replica-patch.yaml
        └── ingress.yaml
```

Base kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml

# Common labels for all resources
commonLabels:
  app: my-app

# Common annotations
commonAnnotations:
  team: platform
```

Overlay kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: production
namePrefix: prod-

resources:
  - ../../base
  - ingress.yaml

# Strategic merge patches
patches:
  - path: replica-patch.yaml

# Or inline patches
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: my-app

# Image overrides
images:
  - name: myapp
    newName: myregistry/myapp
    newTag: v2.0.0

# ConfigMap/Secret generators
configMapGenerator:
  - name: app-config
    literals:
      - ENVIRONMENT=production
      - LOG_LEVEL=info

secretGenerator:
  - name: app-secrets
    literals:
      - DATABASE_URL=postgres://prod-db:5432/app
    type: Opaque
```
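Generated ConfigMaps and Secrets get a content-hash suffix appended to their names (e.g. `app-config-b2g6df87hk`, suffix illustrative), and Kustomize rewrites all references to the hashed name, so changing the content rolls any Deployment that mounts it. If an external consumer needs a stable name, the suffix can be disabled:

```yaml
generatorOptions:
  disableNameSuffixHash: true
```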
Patch Types

```yaml
# Strategic Merge Patch (default)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5

---
# JSON Patch
# kustomization.yaml
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
      - op: add
        path: /metadata/labels/env
        value: production
    target:
      kind: Deployment
      name: my-app

---
# Patch file with target
# kustomization.yaml
patches:
  - path: increase-memory.yaml
    target:
      kind: Deployment
      labelSelector: "app=my-app"
```

Components (Reusable Patches)
```yaml
# components/monitoring/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - patch: |-
      # "~1" escapes "/" in JSON Pointer paths
      - op: add
        path: /spec/template/metadata/annotations/prometheus.io~1scrape
        value: "true"
      - op: add
        path: /spec/template/metadata/annotations/prometheus.io~1port
        value: "8080"
    target:
      kind: Deployment

---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

components:
  - ../../components/monitoring
  - ../../components/security
```

Replacements (Variable Substitution)
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - configmap.yaml

replacements:
  - source:
      kind: ConfigMap
      name: app-config
      fieldPath: data.HOSTNAME
    targets:
      - select:
          kind: Deployment
          name: my-app
        fieldPaths:
          - spec.template.spec.containers.[name=app].env.[name=HOSTNAME].value
```
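For context, the source object the replacement above reads from would look something like this (hostname value is hypothetical):

```yaml
# configmap.yaml — the replacement copies data.HOSTNAME from here
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  HOSTNAME: app.example.com
```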
Kustomize Commands

```bash
# Build (render YAML)
kustomize build overlays/production

# Apply
kubectl apply -k overlays/production

# Preview diff
kubectl diff -k overlays/production

# View resources
kustomize build overlays/production | kubectl get -f - -o name
```
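Because Kustomize ships inside kubectl, the standalone binary is optional for basic rendering:

```bash
# Built-in equivalent of `kustomize build`
kubectl kustomize overlays/production
```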
Helm + Kustomize Together

Pattern: Kustomize Wrapping Helm
```text
my-deployment/
├── base/
│   ├── kustomization.yaml
│   └── helmrelease.yaml   # Flux HelmRelease or ArgoCD Application
└── overlays/
    ├── staging/
    │   ├── kustomization.yaml
    │   └── values-patch.yaml
    └── production/
        ├── kustomization.yaml
        └── values-patch.yaml
```

ArgoCD: Helm + Kustomize
```yaml
# ArgoCD Application using both
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  source:
    repoURL: https://charts.example.com
    chart: my-app
    targetRevision: 1.0.0

    # Helm values
    helm:
      values: |
        replicaCount: 3

    # Plus Kustomize patches
    kustomize:
      patches:
        - patch: |-
            - op: add
              path: /metadata/annotations
              value:
                custom.annotation: "true"
          target:
            kind: Deployment
```

Caveat: ArgoCD has historically allowed only one config-management tool per application source, so verify that your ArgoCD version accepts helm: and kustomize: on the same source before relying on this pattern; the Flux post-renderer below is the more widely supported route.

Flux: Post-Rendering with Kustomize
```yaml
# HelmRelease with post-rendering
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-app
spec:
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-charts

  values:
    replicaCount: 3

  # Post-render with Kustomize
  postRenderers:
    - kustomize:
        patches:
          - patch: |-
              - op: add
                path: /metadata/labels/custom
                value: label
            target:
              kind: Deployment

        images:
          - name: my-app
            newTag: v2.0.0-custom
```

Umbrella Chart Pattern
```yaml
# Chart.yaml - umbrella chart
apiVersion: v2
name: platform
version: 1.0.0

dependencies:
  - name: cert-manager
    version: "1.13.0"
    repository: https://charts.jetstack.io
    condition: cert-manager.enabled

  - name: ingress-nginx
    version: "4.8.0"
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled

  - name: prometheus
    version: "25.0.0"
    repository: https://prometheus-community.github.io/helm-charts
    condition: prometheus.enabled
```

```yaml
# values.yaml
cert-manager:
  enabled: true
  installCRDs: true

ingress-nginx:
  enabled: true
  controller:
    replicaCount: 2

prometheus:
  enabled: true
  alertmanager:
    enabled: false
```

Common Mistakes
| Mistake | Why It’s Bad | Better Approach |
|---|---|---|
| Hardcoded values in templates | Can’t customize | Use {{ .Values.x }} with defaults |
| Deeply nested values | Hard to override | Keep values 2-3 levels deep max |
| No schema validation | Invalid values accepted | Use values.schema.json |
| Kustomize without base | Duplication across overlays | Always use base + overlays |
| Mixing patch types | Confusing, hard to debug | Pick one style per patch file |
| Over-templating | Unmaintainable | Use Kustomize for simple overrides |
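One of these mistakes deserves a concrete example. Sprig’s `default` function treats `false` (like `0` and `""`) as empty, so an explicit `false` is silently replaced by the default; booleans need an existence check instead. A sketch, assuming a hypothetical `encryption.enabled` value:

```yaml
# Pitfall: an explicit `enabled: false` in values.yaml renders as true
enabled: {{ .Values.encryption.enabled | default true }}

# Safer: apply the default only when the key is actually absent
enabled: {{ if hasKey .Values.encryption "enabled" }}{{ .Values.encryption.enabled }}{{ else }}true{{ end }}
```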
War Story: The $1.8 Million Template Explosion
A healthcare SaaS company had a Helm chart that started simple—20 values, clean templates. Over three years, it grew into a monster: 847 lines of values.yaml, 50+ template variables, and conditional logic that would make a Turing machine weep.
The chart powered their core patient records system across 23 hospitals. Every deployment was a sweaty-palmed ordeal because nobody fully understood all the values.
```yaml
# values.yaml (actual excerpt from the incident)
encryption:
  enabled: {{ .Values.compliance.hipaa.enabled | default "false" }}
  algorithm: {{ .Values.encryption.algorithm | default "AES-256" }}
  keyRotation:
    enabled: {{ if and .Values.compliance.hipaa.enabled .Values.encryption.keyRotation.enabled }}true{{ else }}false{{ end }}
    intervalDays: {{ .Values.encryption.keyRotation.intervalDays | default 90 | int }}
  # 200 more lines of nested conditionals...
```

Then came the incident.
```text
THE TEMPLATE EXPLOSION TIMELINE
─────────────────────────────────────────────────────────────────
TUESDAY 2:00 PM     Developer updates chart to add new feature
TUESDAY 2:30 PM     PR approved (nobody fully reviewed 800-line values.yaml)
TUESDAY 3:00 PM     Helm chart deployed to staging - works
TUESDAY 4:00 PM     Production deployment begins
TUESDAY 4:01 PM     Helm template renders successfully
TUESDAY 4:02 PM     Pods start, but encryption is DISABLED
                    (nested conditional evaluated wrong in prod)
TUESDAY 4:02 PM     Patient data begins flowing WITHOUT encryption

WEDNESDAY 9:00 AM   Security audit discovers unencrypted data in logs
WEDNESDAY 9:30 AM   Incident declared, HIPAA breach protocol activated
WEDNESDAY 10:00 AM  System taken offline for remediation
WEDNESDAY 6:00 PM   Encryption re-enabled, data audit begins

NEXT 6 WEEKS        Mandatory HIPAA breach investigation
```

Financial Impact:
```text
INCIDENT COST BREAKDOWN
─────────────────────────────────────────────────────────────────
Downtime (8 hours × 23 hospitals):
  - Lost appointment revenue              = $340,000
  - Emergency staff overtime              = $45,000

HIPAA Breach Response:
  - Mandatory patient notifications       = $180,000
  - External security audit               = $250,000
  - Legal review and documentation        = $150,000
  - Regulatory fine (Level 2 violation)   = $500,000

Remediation:
  - Chart rewrite (2 engineers × 4 weeks) = $80,000
  - Additional testing infrastructure     = $25,000
  - Mandatory staff training              = $35,000

Reputation damage (estimated):
  - Contract delays from 3 hospitals      = $200,000

TOTAL COST: $1,805,000
─────────────────────────────────────────────────────────────────
```

The Root Cause:
```yaml
# The problematic conditional (simplified)
{{ if and .Values.compliance.hipaa.enabled .Values.encryption.keyRotation.enabled }}

# In staging values-staging.yaml:
compliance:
  hipaa:
    enabled: true        # Explicit
encryption:
  keyRotation:
    enabled: true        # Explicit

# In production values-prod.yaml:
compliance:
  hipaa:
    enabled: true        # ✓ Set
# encryption.keyRotation.enabled was MISSING
# Default was supposed to be "true" but Go template defaulted to false
```

The Fix—Simplified Chart + Kustomize:
```yaml
# NEW values.yaml (20 values, not 847)
replicaCount: 1
image:
  repository: patient-records
  tag: latest
resources:
  limits:
    memory: 2Gi
    cpu: "1"

# Encryption is ALWAYS enabled, not configurable
# HIPAA compliance is the law, not an option
```

```yaml
# Environment differences via Kustomize
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment

images:
  - name: patient-records
    newTag: v2.3.1-prod

configMapGenerator:
  - name: env-config
    literals:
      - ENVIRONMENT=production
      - LOG_LEVEL=warn
```

Lessons Learned:
- Don’t template security settings—encryption should be always-on, not a flag
- Template what varies between releases, not between environments—use Kustomize for environment differences
- If values.yaml exceeds 100 lines, you’re probably over-templating
- Test with production values in CI—the staging/prod divergence was the root cause
- Mandatory schema validation—values.schema.json would have caught the missing value
Question 1
When would you use Helm over Kustomize, and vice versa?
Show Answer
Use Helm when:
- Installing third-party applications (nginx-ingress, prometheus, etc.)
- Packaging complex applications with many configuration options
- You need version management and rollback
- Sharing applications across teams or organizations
- Application has complex conditional logic
Use Kustomize when:
- Customizing your own applications for different environments
- Patching third-party Helm charts with minor changes
- You want template-free, pure YAML
- Making last-mile customizations
- Simple overlay patterns (dev/staging/prod)
Best practice: Use both! Helm for packaging, Kustomize for environment customization.
Question 2
What’s wrong with this Helm template?
```yaml
containers:
  - name: app
    image: myapp:{{ .Values.image.tag }}
    env:
    {{- range .Values.env }}
      - name: {{ .name }}
        value: {{ .value }}
    {{- end }}
```

Show Answer
Two issues:
- Missing quote function for tag: if tag is a number like 1.0, YAML will interpret it as a float. Use {{ .Values.image.tag | quote }} or "{{ .Values.image.tag }}".
- Values not quoted: the value field should be quoted in case it contains special characters.
Fixed:
```yaml
containers:
  - name: app
    image: "myapp:{{ .Values.image.tag }}"
    env:
    {{- range .Values.env }}
      - name: {{ .name | quote }}
        value: {{ .value | quote }}
    {{- end }}
```

Question 3
Write a Kustomize patch that adds a sidecar container to all Deployments in the overlay.
Show Answer
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - patch: |-
      - op: add
        path: /spec/template/spec/containers/-
        value:
          name: sidecar
          image: fluentd:latest
          resources:
            limits:
              memory: 100Mi
              cpu: 50m
    target:
      kind: Deployment
```

```yaml
# Or using strategic merge patch file:
# sidecar-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: not-used  # Will be overwritten by target selector
spec:
  template:
    spec:
      containers:
        - name: sidecar
          image: fluentd:latest
```

The JSON Patch path ending in /- appends to the end of the containers array.
Question 4
How do you pass values to a Helm subchart (dependency)?
Show Answer
In the parent chart’s values.yaml, nest values under the subchart name:
```yaml
# Parent values.yaml
replicaCount: 3

# Values for postgresql subchart
postgresql:
  auth:
    database: myapp
    username: myuser
  primary:
    persistence:
      size: 20Gi

# Values for redis subchart
redis:
  architecture: standalone
  master:
    persistence:
      enabled: false
```

The subchart name must match the name field in Chart.yaml dependencies. Helm automatically passes the nested values to the subchart.
You can also use --set postgresql.auth.database=myapp on the command line.
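Values under the special `global:` key are an exception: they are visible to the parent chart and every subchart, which suits settings shared across dependencies. A minimal sketch (the keys shown follow common chart conventions, not a fixed schema):

```yaml
# Parent values.yaml
global:
  imageRegistry: registry.example.com   # subcharts read .Values.global.imageRegistry
  storageClass: fast-ssd

postgresql:
  auth:
    database: myapp
```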
Question 5
Your team has 12 microservices, each with dev/staging/prod environments. Calculate the YAML file count for: (A) copy-paste approach, (B) Helm-only, (C) Kustomize base+overlays. Which approach would you recommend?
Show Answer
Calculation:
```text
YAML FILE COUNT COMPARISON
─────────────────────────────────────────────────────────────────
APPROACH A: Copy-Paste
  - 12 services × 3 environments × 5 files each = 180 YAML files
  - Duplication: 100%
  - Drift risk: EXTREMELY HIGH

APPROACH B: Helm-Only
  - 12 services × 1 chart each = 12 charts
  - Each chart: ~8 files (Chart.yaml, values.yaml, 5 templates, helpers)
  - Plus 3 values files per service (dev/staging/prod)
  - Total: 12 × 8 + 12 × 3 = 132 files
  - Duplication: LOW (values files have some overlap)
  - Drift risk: MEDIUM (values files can diverge)

APPROACH C: Kustomize Base+Overlays
  - 12 services × 1 base = 12 bases
  - Each base: 5 files (kustomization + 4 manifests)
  - 3 overlays per service: 12 × 3 × 2 = 72 overlay files
  - Total: 60 + 72 = 132 files
  - Duplication: VERY LOW (overlays only contain differences)
  - Drift risk: LOW (base is single source of truth)

RECOMMENDED APPROACH D: Helm + Kustomize
─────────────────────────────────────────────────────────────────
  - 12 services × 1 chart = 12 charts (~96 chart files)
  - 1 Kustomize base per service = 12 × 1 file (generated from Helm)
  - 3 overlays per service = 36 kustomization.yaml files
  - Total: ~144 files
  - Duplication: MINIMAL
  - Drift risk: LOWEST (Helm for packaging, Kustomize for environment)
```

Recommendation: Approach D (Helm + Kustomize combined)
```yaml
# deploy/base/kustomization.yaml
# Pattern: Helm generates base, Kustomize patches per environment
resources:
  - all.yaml   # Generated via: helm template my-service ./chart > all.yaml

# deploy/overlays/production/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
images:
  - name: my-service
    newTag: v2.1.0-prod
```

For 12 services × 3 environments, this gives you:
- Single source of truth per service (Helm chart)
- Minimal environment-specific files (just patches)
- Clear separation of concerns
Question 6
Write a values.schema.json that validates: image.tag must be semver format, replicaCount must be 1-100, resources.limits.memory must be set.
Show Answer
{ "$schema": "https://json-schema.org/draft-07/schema#", "type": "object", "required": ["image", "replicaCount", "resources"], "properties": { "image": { "type": "object", "required": ["repository", "tag"], "properties": { "repository": { "type": "string", "minLength": 1 }, "tag": { "type": "string", "pattern": "^v?[0-9]+\\.[0-9]+\\.[0-9]+(-[a-zA-Z0-9]+)?$", "description": "Semver format: v1.2.3 or 1.2.3 or 1.2.3-alpha" }, "pullPolicy": { "type": "string", "enum": ["Always", "IfNotPresent", "Never"] } } }, "replicaCount": { "type": "integer", "minimum": 1, "maximum": 100, "description": "Number of pod replicas (1-100)" }, "resources": { "type": "object", "required": ["limits"], "properties": { "limits": { "type": "object", "required": ["memory"], "properties": { "memory": { "type": "string", "pattern": "^[0-9]+(Mi|Gi)$", "description": "Memory limit (e.g., 128Mi, 2Gi)" }, "cpu": { "type": "string", "pattern": "^[0-9]+(m)?$", "description": "CPU limit (e.g., 100m, 1)" } } }, "requests": { "type": "object", "properties": { "memory": { "type": "string" }, "cpu": { "type": "string" } } } } } }}Usage:
```bash
# Helm validates against schema automatically
helm install my-app ./chart -f values.yaml

# Example validation errors:
# - "image.tag: Does not match pattern '^v?[0-9]+...' (got: 'latest')"
# - "replicaCount: Must be <= 100 (got: 150)"
# - "resources.limits: 'memory' is required"
```

Why this matters: Schema validation catches configuration errors at helm template time, not at runtime. The healthcare incident in the war story would have been caught immediately.
Question 7
You need to add the same set of labels and annotations to ALL resources across 8 microservices. Compare implementing this with Helm _helpers.tpl vs Kustomize commonLabels. Which is better?
Show Answer
Helm Approach (_helpers.tpl):
{{- define "common.labels" -}}app.kubernetes.io/name: {{ .Chart.Name }}app.kubernetes.io/instance: {{ .Release.Name }}app.kubernetes.io/version: {{ .Chart.AppVersion }}app.kubernetes.io/managed-by: {{ .Release.Service }}team: platformcost-center: engineeringenvironment: {{ .Values.environment }}{{- end }}
# templates/deployment.yamlmetadata: labels: {{- include "common.labels" . | nindent 4 }}
# templates/service.yaml (must repeat)metadata: labels: {{- include "common.labels" . | nindent 4 }}Kustomize Approach:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

commonLabels:
  team: platform
  cost-center: engineering

commonAnnotations:
  prometheus.io/scrape: "true"

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
```

Comparison:
| Aspect | Helm _helpers.tpl | Kustomize commonLabels |
|---|---|---|
| Where applied | Must include in each template | Automatically all resources |
| Selector labels | Can control which go to selectors | Adds to ALL selectors (⚠️) |
| Flexibility | Full Go templating power | Simple key-value only |
| Maintenance | Update in one place | Update in one place |
| Learning curve | Must understand Go templates | Simple YAML |
| Risk | Forgetting to include | Selector mismatch on update |
The catch with Kustomize commonLabels:
```yaml
# WARNING: commonLabels adds to ALL selectors, including:
# - Deployment.spec.selector.matchLabels
# - Service.spec.selector

# If you ADD a new commonLabel after deployment, the selectors change,
# and Kubernetes rejects the update (selector is immutable)

# Safer approach: Use labels transformer
transformers:
  - |-
    apiVersion: builtin
    kind: LabelTransformer
    metadata:
      name: add-labels
    labels:
      team: platform
    fieldSpecs:
      - path: metadata/labels
        create: true  # Explicitly exclude selectors
```

Recommendation:
- For new projects: Kustomize commonLabels is simpler
- For existing deployments: use Helm helpers or a labels transformer to avoid selector changes
- For 8 microservices: Create a shared Helm library chart with common helpers, or a Kustomize component
```yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

labels:
  - pairs:
      team: platform
      cost-center: engineering
    includeSelectors: false  # ← Safe for existing deployments
```

Question 8
Your Helm release shows STATUS: deployed but the pods are in CrashLoopBackOff. You run helm upgrade with a fix but get “no changes”. What’s happening and how do you fix it?
Show Answer
The Problem:
Helm tracks releases based on the rendered manifests, not pod status. If your values haven’t changed, Helm sees no diff and skips the upgrade—even if pods are crashing.
Common scenarios:
- Bug is in the application code, not the chart
- ConfigMap/Secret content is the same (even if mounted file has issues)
- Environment variable references an external resource that failed
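You can inspect the state Helm actually compares against: each release revision is stored as a Secret in the release namespace (output shape shown is illustrative):

```bash
# Helm 3 stores each release revision as a Secret labeled owner=helm
kubectl get secrets -n production -l owner=helm
# sh.helm.release.v1.my-release.v1   helm.sh/release.v1   1   2d
```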
Investigation:
```bash
# Check what Helm thinks is deployed
helm get manifest my-release | head -50

# Compare to what you're trying to deploy
helm template my-release ./chart -f values.yaml | head -50

# Check actual pod status
kubectl get pods -l app.kubernetes.io/instance=my-release
kubectl describe pod <crashing-pod>
kubectl logs <crashing-pod> --previous
```

Solutions:
1. Force resource update with annotation:
```yaml
podAnnotations:
  rollme: {{ randAlphaNum 5 | quote }}  # Forces new deployment
```
```bash
# Or use --set
helm upgrade my-release ./chart --set podAnnotations.restartedAt=$(date +%s)
```

2. Use helm upgrade --force:
```bash
# WARNING: This deletes and recreates resources
helm upgrade my-release ./chart --force
```

3. Trigger via ConfigMap hash:
```yaml
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

When the ConfigMap changes, the deployment rolls automatically.
4. If the fix is in application code (new image):
```bash
# Image tag changed from v1.0.0 to v1.0.1
helm upgrade my-release ./chart --set image.tag=v1.0.1

# Or if using 'latest' tag (not recommended):
kubectl rollout restart deployment/my-release
```

Root Cause Analysis:
```text
WHY HELM SHOWS "NO CHANGES"
─────────────────────────────────────────────────────────────────
Helm compares:
  Current release manifest (stored in Secret)
  vs
  New rendered manifest

If identical → "no changes detected"

Helm does NOT check:
  - Pod status (Running, CrashLoopBackOff)
  - Container logs
  - Actual cluster state

This is by design—Helm is declarative about DESIRED state,
not CURRENT state.
```

Best Practice:
Always change something when deploying a fix:
- Bump image tag (even for same code rebuild)
- Use image digest instead of tag
- Add a deployedAt annotation to force a rollout
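For the digest option above, pinning is unambiguous because a digest names exact image bytes (the digest shown is the SHA-256 of empty input, purely illustrative):

```yaml
# Deployment spec — a digest pins the exact image, unlike a mutable tag
image: registry.example.com/my-app@sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```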
Hands-On Exercise
Scenario: Multi-Environment Application
Create a Helm chart with Kustomize overlays for dev, staging, and production.
Create Helm Chart
```bash
# Create chart
helm create my-app
cd my-app

# Simplify values.yaml
cat > values.yaml << 'EOF'
replicaCount: 1

image:
  repository: nginx
  tag: "1.25"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 64Mi

env: []
EOF
```

Create Kustomize Structure
```bash
cd ..
mkdir -p kustomize/{base,overlays/{dev,staging,production}}

# Generate base from Helm
helm template my-app ./my-app > kustomize/base/all.yaml

# Create base kustomization
cat > kustomize/base/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml
EOF

# Dev overlay
cat > kustomize/overlays/dev/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
namePrefix: dev-
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 1
    target:
      kind: Deployment
EOF

# Production overlay
cat > kustomize/overlays/production/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
namePrefix: prod-
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 512Mi
    target:
      kind: Deployment
images:
  - name: nginx
    newTag: "1.25-alpine"
EOF
```

Build and Compare
```bash
# Build dev
kustomize build kustomize/overlays/dev

# Build production
kustomize build kustomize/overlays/production

# Compare
diff <(kustomize build kustomize/overlays/dev) \
     <(kustomize build kustomize/overlays/production)
```

Apply to Cluster
```bash
# Create namespaces
kubectl create namespace dev
kubectl create namespace production

# Apply
kubectl apply -k kustomize/overlays/dev
kubectl apply -k kustomize/overlays/production

# Verify
kubectl get pods -n dev
kubectl get pods -n production
```

Success Criteria
- Helm chart renders correctly
- Kustomize overlays modify base
- Dev has 1 replica, production has 5
- Production uses alpine image tag
- Can apply to different namespaces
Cleanup
```bash
kubectl delete -k kustomize/overlays/dev
kubectl delete -k kustomize/overlays/production
kubectl delete namespace dev production
rm -rf my-app kustomize
```

Key Takeaways
Before moving on, ensure you can:
- Explain when to use Helm (packaging, third-party apps) vs Kustomize (environment overlays)
- Create a Helm chart with Chart.yaml, values.yaml, and templates
- Use Helm template functions: {{ .Values.x }}, include, toYaml, nindent
- Write _helpers.tpl for reusable template definitions
- Manage Helm dependencies in Chart.yaml with conditions
- Create Kustomize base + overlays structure for multiple environments
- Use strategic merge patches and JSON patches for modifications
- Generate ConfigMaps and Secrets with Kustomize generators
- Combine Helm + Kustomize using post-renderers or base generation
- Validate Helm values with values.schema.json to catch errors early
Summary
You’ve completed the GitOps & Deployments Toolkit! You now understand:
- ArgoCD: Application-centric GitOps with UI
- Argo Rollouts: Progressive delivery (canary, blue-green)
- Flux: Toolkit-based GitOps with image automation
- Helm & Kustomize: Package management and overlays
These tools form the foundation of modern Kubernetes deployment practices.
Next Steps
Continue to CI/CD Pipelines Toolkit where we’ll explore Dagger, Tekton, and Argo Workflows for building before deploying.
“The best config is the one you understand. The second best is the one that works. Helm and Kustomize help you get both.”