
Module 4.3: StorageClasses & Dynamic Provisioning

Hands-On Lab Available: Kubernetes cluster, intermediate, 35 min (runs on Killercoda).

Complexity: [MEDIUM] - Automation of storage provisioning

Time to Complete: 35-45 minutes

Prerequisites: Module 4.2 (PV & PVC), Module 1.2 (CSI)


After this module, you will be able to:

  • Create StorageClasses for dynamic provisioning with cloud and local provisioners
  • Configure binding modes (Immediate vs WaitForFirstConsumer) and explain when each is appropriate
  • Implement volume expansion on existing PVCs and explain the requirements
  • Debug dynamic provisioning failures by checking StorageClass, provisioner pods, and events

In Module 4.2, you manually created PersistentVolumes before creating PersistentVolumeClaims. This doesn't scale - imagine an admin creating hundreds of PVs for every storage request! StorageClasses enable dynamic provisioning: create a PVC, and Kubernetes automatically provisions the underlying storage. The CKA exam tests both your understanding of StorageClasses and your ability to configure dynamic provisioning.

The Vending Machine Analogy

Think of static provisioning like ordering custom furniture - someone has to build it before you can use it. Dynamic provisioning is like a vending machine: you select what you want (StorageClass), insert your request (PVC), and out comes your storage (PV). The StorageClass is the vending machine - it knows how to produce different types of storage on demand.


By the end of this module, you’ll be able to:

  • Understand how StorageClasses enable dynamic provisioning
  • Create and configure StorageClasses
  • Set a default StorageClass for the cluster
  • Use parameters to customize provisioned storage
  • Understand volume binding modes
  • Troubleshoot dynamic provisioning issues

  • Cloud clusters have defaults: GKE, EKS, and AKS all come with pre-configured default StorageClasses that provision cloud-native storage
  • kind/minikube have provisioners too: Even local clusters include dynamic provisioners (rancher.io/local-path for kind, k8s.io/minikube-hostpath for minikube)
  • StorageClasses are mostly immutable: provisioner, parameters, reclaimPolicy, and volumeBindingMode can't be changed after creation - you must delete and recreate the StorageClass. Annotations (such as the default-class marker) and allowVolumeExpansion can be updated in place.

──────────────────────────────────────────────────────────────────────
 Static vs Dynamic Provisioning

 STATIC (Manual)                 DYNAMIC (Automatic)
 ───────────────                 ───────────────────

 1. Admin creates PV             1. Admin creates StorageClass
         │                               │
         ▼                               ▼
 2. Dev creates PVC              2. Dev creates PVC
         │                               │
         ▼                               ▼
 3. Kubernetes binds             3. Provisioner creates PV
    PVC to existing PV                   │
                                         ▼
                                 4. Kubernetes binds PVC to new PV

 Pro: Full control               Pro: Self-service, scalable
 Con: Admin bottleneck           Con: Less control per volume
──────────────────────────────────────────────────────────────────────
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # Optional
provisioner: kubernetes.io/aws-ebs          # Who creates the storage
parameters:                                 # Provisioner-specific settings
  type: gp3
  iopsPerGB: "10"
reclaimPolicy: Delete                       # What happens when PVC deleted
volumeBindingMode: WaitForFirstConsumer     # When to provision
allowVolumeExpansion: true                  # Can resize later?
mountOptions:                               # Mount options for volumes
  - debug
Provisioner                Cloud/Platform   Storage Type
kubernetes.io/aws-ebs      AWS              EBS volumes
kubernetes.io/gce-pd       GCP              Persistent Disk
kubernetes.io/azure-disk   Azure            Managed Disk
kubernetes.io/azure-file   Azure            Azure Files
ebs.csi.aws.com            AWS (CSI)        EBS via CSI
pd.csi.storage.gke.io      GCP (CSI)        PD via CSI
rancher.io/local-path      kind             Local path
k8s.io/minikube-hostpath   minikube         Host path

Note: the in-tree kubernetes.io/* cloud provisioners are deprecated; modern clusters use the CSI drivers.

AWS EBS (in-tree):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

AWS EBS (CSI):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ebs
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iopsPerGB: "50"
  throughput: "125"
  encrypted: "true"
  kmsKeyId: "arn:aws:kms:us-east-1:123456789:key/abc-123"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

GCP Persistent Disk (CSI):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: regional-pd   # For HA
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Azure Disk (CSI):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium
provisioner: disk.csi.azure.com
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

kind (uses local-path-provisioner):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

minikube:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate

Only one StorageClass should be default. Mark it with an annotation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # The magic annotation
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3

Or patch an existing one:

k patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

k get sc
# NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
# standard (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer
# fast-ssd             kubernetes.io/aws-ebs   Delete          Immediate

When a PVC doesn’t specify storageClassName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # No storageClassName specified - uses default!

Behavior:

  • If default StorageClass exists → Uses default, triggers dynamic provisioning
  • If no default exists → PVC stays Pending until matching PV appears
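Conversely, a PVC can opt in to a specific class by naming it explicitly. A minimal sketch - the claim name is illustrative and fast-ssd is just the class from the earlier example; substitute one that exists in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim              # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-ssd    # explicit class - the default is ignored
```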

To explicitly avoid dynamic provisioning:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-only-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""   # Empty string = no dynamic provisioning

──────────────────────────────────────────────────────────────────────
 Volume Binding Modes

 IMMEDIATE                        WAITFORFIRSTCONSUMER
 ─────────                        ────────────────────

 PVC Created                      PVC Created
     │                                │
     ▼                                ▼
 PV Provisioned                   PVC stays Pending
 immediately                          │
     │                                │
     │                            Pod scheduled
     │                                │
     │                                ▼
     │                            PV Provisioned
     │                            (in same zone as pod)
     │                                │
     ▼                                ▼
 Pod scheduled                    Pod can use storage
 (may fail if wrong zone!)
──────────────────────────────────────────────────────────────────────

Problem with Immediate:

Node: us-east-1a          Node: us-east-1b
┌─────────────┐           ┌─────────────┐
│             │           │  Pod        │  ← Scheduler puts pod here
│             │           │  (needs     │
│             │           │  storage)   │
└─────────────┘           └─────────────┘
EBS Volume                ✗ Volume in wrong zone!
(provisioned              ✗ Pod can't start!
immediately in 1a)

Solution with WaitForFirstConsumer:

Node: us-east-1a          Node: us-east-1b
┌─────────────┐           ┌─────────────┐
│             │           │  Pod        │  ← Scheduler puts pod here
│             │           │  (needs     │
│             │           │  storage)   │
└─────────────┘           └─────────────┘
                          EBS Volume       ✓ Volume in correct zone!
                          (provisioned     ✓ Pod starts successfully!
                          in 1b AFTER
                          pod scheduled)
Mode                   Use Case
Immediate              NFS, distributed storage, zone-less storage
WaitForFirstConsumer   Zone-specific storage (EBS, GCE PD, Azure Disk), local storage
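Binding mode is not the only topology control: a StorageClass can also restrict where volumes may be provisioned with the allowedTopologies field. A sketch, assuming the AWS EBS CSI driver - the class name and zone values are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned-ssd               # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:              # only provision in these zones
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-east-1a
    - us-east-1b
```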

Pause and predict: You have a StorageClass with volumeBindingMode: Immediate for AWS EBS. A developer creates a PVC, and a PV is immediately provisioned in us-east-1a. The scheduler then places the pod on a node in us-east-1b. What happens when the pod tries to start? How would changing the binding mode prevent this?


apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true   # Must be true to resize PVCs
parameters:
  type: gp3

# Original PVC with 10Gi
k get pvc my-claim
# NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS
# my-claim   Bound    pv-001   10Gi       RWO            expandable

# Edit to request more space
k patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Or edit manually
k edit pvc my-claim
# Change spec.resources.requests.storage to 20Gi
─────────────────────────────────────────────────────────────────────
 PVC Expansion Process

 1. Edit PVC      ──►  2. Controller resizes  ──►  3. Filesystem
    (increase          underlying storage           expansion
    size)              (e.g., EBS volume)           (when mounted)

 Status shows:
 - "Resizing" - storage backend being resized
 - "FileSystemResizePending" - waiting for pod to mount

 ⚠️ Note: Expansion requires pod restart for some provisioners
─────────────────────────────────────────────────────────────────────
k describe pvc my-claim
# Look for conditions:
# Conditions:
#   Type                      Status
#   ----                      ------
#   FileSystemResizePending   True     # Waiting for filesystem resize
#   Resizing                  True     # Backend resize in progress

Important: You can only increase PVC size. Shrinking is not supported!

Stop and think: A PVC was created with a StorageClass that has allowVolumeExpansion: false. The database is running out of space. Can you change the StorageClass to allowVolumeExpansion: true and then expand the PVC? Or do you need to recreate the PVC? What would your recovery strategy be?


Parameters are provisioner-specific. Common examples:

AWS EBS (CSI):

parameters:
  type: gp3               # gp2, gp3, io1, io2, st1, sc1
  iopsPerGB: "50"         # For gp3/io1/io2
  throughput: "250"       # For gp3 (MiB/s)
  encrypted: "true"
  kmsKeyId: "arn:aws:kms:..."
  fsType: ext4            # ext4, xfs

GCP PD (CSI):

parameters:
  type: pd-ssd                  # pd-standard, pd-ssd, pd-balanced
  replication-type: none        # none, regional-pd
  disk-encryption-kms-key: "projects/..."
  fsType: ext4

Azure Disk (CSI):

parameters:
  storageaccounttype: Premium_LRS   # Standard_LRS, Premium_LRS, StandardSSD_LRS
  kind: Managed                     # Managed, Dedicated, Shared
  cachingMode: ReadOnly
  fsType: ext4

Mount options can also be set on the StorageClass and apply to every volume it provisions:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: with-mount-options
provisioner: kubernetes.io/aws-ebs
mountOptions:
  - debug
  - noatime
  - nodiratime
parameters:
  type: gp3
  fsType: ext4

Mistake                             Problem                     Solution
Multiple default StorageClasses     Unpredictable behavior      Only one should be default
Wrong provisioner for platform      PVC stays Pending forever   Use the correct provisioner for your cloud
Immediate mode with zonal storage   Pods can't mount volumes    Use WaitForFirstConsumer
Forgetting allowVolumeExpansion     Can't resize PVCs later     Set it to true unless intentional
Wrong parameters for provisioner    Provisioning fails          Check provisioner documentation
Trying to shrink a PVC              Not supported               Only expansion works

Pause and predict: Your cluster has two StorageClasses both annotated with storageclass.kubernetes.io/is-default-class: "true". A developer creates a PVC without specifying a storageClassName. What happens? Which StorageClass is used?


A developer creates a PVC in a cluster that has a default StorageClass called gp3-standard. The developer intended to use a manually created PV, so they created the PVC without specifying storageClassName. Instead of binding to the manual PV, a new 10Gi EBS volume appears in AWS. The developer is confused and asks why Kubernetes ignored the pre-created PV. What happened, and how should the PVC be configured to bind to the manual PV instead?

Answer

When storageClassName is omitted from a PVC, Kubernetes uses the default StorageClass (gp3-standard), which triggers dynamic provisioning — it creates a brand new PV via the EBS CSI driver instead of looking for existing manual PVs. To bind to a manually created PV, the developer must explicitly set storageClassName: "" (empty string) on the PVC. This tells Kubernetes to skip dynamic provisioning entirely and only look for PVs that also have no StorageClass. The manual PV must also have storageClassName: "" for the match to work. This is one of the most common misunderstandings about StorageClasses.
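A minimal sketch of a correctly matched static pair - the names, size, and hostPath backend are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""          # no class on the PV...
  hostPath:
    path: /mnt/data             # illustrative backend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""          # ...and on the PVC, so the default class is skipped
```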

A team in a multi-AZ AWS cluster uses a StorageClass with volumeBindingMode: Immediate. A developer creates a PVC, and a PV backed by an EBS volume is provisioned in us-east-1a. Later, the scheduler places the pod on a node in us-east-1c. The pod is stuck in ContainerCreating with an attach error. What caused the mismatch, what binding mode should have been used, and why does this problem not affect NFS-backed StorageClasses?

Answer

With Immediate binding mode, the PV is provisioned as soon as the PVC is created, before the scheduler decides where to place the pod. EBS volumes are zone-specific — a volume in us-east-1a cannot be attached to a node in us-east-1c. The fix is to use volumeBindingMode: WaitForFirstConsumer, which delays provisioning until the pod is scheduled. The provisioner then creates the EBS volume in the same AZ as the scheduled node. NFS is not affected because NFS is network storage accessible from any node in any zone — it has no zone affinity, so Immediate binding works fine.

A production database is running out of disk space. The PVC uses a StorageClass with allowVolumeExpansion: true. The admin patches the PVC to increase from 50Gi to 100Gi. After 10 minutes, kubectl get pvc still shows 50Gi capacity, but kubectl describe pvc shows a condition FileSystemResizePending. The admin panics. Is something broken? What needs to happen next?

Answer

Nothing is broken — this is the expected two-phase expansion process. Phase 1 (backend resize) already completed: the underlying cloud disk was resized to 100Gi. Phase 2 (filesystem resize) is pending because the filesystem inside the volume needs to be grown, which for many storage backends requires the volume to be mounted by a pod. If the pod is running, the kubelet will expand the filesystem on the next mount. If the pod is not running, you may need to start a pod that mounts the PVC. After the filesystem resize completes, the FileSystemResizePending condition clears and kubectl get pvc will show 100Gi. Some CSI drivers support online expansion (no pod restart needed), while others require a pod restart.

An admin accidentally marks two StorageClasses as default: gp3-fast and standard-hdd. A developer creates a PVC without specifying a storageClassName. What happens, and how should the admin fix this?

Answer

With multiple default StorageClasses, the behavior is unpredictable — the admission controller may select either one, or in some Kubernetes versions, the PVC creation may fail or a warning is emitted. The Kubernetes documentation explicitly states only one StorageClass should be marked as default. The admin should fix this by removing the default annotation from one of the StorageClasses: kubectl patch sc standard-hdd -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'. Best practice is to verify with kubectl get sc that exactly one StorageClass shows (default).

After creating a StorageClass with reclaimPolicy: Delete, the admin realizes it should be Retain for production use. Existing PVCs have already provisioned PVs using this StorageClass. Can the admin change the reclaimPolicy on the StorageClass? What happens to the already-provisioned PVs?

Answer

StorageClasses are immutable once created — you cannot change fields like reclaimPolicy or parameters. The admin must delete and recreate the StorageClass with the corrected policy. However, already-provisioned PVs keep their original reclaim policy — changing the StorageClass does not retroactively update existing PVs. To protect existing production data, the admin should manually patch each PV’s reclaim policy: kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'. New PVCs will use the recreated StorageClass with the correct Retain policy, but every existing PV must be patched individually.

You are designing StorageClasses for a cluster used by both dev and production teams. Dev needs cheap, ephemeral storage. Production needs encrypted, high-IOPS storage with data protection. Design the two StorageClasses and explain your choice of reclaimPolicy, volumeBindingMode, and whether to enable volume expansion for each.

Answer

For dev: Use reclaimPolicy: Delete (auto-cleanup when PVCs are deleted, no orphaned volumes), volumeBindingMode: WaitForFirstConsumer (avoid zone mismatch), allowVolumeExpansion: true (devs may need to grow storage during experimentation), and cheap storage parameters (e.g., type: gp3 with default IOPS). For production: Use reclaimPolicy: Retain (protect data even if PVC is accidentally deleted), volumeBindingMode: WaitForFirstConsumer (same zone rationale), allowVolumeExpansion: true (databases grow over time), and add encrypted: "true" plus a KMS key in parameters for encryption at rest. The Delete policy for dev prevents storage cost leaks from forgotten PVCs, while Retain for production ensures a human must explicitly approve data deletion.
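Sketched as manifests - the names are placeholders and the parameters assume the AWS EBS CSI driver; adapt them to your provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dev-gp3                 # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                     # cheap, default IOPS
reclaimPolicy: Delete           # auto-cleanup, no orphaned dev volumes
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prod-encrypted          # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iopsPerGB: "50"               # high IOPS for databases
  encrypted: "true"
  kmsKeyId: "arn:aws:kms:..."   # your production KMS key
reclaimPolicy: Retain           # a human must approve data deletion
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```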


Set up a StorageClass and verify dynamic provisioning works correctly.

You need a cluster with a working storage provisioner. Kind and minikube have built-in provisioners.

Task 1: Inspect Existing StorageClasses

# See what's available
k get sc

# Check if there's a default
k get sc -o custom-columns='NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations.storageclass\.kubernetes\.io/is-default-class'
Task 2: Create a StorageClass

cat <<EOF | k apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: rancher.io/local-path   # For kind; change for your cluster
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
Task 3: Create a PVC Using the StorageClass

cat <<EOF | k apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: fast
EOF

# Check status - should be Pending (waiting for consumer)
k get pvc dynamic-pvc

Task 4: Create Pod to Trigger Provisioning

cat <<EOF | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dynamic-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'echo "Dynamically provisioned!" > /data/message; sleep 3600']
    volumeMounts:
    - name: storage
      mountPath: /data
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: dynamic-pvc
EOF
Task 5: Verify Dynamic Provisioning
# PVC should now be Bound
k get pvc dynamic-pvc
# STATUS: Bound
# A PV was automatically created
k get pv
# Should see a dynamically named PV like pvc-xxxxx
# Check the PV details
k get pv -o jsonpath='{.items[0].spec.storageClassName}'
# Should show: fast
# Verify pod is running
k exec dynamic-pod -- cat /data/message

Task 6: Test Default StorageClass (Optional)

# Make our StorageClass the default
k patch sc fast -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Create PVC without storageClassName
cat <<EOF | k apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  # No storageClassName - uses default!
EOF

# Check it uses the default class
k get pvc default-pvc -o jsonpath='{.spec.storageClassName}'
# Should show: fast
Success checklist:

  • StorageClass created successfully
  • PVC stays Pending until pod created (WaitForFirstConsumer)
  • PV automatically created when pod scheduled
  • Pod can write to dynamically provisioned storage
  • Understand the link between SC → PVC → PV
Cleanup:
k delete pod dynamic-pod
k delete pvc dynamic-pvc default-pvc
k delete sc fast

Drill 1: Find the Default StorageClass
# Task: List all StorageClasses and identify the default
k get sc

Drill 2: Create Basic StorageClass (2 min)

# Task: Create StorageClass "slow" with provisioner rancher.io/local-path
# reclaimPolicy: Retain
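One possible solution (a sketch):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: rancher.io/local-path
reclaimPolicy: Retain
```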
Drill 3: Set the Default StorageClass
# Task: Make StorageClass "standard" the default
# Use annotation: storageclass.kubernetes.io/is-default-class: "true"

Drill 4: PVC with Specific StorageClass (2 min)

# Task: Create PVC "data-pvc" requesting 5Gi with StorageClass "fast"
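One possible solution (the task doesn't specify access modes; ReadWriteOnce is assumed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce       # assumed - not specified in the task
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast
```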

Drill 5: PVC Without Dynamic Provisioning (2 min)

# Task: Create PVC that won't use any StorageClass
# Hint: storageClassName: ""
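One possible solution (name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""    # empty string disables dynamic provisioning
```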
Drill 6: Diagnose a Pending PVC
# Task: Diagnose why a PVC is stuck in Pending
k describe pvc <name>
# Check Events section for errors
Drill 7: Enable Volume Expansion
# Task: Create StorageClass with volume expansion enabled
# Key field: allowVolumeExpansion: true
Drill 8: Check Binding Mode
# Task: Check the volumeBindingMode of StorageClass "standard"
k get sc standard -o jsonpath='{.volumeBindingMode}'

Continue to Module 4.4: Volume Snapshots & Cloning to learn about backup and data protection features.