
Module 4.3: Secrets Management

Hands-On Lab Available: K8s Cluster (advanced, 40 min)

Complexity: [MEDIUM] - Critical CKS skill

Time to Complete: 45-50 minutes

Prerequisites: Module 4.2 (Pod Security Admission), RBAC basics


After completing this module, you will be able to:

  1. Configure etcd encryption at rest for Kubernetes Secrets
  2. Implement external secrets management using Vault or cloud provider secret stores
  3. Audit RBAC permissions to identify overly broad access to Secret resources
  4. Design a secrets management strategy that eliminates base64-only storage risks

Kubernetes Secrets store sensitive data like passwords, API keys, and certificates. By default, they’re only base64-encoded (not encrypted!) and accessible to anyone with RBAC permissions. Proper secrets management prevents credential leaks and privilege escalation.

CKS heavily tests secrets security practices.


┌─────────────────────────────────────────────────────────────┐
│ DEFAULT SECRETS SECURITY                                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ ⚠️ Base64 is NOT encryption!                                │
│ ─────────────────────────────────────────────────────────   │
│ $ echo "mysecretpassword" | base64                          │
│ bXlzZWNyZXRwYXNzd29yZAo=                                    │
│                                                             │
│ $ echo "bXlzZWNyZXRwYXNzd29yZAo=" | base64 -d               │
│ mysecretpassword                                            │
│                                                             │
│ Problems with default secrets:                              │
│ ├── Stored unencrypted in etcd                              │
│ ├── Visible to anyone with get secrets permission           │
│ ├── Appear in pod specs (kubectl describe)                  │
│ ├── May be logged in audit logs                             │
│ └── Mounted as plain text files in containers               │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Stop and think: You’ve perfectly configured RBAC so no unauthorized users can run kubectl get secrets. However, considering how Kubernetes natively stores and distributes these base64-encoded values, what underlying operational processes (like disaster recovery backups, centralized logging, or node administration) could still expose your plaintext passwords to an attacker who has zero API access?
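One concrete answer can be simulated locally: any plaintext etcd backup or JSON export that contains a Secret carries its base64 value, which decodes instantly. A minimal sketch (the file name and JSON fragment below are made up for illustration):

```shell
# Hypothetical fragment of an etcd backup / JSON export containing a Secret.
cat > /tmp/backup-fragment.json <<'EOF'
{"kind":"Secret","data":{"password":"c2VjcmV0cGFzczEyMw=="}}
EOF
# An attacker with file access needs no kubectl or API access at all:
grep -o '"password":"[^"]*"' /tmp/backup-fragment.json \
  | cut -d'"' -f4 | base64 -d
# -> secretpass123
```

This is why backup storage, log pipelines, and node disks all sit inside the trust boundary of your secrets.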

# From literal values
kubectl create secret generic db-creds \
--from-literal=username=admin \
--from-literal=password=secretpass123
# From files
kubectl create secret generic ssh-key \
--from-file=id_rsa=/path/to/id_rsa \
--from-file=id_rsa.pub=/path/to/id_rsa.pub
# From env file
kubectl create secret generic app-config \
--from-env-file=secrets.env
# Create TLS secret
kubectl create secret tls web-tls \
--cert=server.crt \
--key=server.key
# Create registry credential
kubectl create secret docker-registry regcred \
--docker-server=registry.example.com \
--docker-username=user \
--docker-password=password \
--docker-email=user@example.com

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: password

apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: db-creds
        # Optional: set specific permissions
        defaultMode: 0400
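What `defaultMode: 0400` means for the mounted file can be checked with plain Unix permissions (a local sketch using a throwaway file, not an actual mounted secret):

```shell
# 0400 = read-only for the owning user, no access for group/others.
touch /tmp/demo-secret && chmod 0400 /tmp/demo-secret
ls -l /tmp/demo-secret | cut -c1-10
# -> -r--------
```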
┌─────────────────────────────────────────────────────────────┐
│ ENV VARS vs VOLUME MOUNTS                                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Environment Variables:                                      │
│ ├── Visible in /proc/<pid>/environ                          │
│ ├── May leak to child processes                             │
│ ├── Often logged by applications                            │
│ └── Visible in 'docker inspect'                             │
│                                                             │
│ Volume Mounts:                                              │
│ ├── Files with restricted permissions                       │
│ ├── tmpfs (in-memory, not written to disk)                  │
│ ├── Auto-updated when secret changes                        │
│ └── Controlled access via file permissions                  │
│                                                             │
│ Best Practice: Always use volume mounts                     │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Pause and predict: You mount a secret as an environment variable (env.valueFrom.secretKeyRef) and the application crashes. The crash dump includes environment variables and gets logged to your centralized logging system. Who can now see the secret?
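The leak is easy to demonstrate locally: any process (and anything that captures its state, such as a crash handler) can read its own environment from /proc. A minimal sketch, assuming a Linux /proc filesystem (`DB_PASSWORD=hunter2` is a made-up value):

```shell
# Run a child process with a "secret" in its environment, then read it
# back the way a debugger, crash dump, or same-UID attacker would:
# straight from /proc.
DB_PASSWORD=hunter2 sh -c 'tr "\0" "\n" < /proc/self/environ' | grep DB_PASSWORD
# -> DB_PASSWORD=hunter2
```

Anyone with access to the crash dump or the logging system that ingested it can now read the secret.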

# Check API server configuration
ps aux | grep kube-apiserver | grep encryption-provider-config
# Or check the manifest
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep encryption
/etc/kubernetes/enc/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc - recommended for production
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      # identity is the fallback (unencrypted)
      - identity: {}
# Generate random 32-byte key
head -c 32 /dev/urandom | base64
# Output is a 44-character base64 string - use your own, never a sample key
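A quick sanity check on the key material: aescbc expects exactly 32 bytes, and 32 random bytes always base64-encode to 44 characters.

```shell
# Generate a key and verify its size before putting it in the config.
KEY=$(head -c 32 /dev/urandom | base64)
echo -n "$KEY" | wc -c              # 44 base64 characters
echo -n "$KEY" | base64 -d | wc -c  # 32 raw bytes
```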
/etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
    - command:
        - kube-apiserver
        # Add this flag
        - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
      volumeMounts:
        # Mount the encryption config
        - mountPath: /etc/kubernetes/enc
          name: enc
          readOnly: true
  volumes:
    - hostPath:
        path: /etc/kubernetes/enc
        type: DirectoryOrCreate
      name: enc
# Create a test secret
kubectl create secret generic test-encryption --from-literal=mykey=myvalue
# Read directly from etcd (on control plane)
ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-encryption \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key | hexdump -C
# If encrypted: You'll see random bytes, not readable text
# If NOT encrypted: You'll see "mykey" and "myvalue" in plain text
# After enabling encryption, re-encrypt all existing secrets
kubectl get secrets -A -o json | kubectl replace -f -

┌─────────────────────────────────────────────────────────────┐
│ ENCRYPTION PROVIDERS                                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ identity (default)                                          │
│ └── No encryption, plain storage                            │
│                                                             │
│ aescbc (recommended)                                        │
│ └── AES-CBC with PKCS#7 padding                             │
│     Strong, widely supported                                │
│                                                             │
│ aesgcm                                                      │
│ └── AES-GCM authenticated encryption                        │
│     Faster, must rotate keys every 200K writes              │
│                                                             │
│ kms                                                         │
│ └── External KMS provider (AWS KMS, Azure Key Vault)        │
│     Best for production, keys never touch etcd              │
│                                                             │
│ secretbox                                                   │
│ └── XSalsa20 + Poly1305                                     │
│     Strong, fixed nonce size                                │
│                                                             │
│ Order matters: First provider encrypts new secrets          │
│                All listed providers can decrypt             │
│                                                             │
└─────────────────────────────────────────────────────────────┘

# Only allow access to specific secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-config", "db-creds"] # Specific secrets only
    verbs: ["get"]

# DON'T DO THIS - grants access to ALL secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dangerous-role
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"] # Can read ALL secrets cluster-wide!
# Find who can access secrets
kubectl auth can-i get secrets --as=system:serviceaccount:default:default
kubectl auth can-i list secrets --as=system:serviceaccount:kube-system:default
# List all roles that can access secrets
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]?.resources[]? == "secrets") | .metadata.name'
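Note that matching `resources == "secrets"` misses roles that grant `resources: ["*"]`, which implicitly include Secrets. A stricter jq filter is sketched below, demonstrated on a minimal hand-written sample (in a real cluster you would pipe `kubectl get clusterroles -o json` into the same filter; jq availability is assumed):

```shell
# jq filter that flags roles granting "secrets" explicitly OR via "*".
FILTER='.items[]
  | select(any(.rules[]?; (.resources // []) | any(. == "secrets" or . == "*")))
  | .metadata.name'
# Demonstrated on a made-up sample instead of live cluster output:
echo '{"items":[
  {"metadata":{"name":"wildcard-admin"},"rules":[{"resources":["*"]}]},
  {"metadata":{"name":"pod-reader"},"rules":[{"resources":["pods"]}]},
  {"metadata":{"name":"secret-reader"},"rules":[{"resources":["secrets"]}]}]}' \
  | jq -r "$FILTER"
# -> wildcard-admin
#    secret-reader
```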

Pause and predict: You enable encryption at rest for secrets using aescbc. You then use etcdctl get to read a secret directly from etcd. Will you see the plain text or encrypted data? What about secrets that were created before you enabled encryption?

apiVersion: v1
kind: Pod
metadata:
  name: no-automount-pod
spec:
  automountServiceAccountToken: false # Don't mount SA token
  containers:
    - name: app
      image: nginx

apiVersion: v1
kind: Pod
metadata:
  name: readonly-secrets
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true # Prevent modification
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
        defaultMode: 0400 # Read-only for owner

# Step 1: Create encryption config directory
sudo mkdir -p /etc/kubernetes/enc
# Step 2: Generate encryption key
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Step 3: Create encryption config
sudo tee /etc/kubernetes/enc/encryption-config.yaml << EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
# Step 4: Edit API server manifest
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# Add to command:
# - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
# Add volume mount:
# volumeMounts:
# - mountPath: /etc/kubernetes/enc
# name: enc
# readOnly: true
# Add volume:
# volumes:
# - hostPath:
# path: /etc/kubernetes/enc
# type: DirectoryOrCreate
# name: enc
# Step 5: Wait for API server to restart
kubectl get nodes # Wait until this works
# Step 6: Re-encrypt existing secrets
kubectl get secrets -A -o json | kubectl replace -f -
# Find ServiceAccount with too much secret access
kubectl get rolebindings,clusterrolebindings -A -o json | \
jq -r '.items[] | select(.roleRef.name | contains("secret")) |
"\(.metadata.namespace // "cluster")/\(.metadata.name) -> \(.roleRef.name)"'
# Create restrictive role
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-secret-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-config"] # Only this secret
    verbs: ["get"]
EOF
# Create secret containing certificate
kubectl create secret generic tls-cert \
--from-file=tls.crt=./server.crt \
--from-file=tls.key=./server.key \
-n production
# Use in pod with volume mount
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: production
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: tls
          mountPath: /etc/tls
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: tls-cert
        defaultMode: 0400
EOF

┌─────────────────────────────────────────────────────────────┐
│ EXTERNAL SECRETS SOLUTIONS                                  │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ HashiCorp Vault                                             │
│ └── Industry standard, rich features                        │
│     Vault Agent Injector for Kubernetes                     │
│                                                             │
│ AWS Secrets Manager + External Secrets Operator             │
│ └── Native AWS integration                                  │
│     Syncs AWS secrets to Kubernetes                         │
│                                                             │
│ Azure Key Vault                                             │
│ └── Azure-native solution                                   │
│     CSI driver available                                    │
│                                                             │
│ Sealed Secrets (Bitnami)                                    │
│ └── Encrypt secrets for Git storage                         │
│     Only cluster can decrypt                                │
│                                                             │
│ Note: External solutions are NOT on CKS exam                │
│       but understanding them shows security maturity        │
│                                                             │
└─────────────────────────────────────────────────────────────┘

  • Base64 is just encoding, not encryption. Anyone can decode it. The CKS exam tests whether you understand this critical distinction.

  • etcd stores secrets in plain text by default. Without encryption at rest, anyone with etcd access can read all cluster secrets.

  • Secrets mounted as volumes are stored in tmpfs (memory), not on disk. They’re more secure than environment variables.

  • The encryption config order matters. New secrets are encrypted with the first provider. All listed providers can decrypt, allowing key rotation.
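Provider order is also what makes key rotation workable. A sketch of a config mid-rotation (key names and placeholder values are illustrative): the new key is listed first so it encrypts all new writes, while the old key stays listed so existing data still decrypts; once every secret has been rewritten through the API server, the old key can be dropped.

```yaml
# Mid-rotation sketch: key2 (new) listed first, key1 (old) kept for reads.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key2 # new key: encrypts all writes from now on
              secret: <new-base64-encoded-32-byte-key>
            - name: key1 # old key: decrypt-only during rotation
              secret: <old-base64-encoded-32-byte-key>
      - identity: {}
```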


| Mistake | Why It Hurts | Solution |
|---|---|---|
| Thinking base64 is secure | Data exposed | Enable encryption at rest |
| Using env vars for secrets | Leaks to logs | Use volume mounts |
| Broad RBAC for secrets | Any pod can read | Use `resourceNames` |
| Not re-encrypting after enabling | Old secrets unencrypted | Run `kubectl replace` |
| Secrets in Git | Permanent exposure | Use Sealed Secrets |

  1. A junior developer commits a Kubernetes Secret manifest to Git. The manifest contains data: password: bXlwYXNzd29yZA==. They say “it’s fine, the password is encrypted.” Why is this a security incident, and what’s the immediate remediation?

    Answer Base64 is encoding, not encryption -- anyone can decode it (`echo "bXlwYXNzd29yZA==" | base64 -d` reveals "mypassword"). This is a credential leak. Immediate remediation: (1) Rotate the compromised password immediately. (2) Remove the secret from Git history (not just the latest commit -- use `git filter-branch` or BFG Repo Cleaner). (3) Consider the password permanently compromised since Git history persists in forks and caches. Prevention: use SealedSecrets or SOPS to encrypt secrets before committing, or use external secret managers (Vault, AWS Secrets Manager) that store references rather than values.
  2. During a security audit, you discover that application pods use env.valueFrom.secretKeyRef to inject database passwords. The auditor flags this as a risk. The developer says “environment variables are standard practice.” Who is right, and what’s the concrete attack scenario?

    Answer The auditor is right. Environment variables are visible in `/proc/<pid>/environ`, can leak to child processes, appear in crash dumps, and are often captured in logging systems and error reporting tools. Concrete attack: if the application crashes and the error handler logs environment variables (common in frameworks like Django, Rails), the database password ends up in the logging system accessible to anyone with log access. Volume mounts are preferred because they're stored in tmpfs (memory-only), respect file permissions, auto-update when secrets change, and don't leak through `/proc` or crash dumps. Mount secrets as files and read them at runtime.
  3. You enable encryption at rest for secrets with the aescbc provider. A compliance auditor asks you to prove all secrets are encrypted in etcd. You run etcdctl get /registry/secrets/default/db-password and see encrypted data. But when you check /registry/secrets/kube-system/coredns-token, you see plain text. What happened?

    Answer Enabling encryption at rest only affects newly created or updated secrets. Existing secrets created before encryption was enabled remain stored in plain text. The `db-password` was created after encryption, so it's encrypted. The `coredns-token` existed before and was never re-written. Fix: re-encrypt all existing secrets by reading and replacing them: `kubectl get secrets -A -o json | kubectl replace -f -`. This forces each secret to be re-written through the API server, which now encrypts them. Always verify with `etcdctl` after re-encryption. The `identity` provider in the encryption config serves as a fallback to read these old unencrypted secrets.
  4. Your cluster stores database credentials, API keys, and TLS certificates as Kubernetes Secrets. An attacker gains get secrets RBAC permission in the production namespace. What is the blast radius, and what layers of defense should have limited it?

    Answer Blast radius: the attacker can read every secret in the `production` namespace -- all database passwords, API keys, and TLS private keys. They can decode base64 values instantly. Defense layers that should have limited this: (1) Use `resourceNames` in RBAC to restrict access to specific secrets, not all secrets in the namespace. (2) Enable encryption at rest so secrets are encrypted in etcd backups. (3) Use an external secrets manager (Vault, AWS Secrets Manager) so Kubernetes only stores references, not actual values. (4) Mount secrets as volumes (not env vars) to limit exposure paths. (5) Audit secret access with audit logging to detect unauthorized reads. No single layer is sufficient -- secrets management requires defense in depth.

Task: Enable encryption at rest and verify it works.

# Step 1: Check current encryption status
ps aux | grep kube-apiserver | grep encryption-provider-config || echo "Not configured"
# Step 2: Create test secret BEFORE encryption
kubectl create secret generic pre-encryption --from-literal=test=beforeencryption
# Step 3: Create encryption config (on control plane node)
sudo mkdir -p /etc/kubernetes/enc
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
sudo tee /etc/kubernetes/enc/encryption-config.yaml << EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
# Step 4: Backup API server manifest
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
# Step 5: Edit API server manifest (add encryption config)
# Add: --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
# Add volume and volumeMount for /etc/kubernetes/enc
# Step 6: Wait for API server restart
sleep 30
kubectl get nodes
# Step 7: Create test secret AFTER encryption
kubectl create secret generic post-encryption --from-literal=test=afterencryption
# Step 8: Re-encrypt pre-existing secret
kubectl get secret pre-encryption -o json | kubectl replace -f -
# Step 9: Verify in etcd (if you have access)
# Encrypted secrets show random bytes, not plain text
# Cleanup
kubectl delete secret pre-encryption post-encryption

Success criteria: Understand encryption configuration and verification.


Secret Security Problems:

  • Base64 is NOT encryption
  • etcd stores plain text by default
  • Environment variables leak

Best Practices:

  • Enable encryption at rest (aescbc)
  • Use volume mounts, not env vars
  • Restrict RBAC with resourceNames
  • Re-encrypt after enabling encryption

Encryption Setup:

  • Create EncryptionConfiguration
  • Add API server flag
  • Restart API server
  • Re-encrypt existing secrets

Exam Tips:

  • Know encryption config format
  • Understand provider order
  • Be able to verify encryption works

Module 4.4: Runtime Sandboxing - gVisor and Kata Containers for container isolation.