
Module 0.2: Security Lab Setup

Hands-On Lab Available: K8s Cluster (advanced, ~35 min) — launches in Killercoda.

Complexity: [MEDIUM] - Multiple tools to install

Time to Complete: 45-60 minutes

Prerequisites: Working Kubernetes cluster (from CKA), kubectl configured


After completing this module, you will be able to:

  1. Deploy a security-focused Kubernetes lab with Trivy, Falco, and kube-bench installed
  2. Configure cluster components for security testing and vulnerability scanning
  3. Diagnose common lab setup issues with security tool installations
  4. Create reproducible lab environments for practicing CKS exam scenarios

CKS requires hands-on practice with security tools. You can’t learn Trivy from documentation alone—you need to scan images, see vulnerabilities, and practice remediation. Same with Falco: writing rules requires a running instance generating alerts.

This module builds your security lab: a cluster equipped with all tools you’ll encounter on the exam.


┌─────────────────────────────────────────────────────────────┐
│                      CKS SECURITY LAB                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                 Kubernetes Cluster                  │    │
│  │                                                     │    │
│  │  Security Tools Deployed:                           │    │
│  │  ┌─────────┐  ┌─────────┐  ┌───────────┐            │    │
│  │  │  Falco  │  │  Trivy  │  │ kube-bench│            │    │
│  │  │(runtime)│  │(scanner)│  │(CIS audit)│            │    │
│  │  └─────────┘  └─────────┘  └───────────┘            │    │
│  │                                                     │    │
│  │  Security Features Enabled:                         │    │
│  │  ┌─────────┐  ┌─────────┐  ┌───────────┐            │    │
│  │  │AppArmor │  │ seccomp │  │   Audit   │            │    │
│  │  │profiles │  │profiles │  │  Logging  │            │    │
│  │  └─────────┘  └─────────┘  └───────────┘            │    │
│  │                                                     │    │
│  │  Vulnerable Apps (for practice):                    │    │
│  │  ┌─────────────────────────────────────────┐        │    │
│  │  │ Intentionally insecure deployments      │        │    │
│  │  │ for scanning and hardening practice     │        │    │
│  │  └─────────────────────────────────────────┘        │    │
│  │                                                     │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Stop and think: Why do you think audit logging is not enabled by default in Kubernetes? What trade-off is being made, and why is enabling it essential for CKS?

Option 1: Kind Cluster (Recommended for Learning)

For most CKS study, a kind cluster with security tools works well:

Terminal window
# Create kind cluster with audit logging enabled
cat <<EOF > kind-cks.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        audit-policy-file: /etc/kubernetes/audit-policy.yaml
        audit-log-path: /var/log/kubernetes/audit.log
        audit-log-maxage: "30"
        audit-log-maxbackup: "3"
        audit-log-maxsize: "100"
      extraVolumes:
      - name: audit-policy
        hostPath: /etc/kubernetes/audit-policy.yaml
        mountPath: /etc/kubernetes/audit-policy.yaml
        readOnly: true
        pathType: File
      - name: audit-logs
        hostPath: /var/log/kubernetes
        mountPath: /var/log/kubernetes
        pathType: DirectoryOrCreate
  extraMounts:
  - hostPath: ./audit-policy.yaml
    containerPath: /etc/kubernetes/audit-policy.yaml
    readOnly: true
  - hostPath: ./audit-logs
    containerPath: /var/log/kubernetes
- role: worker
- role: worker
EOF
# Create the audit log directory on the host
mkdir -p audit-logs
# Create basic audit policy
cat <<EOF > audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Request
  resources:
  - group: ""
    resources: ["pods"]
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
  resources:
  - group: ""
    resources: ["endpoints", "services"]
- level: Metadata
  omitStages:
  - RequestReceived
EOF
# Create the cluster
kind create cluster --name cks-lab --config kind-cks.yaml
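Once the cluster is up, confirm that audit events are actually being written. With the `extraMounts` above, the log surfaces on the host under `./audit-logs/`. The sketch below demonstrates the filtering step against two hand-written sample entries (abbreviated and invented) so it can be tried without a cluster; on the real cluster, point the same grep at the live log.

```shell
# On the real cluster the live log is on the host:
#   tail -f ./audit-logs/audit.log
# Each audit event is one JSON object per line. Sample entries (abbreviated, invented):
cat <<'EOF' > sample-audit.log
{"kind":"Event","verb":"get","objectRef":{"resource":"secrets","name":"db-creds"},"user":{"username":"admin"}}
{"kind":"Event","verb":"list","objectRef":{"resource":"pods"},"user":{"username":"admin"}}
EOF
# Pull out secret access events - the Metadata rule in the policy records these
grep '"resource":"secrets"' sample-audit.log
rm sample-audit.log
```

If the grep finds nothing on the real log, the policy file was likely not mounted before the API server started.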

Option 2: Kubeadm Cluster (Closer to Exam)


If you have a kubeadm cluster from CKA practice, add security configurations:

Terminal window
# Enable audit logging on existing cluster
# Edit /etc/kubernetes/manifests/kube-apiserver.yaml on control plane
# Add these flags to the API server:
# --audit-policy-file=/etc/kubernetes/audit-policy.yaml
# --audit-log-path=/var/log/kubernetes/audit.log
# --audit-log-maxage=30
# --audit-log-maxbackup=3
# --audit-log-maxsize=100
# Create the audit policy file
sudo mkdir -p /etc/kubernetes
sudo tee /etc/kubernetes/audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods"]
  verbs: ["create", "delete"]
- level: Metadata
  omitStages:
  - RequestReceived
EOF
# Create log directory
sudo mkdir -p /var/log/kubernetes
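One detail worth internalizing when writing these policies: rules are evaluated top to bottom, and the first matching rule decides the audit level for an event. A hypothetical two-rule sketch of why order matters:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Matched first: secret access is recorded at Metadata level...
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# ...so this broader None rule never suppresses secret events.
# Swap the two rules and secret access would not be audited at all.
- level: None
  resources:
  - group: ""
```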

Terminal window
# Install Trivy CLI
# On Ubuntu/Debian
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
# On macOS
brew install trivy
# Verify installation
trivy --version
# Test scan
trivy image nginx:latest
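A full scan of `nginx:latest` produces a long report; Trivy's `--severity`, `--ignore-unfixed`, and `--exit-code` flags make the output actionable. A sketch, guarded so it degrades gracefully on machines where Trivy isn't installed yet:

```shell
if command -v trivy >/dev/null 2>&1; then
  # Show only HIGH and CRITICAL findings, skipping vulnerabilities with no fix available yet
  trivy image --severity HIGH,CRITICAL --ignore-unfixed nginx:latest
  # Exit non-zero when CRITICAL findings remain - handy as a CI gate
  trivy image --exit-code 1 --severity CRITICAL nginx:latest || echo "CRITICAL vulnerabilities found"
else
  echo "trivy not installed - see the installation steps above"
fi
```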
Terminal window
# Install Falco using Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# Install Falco with modern eBPF driver
helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=modern_ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true
# For kind clusters, use kernel module driver instead
# helm install falco falcosecurity/falco \
#   --namespace falco \
#   --create-namespace \
#   --set driver.kind=kmod
# Verify Falco is running
kubectl get pods -n falco
# Check Falco logs
kubectl logs -n falco -l app.kubernetes.io/name=falco
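To confirm Falco is detecting anything at all, trigger one of its stock rules: reading /etc/shadow inside a container fires the default sensitive-file rule. A hedged sketch (the pod name `falco-trigger` is an arbitrary choice, the exact rule wording varies by Falco version, and the whole thing is guarded so it's a no-op without a reachable cluster):

```shell
if kubectl cluster-info >/dev/null 2>&1; then
  # Reading /etc/shadow from a container matches Falco's default sensitive-file rule
  kubectl run falco-trigger --image=busybox --restart=Never --command -- cat /etc/shadow
  sleep 10
  # The alert should appear in the Falco pod logs
  kubectl logs -n falco -l app.kubernetes.io/name=falco | grep -i shadow \
    || echo "no alert yet - give Falco a few more seconds"
  kubectl delete pod falco-trigger
else
  echo "no cluster reachable - run this against your lab cluster"
fi
```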
Terminal window
# Run kube-bench as a job
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
# Wait for completion
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
# View results
kubectl logs job/kube-bench
# For detailed output, run interactively on control plane node
# Download and run kube-bench directly
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.7.0/kube-bench_0.7.0_linux_amd64.tar.gz -o kube-bench.tar.gz
tar -xvf kube-bench.tar.gz
./kube-bench run --targets=master
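kube-bench output is long, and in practice you mostly care about the [FAIL] lines. The filtering below runs against made-up sample lines so it can be tried anywhere; on a real control plane, pipe the actual run through the same grep:

```shell
# Sample output lines (illustrative only); real output comes from: ./kube-bench run --targets=master
cat <<'EOF' > bench-sample.txt
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive
[FAIL] 1.2.16 Ensure that the --audit-log-path argument is set
[WARN] 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set
EOF
# Keep only failing checks. Real usage:
#   ./kube-bench run --targets=master | grep '\[FAIL\]'
grep '\[FAIL\]' bench-sample.txt
rm bench-sample.txt
```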
Terminal window
# Install kubesec
# Binary installation
wget https://github.com/controlplaneio/kubesec/releases/download/v2.14.0/kubesec_linux_amd64.tar.gz
tar -xvf kubesec_linux_amd64.tar.gz
sudo mv kubesec /usr/local/bin/
# Or use Docker
# docker run -i kubesec/kubesec scan /dev/stdin < deployment.yaml
# Test kubesec
cat <<EOF | kubesec scan /dev/stdin
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: nginx
    securityContext:
      runAsUser: 0
EOF
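kubesec emits JSON with a numeric score per scanned object; a negative score indicates critical issues. A minimal sketch of pulling the score out with grep, run here against a hand-written sample of the output shape (the score value is invented):

```shell
# Sample of kubesec's JSON output shape (abbreviated, values invented)
cat <<'EOF' > kubesec-sample.json
[{"object":"Pod/test.default","valid":true,"score":-30}]
EOF
# Real usage: kubesec scan pod.yaml | grep -o '"score":[-0-9]*'
grep -o '"score":[-0-9]*' kubesec-sample.json
rm kubesec-sample.json
```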

Terminal window
# Check if AppArmor is enabled (on nodes)
cat /sys/module/apparmor/parameters/enabled
# Should output: Y
# List loaded profiles
sudo aa-status
# Check if container runtime supports AppArmor
# For containerd, it's enabled by default
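Once a profile is loaded on the node, a pod opts into it per container. Most CKS-era material uses the annotation form; since Kubernetes v1.30 there is also a first-class securityContext field. A sketch, assuming a hypothetical profile named k8s-deny-write has already been loaded on the node with apparmor_parser:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    # Annotation form: container.apparmor.security.beta.kubernetes.io/<container-name>
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/k8s-deny-write
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    # Field form (Kubernetes v1.30+):
    # securityContext:
    #   appArmorProfile:
    #     type: Localhost
    #     localhostProfile: k8s-deny-write
```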

Pause and predict: You install Falco on a kind cluster using the modern_ebpf driver, but the cluster node’s kernel doesn’t support eBPF CO-RE. What symptoms would you see and how would you fix it?

Terminal window
# Check kernel seccomp support
grep CONFIG_SECCOMP /boot/config-$(uname -r)
# Should see: CONFIG_SECCOMP=y
# Kubernetes default seccomp profile location
ls /var/lib/kubelet/seccomp/
# Create a test seccomp profile directory
sudo mkdir -p /var/lib/kubelet/seccomp/profiles
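To make that directory useful, drop a profile into it and reference it from a pod spec. A minimal sketch: an audit-only profile that logs every syscall without blocking anything (the filename audit.json is an arbitrary choice), written to a local demo directory here rather than the real kubelet path:

```shell
# An audit-only seccomp profile: log every syscall, block nothing.
# On a real node this file belongs in /var/lib/kubelet/seccomp/profiles/.
mkdir -p ./seccomp-demo
cat <<'EOF' > ./seccomp-demo/audit.json
{
  "defaultAction": "SCMP_ACT_LOG"
}
EOF
cat ./seccomp-demo/audit.json
# A pod references it relative to /var/lib/kubelet/seccomp/:
#   securityContext:
#     seccompProfile:
#       type: Localhost
#       localhostProfile: profiles/audit.json
rm -r ./seccomp-demo
```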

Pause and predict: You’re about to deploy intentionally insecure containers. In a real cluster, what would prevent these from being created? Think about which CKS topics (Pod Security Admission, image scanning, RBAC) would block each one.

Create intentionally insecure deployments for practice:

Terminal window
# Create namespace for practice
kubectl create namespace insecure-apps
# Deploy vulnerable app 1: Privileged container
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  namespace: insecure-apps
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    securityContext:
      privileged: true
EOF
# Deploy vulnerable app 2: Root user
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: root-pod
  namespace: insecure-apps
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    securityContext:
      runAsUser: 0
EOF
# Deploy vulnerable app 3: No resource limits
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: unlimited-pod
  namespace: insecure-apps
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    # No resources specified = unlimited
EOF
# Deploy vulnerable app 4: Vulnerable image
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-image
  namespace: insecure-apps
spec:
  containers:
  - name: app
    image: vulnerables/web-dvwa # Known vulnerable image
EOF
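With the targets deployed, a natural first exercise is scanning everything in the namespace at once. A sketch that extracts each container image in insecure-apps and scans it with Trivy, guarded so it is a no-op when no cluster is reachable:

```shell
if kubectl cluster-info >/dev/null 2>&1; then
  # List every container image used in the namespace, deduplicate, then scan each
  for img in $(kubectl get pods -n insecure-apps \
      -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u); do
    echo "=== scanning $img ==="
    trivy image --severity HIGH,CRITICAL "$img"
  done
else
  echo "no cluster reachable - run this against your lab cluster"
fi
```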

Run this to verify your lab is ready:

#!/bin/bash
echo "=== CKS Lab Validation ==="
echo ""

# Check cluster
echo "1. Cluster Status:"
kubectl cluster-info | head -2
echo ""

# Check Trivy
echo "2. Trivy:"
if command -v trivy &> /dev/null; then
  trivy --version
else
  echo "  NOT INSTALLED"
fi
echo ""

# Check Falco
echo "3. Falco:"
falco_pod=$(kubectl get pods -n falco -l app.kubernetes.io/name=falco --no-headers 2>/dev/null | head -1)
echo "${falco_pod:-  NOT RUNNING}"
echo ""

# Check kube-bench
echo "4. kube-bench:"
if command -v kube-bench &> /dev/null; then
  echo "  Installed"
else
  echo "  Available as Job"
fi
echo ""

# Check AppArmor
echo "5. AppArmor:"
if [ -f /sys/module/apparmor/parameters/enabled ]; then
  cat /sys/module/apparmor/parameters/enabled
else
  echo "  Check on cluster nodes"
fi
echo ""

# Check Audit Logging (kubectl does not expand globs in pod names, so select by label)
echo "6. Audit Logging:"
kubectl get pods -n kube-system -l component=kube-apiserver -o yaml 2>/dev/null | grep -q "audit-log-path" && echo "  Enabled" || echo "  Check API server config"
echo ""
echo "=== Validation Complete ==="

  • Falco can detect cryptomining in real-time by monitoring for suspicious CPU patterns and network connections to mining pools.

  • Trivy scans more than images—it can scan filesystems, git repositories, and Kubernetes manifests for misconfigurations.

  • The CIS Kubernetes Benchmark has over 200 checks. kube-bench automates all of them.

  • AppArmor and SELinux are alternatives—most Kubernetes environments use AppArmor (Ubuntu default) or SELinux (RHEL/CentOS default). CKS focuses on AppArmor.


| Mistake | Why It Hurts | Solution |
| --- | --- | --- |
| No audit logging enabled | Can’t practice audit-related tasks | Configure API server with audit policy |
| Falco not running | Can’t practice runtime detection | Install via Helm, check driver |
| Only scanning images once | Need workflow practice | Integrate scanning into your routine |
| Skipping vulnerable app setup | No targets to practice hardening | Deploy intentionally insecure apps |
| Not checking node-level tools | AppArmor/seccomp are node features | SSH to nodes, verify support |

  1. You run trivy image nginx:latest and get over 140 vulnerabilities. Your manager panics and says to switch to Alpine-based images immediately. Is this the right response, and what would you actually do?

    Answer: Switching to Alpine may reduce the vulnerability count since Alpine ships fewer packages, but it is not a complete or guaranteed fix. The right first step is to filter by severity with `trivy image --severity HIGH,CRITICAL nginx:latest` and triage CRITICAL and HIGH findings first. Check whether patched versions of the specific vulnerable packages exist before swapping the entire base image, and consider distroless or truly minimal base images if the application supports them. The goal is a comprehensive remediation workflow rather than a reaction to raw vulnerability counts; Trivy also scans filesystems, git repos, and Kubernetes manifests, so use those capabilities as part of the same workflow.
  2. During CKS lab setup, you create a custom seccomp profile and place it in /etc/seccomp/profiles/ on the node. When you reference it in a pod spec, the pod fails to start with a create error. What went wrong?

    Answer: Kubernetes expects custom seccomp profiles to be located in `/var/lib/kubelet/seccomp/` on the node where the pod is scheduled to run, rather than the standard OS path `/etc/seccomp/profiles/`. The kubelet constructs the profile path relative to its own configured seccomp directory. Because the profile was placed in the wrong directory, the container runtime could not find the file and refused to start the container. Moving the JSON profile to the correct path, specifically `/var/lib/kubelet/seccomp/profiles/`, and updating the pod spec to reference `localhost/profiles/your-profile.json` will resolve the issue.
  3. Your security team notices an alert and asks why you are deploying a container image called vulnerables/web-dvwa into the cluster. Explain the security rationale for intentionally deploying vulnerable applications.

    Answer: Intentionally vulnerable applications serve as highly realistic practice targets in an isolated lab environment. They allow you to practice scanning with Trivy to find real CVEs and configuring Falco to detect actual runtime exploitation attempts. These applications also give you a baseline to practice hardening techniques using security contexts, Pod Security Admission, and NetworkPolicies. Without these realistic, flawed targets, you cannot adequately simulate the remediation workflows that the CKS exam tests. The critical mitigating factor is that these deployments are strictly confined to a dedicated, isolated lab namespace and never allowed in a production cluster.
  4. You install Falco via Helm with driver.kind=modern_ebpf on your lab cluster, but the Falco pods are stuck in CrashLoopBackOff. The logs mention “kernel version not supported.” How do you diagnose and fix this?

    Answer: The modern eBPF driver requires a Linux kernel that supports eBPF CO-RE (Compile Once - Run Everywhere), typically kernel 5.8 or newer. If your cluster node’s kernel is older or lacks these features, Falco cannot load its modern eBPF probe, resulting in the crash. To fix this, switch Falco to a driver that is compatible with your environment: use the kernel module driver by running `helm upgrade falco falcosecurity/falco --set driver.kind=kmod -n falco` (which requires kernel headers), or fall back to the classic eBPF driver. For environments like kind clusters, the kernel module driver is frequently the most reliable choice.

Task: Validate your security lab setup.

Terminal window
# 1. Verify cluster is running
kubectl get nodes
# 2. Install Trivy and scan an image
trivy image nginx:latest | head -50
# 3. Check Falco is running (if installed)
kubectl get pods -n falco
# 4. Run kube-bench
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
kubectl logs job/kube-bench | head -100
# 5. Create a test pod and scan it
kubectl run test-pod --image=nginx:1.25
trivy image nginx:1.25
# 6. Cleanup
kubectl delete pod test-pod
kubectl delete job kube-bench

Success criteria: Trivy scans images, kube-bench reports results, cluster is accessible.


Your CKS lab needs:

Tools installed:

  • Trivy (image scanning)
  • Falco (runtime detection)
  • kube-bench (CIS benchmarks)
  • kubesec (static analysis)

Cluster features enabled:

  • Audit logging
  • AppArmor support (node-level)
  • Seccomp support (node-level)

Practice targets:

  • Intentionally vulnerable deployments
  • Known-vulnerable images

This lab environment lets you practice every CKS exam domain hands-on.


Module 0.3: Security Tool Mastery - Deep dive into Trivy, Falco, and kube-bench usage.