
Module 5.3: Static Analysis with kubesec and OPA

Hands-On Lab Available: K8s Cluster (advanced, 35 min), hosted on Killercoda.

Complexity: [MEDIUM] - Security tooling

Time to Complete: 45-50 minutes

Prerequisites: Module 5.2 (Image Scanning), Kubernetes manifest basics


After completing this module, you will be able to:

  1. Audit Kubernetes manifests using kubesec to identify security misconfigurations
  2. Write OPA Rego policies to enforce custom security rules at admission time
  3. Deploy OPA Gatekeeper ConstraintTemplates and Constraints for policy enforcement
  4. Evaluate static analysis tool output to prioritize security fixes before deployment

Static analysis examines Kubernetes manifests before deployment, catching misconfigurations early. Tools like kubesec score security posture, while OPA Gatekeeper enforces policies at admission time.

CKS tests both ad-hoc analysis (kubesec) and policy enforcement (OPA).


┌─────────────────────────────────────────────────────────────┐
│ STATIC ANALYSIS PIPELINE │
├─────────────────────────────────────────────────────────────┤
│ │
│ Developer writes YAML │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Static Analysis (Pre-commit/CI) │ │
│ │ ├── kubesec (security scoring) │ │
│ │ ├── Trivy (misconfiguration) │ │
│ │ ├── kube-linter (best practices) │ │
│ │ └── Checkov (policy as code) │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Admission Controllers (Deploy time) │ │
│ │ ├── OPA Gatekeeper │ │
│ │ ├── Kyverno │ │
│ │ └── Pod Security Admission │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Kubernetes API Server accepts/rejects │
│ │
└─────────────────────────────────────────────────────────────┘

Stop and think: Image scanning finds CVEs in dependencies. Static analysis finds misconfigurations in your YAML. Which one catches a deployment with privileged: true and no security context? Which catches a vulnerable version of openssl? Why do you need both?

kubesec analyzes Kubernetes manifests and assigns a security score.

Terminal window
# Download binary
wget https://github.com/controlplaneio/kubesec/releases/download/v2.13.0/kubesec_linux_amd64.tar.gz
tar -xzf kubesec_linux_amd64.tar.gz
sudo mv kubesec /usr/local/bin/
# Or use Docker
docker run -i kubesec/kubesec:v2 scan /dev/stdin < pod.yaml
# Or use online API
curl -sSX POST --data-binary @pod.yaml https://v2.kubesec.io/scan
Terminal window
# Scan a file
kubesec scan pod.yaml
# Scan from stdin
cat pod.yaml | kubesec scan -
# Scan multiple files
kubesec scan deployment.yaml service.yaml
[
  {
    "object": "Pod/insecure-pod.default",
    "valid": true,
    "score": -30,
    "scoring": {
      "critical": [
        {
          "selector": "containers[] .securityContext .privileged == true",
          "reason": "Privileged containers can allow almost completely unrestricted host access",
          "points": -30
        }
      ],
      "advise": [
        {
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user",
          "points": 1
        },
        {
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access",
          "points": 3
        }
      ]
    }
  }
]
┌─────────────────────────────────────────────────────────────┐
│ KUBESEC SCORING │
├─────────────────────────────────────────────────────────────┤
│ │
│ CRITICAL (negative points): │
│ ───────────────────────────────────────────────────────── │
│ • privileged: true → -30 points │
│ • hostNetwork: true → -9 points │
│ • hostPID: true → -9 points │
│ • hostIPC: true → -9 points │
│ • capabilities.add: SYS_ADMIN → -30 points │
│ │
│ POSITIVE (security improvements): │
│ ───────────────────────────────────────────────────────── │
│ • runAsNonRoot: true → +1 point │
│ • runAsUser > 10000 → +1 point │
│ • readOnlyRootFilesystem: true → +1 point │
│ • capabilities.drop: ALL → +1 point │
│ • resources.limits.cpu → +1 point │
│ • resources.limits.memory → +1 point │
│ │
│ Score > 0: Generally acceptable │
│ Score < 0: Critical issues present │
│ │
└─────────────────────────────────────────────────────────────┘
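The scoring rules above translate directly into a CI gate: parse kubesec's JSON, fail the build when any object scores below a threshold, and surface the critical findings that caused it. The sketch below is plain Python with no kubesec dependency; the embedded sample mirrors the output format shown earlier and the threshold of 0 is an assumption, not a kubesec default.

```python
import json

def gate(kubesec_json: str, min_score: int = 0) -> bool:
    """Return True only if every scanned object meets the minimum score."""
    ok = True
    for obj in json.loads(kubesec_json):
        if obj["score"] < min_score:
            ok = False
            # Surface the critical findings that dragged the score down
            for finding in obj.get("scoring", {}).get("critical", []):
                print(f'{obj["object"]}: {finding["reason"]} ({finding["points"]})')
    return ok

# Sample in the shape kubesec returns (abbreviated)
sample = json.dumps([{
    "object": "Pod/insecure-pod.default",
    "score": -30,
    "scoring": {"critical": [{
        "selector": "containers[] .securityContext .privileged == true",
        "reason": "Privileged containers can allow almost completely unrestricted host access",
        "points": -30,
    }]},
}])

print(gate(sample))  # → False: a -30 score fails a min-score-0 gate
```

In a pipeline you would feed this the output of `kubesec scan manifest.yaml` and exit nonzero when `gate` returns `False`.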

insecure-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: insecure
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true
Terminal window
kubesec scan insecure-pod.yaml
# Score: -30 (CRITICAL: privileged container)
secure-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
  - name: app
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
Terminal window
kubesec scan secure-pod.yaml
# Score: 7+ (multiple security best practices)

KubeLinter is a static analysis tool that checks Kubernetes manifests against best practices and common misconfigurations. It’s faster and more opinionated than kubesec, focusing on deployment safety.

Terminal window
# Install
curl -sL https://github.com/stackrox/kube-linter/releases/latest/download/kube-linter-linux -o kube-linter
chmod +x kube-linter
# Lint a manifest
./kube-linter lint deployment.yaml
# Lint an entire directory
./kube-linter lint manifests/
# List all available checks
./kube-linter checks list

KubeLinter catches issues like:

  • Containers running as root
  • No resource limits set
  • No readiness/liveness probes
  • Writable root filesystems
  • Privileged containers
  • Missing network policies
Terminal window
# Example output
deployment.yaml: (object: default/nginx apps/v1, Kind=Deployment)
- container "nginx" does not have a read-only root file system
(check: no-read-only-root-fs, remediation: Set readOnlyRootFilesystem to true)
- container "nginx" has cpu limit 0 (check: unset-cpu-requirements)
- container "nginx" is not set to runAsNonRoot (check: run-as-non-root)

kubesec vs KubeLinter: kubesec scores overall security posture (good for audits). KubeLinter catches specific issues with actionable remediations (good for CI pipelines). Use both.


What would happen if: You deploy OPA Gatekeeper with a constraint that requires all pods to have resource limits. Existing pods without limits continue running, but no new pods without limits can be created. A deployment scales up — will the new replicas be created?

Open Policy Agent (OPA) Gatekeeper provides policy enforcement at admission time.

┌─────────────────────────────────────────────────────────────┐
│ OPA GATEKEEPER ARCHITECTURE │
├─────────────────────────────────────────────────────────────┤
│ │
│ kubectl apply -f pod.yaml │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Kubernetes API Server │ │
│ │ │ │ │
│ │ ValidatingWebhook │ │
│ │ │ │ │
│ └────────────────────┼────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ OPA Gatekeeper │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ ConstraintTemplate (defines policy) │ │ │
│ │ │ e.g., "K8sRequiredLabels" │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ Constraint (applies policy) │ │ │
│ │ │ e.g., "require label: team" │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Allow or Deny request │
│ │
└─────────────────────────────────────────────────────────────┘
Terminal window
# Install using kubectl
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.14/deploy/gatekeeper.yaml
# Verify installation
kubectl get pods -n gatekeeper-system
kubectl get crd | grep gatekeeper
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }

To write custom policies, you need to understand how Gatekeeper evaluates Rego. Let’s break down the rego block from the template above line-by-line:

  • package k8srequiredlabels: Defines the namespace for your Rego code. It must match the name of the ConstraintTemplate.
  • violation[{"msg": msg}] {: This is the entrypoint Gatekeeper looks for. If all statements inside the curly braces evaluate to true, a violation is generated.
  • provided := {label | input.review.object.metadata.labels[label]}: Extracts the labels from the resource being evaluated. input.review.object represents the incoming Kubernetes API request payload (the YAML you are applying).
  • required := {label | label := input.parameters.labels[_]}: Extracts the required labels passed from the Constraint’s parameters block.
  • missing := required - provided: Uses Rego’s built-in set operations to find required labels that are not present on the object.
  • count(missing) > 0: The actual condition. If the number of missing labels is greater than zero, the condition is true, and the evaluation continues. If false, evaluation stops, and no violation occurs.
  • msg := sprintf("Missing required labels: %v", [missing]): Formats the error message that will be returned to the user who attempted to apply the manifest.
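The set arithmetic in the steps above is easy to check outside the cluster. This Python sketch mirrors the Rego logic line-for-line (it is an illustration, not Gatekeeper's actual evaluation engine): build the provided and required sets, take the difference, and emit a violation message only when something is missing.

```python
def required_labels_violation(obj_labels, required):
    """Mirror of the Rego: a violation fires only when required - provided is non-empty."""
    provided = set(obj_labels)      # provided := {label | ...metadata.labels[label]}
    needed = set(required)          # required := {label | ...parameters.labels[_]}
    missing = needed - provided     # missing := required - provided
    if len(missing) > 0:            # count(missing) > 0
        return f"Missing required labels: {sorted(missing)}"
    return None                     # no rule matched: the request is allowed

print(required_labels_violation({"team": "platform"}, ["team", "app"]))
# → Missing required labels: ['app']
print(required_labels_violation({"team": "platform", "app": "web"}, ["team", "app"]))
# → None
```

Note the Rego semantics this preserves: when `count(missing) > 0` is false, evaluation simply stops and no violation is produced, which is why a fully labeled pod is admitted.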
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production"]
  parameters:
    labels: ["team", "app"]
Terminal window
# This pod will be rejected
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: unlabeled-pod
  namespace: production
spec:
  containers:
  - name: nginx
    image: nginx
EOF
# Error: Missing required labels: {"app", "team"}

# This pod will be allowed
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod
  namespace: production
  labels:
    team: platform
    app: web
spec:
  containers:
  - name: nginx
    image: nginx
EOF

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sBlockPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblockprivileged
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged == true
          msg := sprintf("Privileged container not allowed: %v", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockPrivileged
metadata:
  name: block-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequirenonroot
spec:
  crd:
    spec:
      names:
        kind: K8sRequireNonRoot
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirenonroot
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := sprintf("Container must set runAsNonRoot: %v", [container.name])
        }
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblocklatesttag
spec:
  crd:
    spec:
      names:
        kind: K8sBlockLatestTag
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblocklatesttag
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          endswith(container.image, ":latest")
          msg := sprintf("Image with :latest tag not allowed: %v", [container.image])
        }
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not contains(container.image, ":")
          msg := sprintf("Image without tag (defaults to :latest) not allowed: %v", [container.image])
        }
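The two `violation` blocks above act as an OR: either rule can fire independently for the same container. A Python mirror of the tag check (illustrative only, not Gatekeeper code):

```python
def latest_tag_violation(image):
    """Mirror of the two Rego rules: explicit :latest, or no tag at all."""
    if image.endswith(":latest"):            # endswith(container.image, ":latest")
        return f"Image with :latest tag not allowed: {image}"
    if ":" not in image:                     # not contains(container.image, ":")
        return f"Image without tag (defaults to :latest) not allowed: {image}"
    return None  # a pinned tag (or a digest, which contains ":") passes both rules

for img in ["nginx:latest", "nginx", "nginx:1.25"]:
    print(img, "->", latest_tag_violation(img))
```

Note that both the Rego and this mirror admit digest references like `nginx@sha256:...`, since they contain a colon.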

Pause and predict: You scan a pod manifest with kubesec and get a score of +7. Your colleague scans a nearly identical manifest that adds privileged: true and gets -23. The score dropped by 30 points from a single field. Why does kubesec weight this so heavily?

Terminal window
# Create test pod
cat <<EOF > test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true
EOF

# Scan with kubesec
kubesec scan test-pod.yaml

# Fix the pod based on kubesec recommendations
cat <<EOF > test-pod-fixed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
  - name: nginx
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
EOF

kubesec scan test-pod-fixed.yaml
# Score should be positive now
Terminal window
# Create ConstraintTemplate
cat <<EOF | kubectl apply -f -
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequirelimits
spec:
  crd:
    spec:
      names:
        kind: K8sRequireLimits
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirelimits
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits.memory
          msg := sprintf("Container must have memory limits: %v", [container.name])
        }
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits.cpu
          msg := sprintf("Container must have CPU limits: %v", [container.name])
        }
EOF

# Create Constraint
cat <<EOF | kubectl apply -f -
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequireLimits
metadata:
  name: require-resource-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production"]
EOF

# Test - this should fail
kubectl run test --image=nginx -n production
# Error: Container must have memory limits

# Test - this should succeed (kubectl run's --limits flag was removed in
# kubectl v1.24, so apply a manifest that sets limits instead)
kubectl apply -n production -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-limited
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
EOF
Terminal window
# Check constraint status for violations
kubectl get k8srequiredlabels require-team-label -o yaml
# Look at the status.violations section
kubectl get constraints -A -o json | \
jq '.items[] | {name: .metadata.name, violations: .status.totalViolations}'
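When the one-line jq query isn't enough, e.g. to aggregate violations across all constraints in a report, the same JSON can be processed in Python. The sketch below assumes the document shape returned by `kubectl get constraints -A -o json`; the sample data is invented for illustration.

```python
import json

def total_violations(constraints_json):
    """Sum status.totalViolations across all constraint objects in a kubectl list."""
    items = json.loads(constraints_json)["items"]
    # Constraints that have never been audited may lack a status block entirely
    return sum(c.get("status", {}).get("totalViolations", 0) for c in items)

sample = json.dumps({"items": [
    {"metadata": {"name": "require-team-label"}, "status": {"totalViolations": 3}},
    {"metadata": {"name": "block-privileged-containers"}, "status": {"totalViolations": 0}},
]})
print(total_violations(sample))  # → 3
```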

  • kubesec was created by ControlPlane and is specifically designed for Kubernetes security scoring.

  • OPA uses Rego, a purpose-built policy language. It’s declarative and designed for expressing complex access control policies.

  • Gatekeeper operates as a ValidatingAdmissionWebhook, which means it can only allow or deny requests—it can’t modify them. For mutation, use MutatingAdmissionWebhooks.

  • Gatekeeper supports audit mode, which reports violations without blocking them. Great for rolling out new policies.


| Mistake | Why It Hurts | Solution |
|---|---|---|
| Ignoring kubesec warnings | Deployments ship with known issues | Address critical findings |
| Overly complex Rego policies | Hard to debug and maintain | Start simple, test thoroughly |
| No exemptions | System pods blocked | Use match.excludedNamespaces |
| Audit mode forgotten | Violations reported but not enforced | Switch to enforce after testing |
| Missing error messages | Users confused by rejections | Include clear violation messages |

  1. A developer submits a deployment manifest for review. You scan it with kubesec and get a score of -30. Without seeing the manifest, what configuration is almost certainly present, and what does the score tell you about deployment readiness?

    Answer A score of -30 almost certainly means `privileged: true` is set (it carries a -30 penalty alone). Other possibilities include `hostNetwork: true` or `hostPID: true` combined with other issues. The score tells you this manifest has critical security misconfigurations that enable container escape and full host access -- it should never be deployed to production. kubesec scores: positive means reasonable security posture, zero means minimal controls, negative means critical issues. A -30 is the worst category and indicates the deployment needs fundamental security redesign before proceeding.
  2. Your organization uses OPA Gatekeeper to enforce that all pods must have resource limits. A deployment with 5 replicas is running without limits (created before the policy). The deployment scales to 10 replicas. What happens to the 5 new pods?

    Answer The 5 new pods are blocked by Gatekeeper. Admission controllers only validate new requests -- they don't retroactively enforce on existing resources. The existing 5 pods continue running without limits, but any new pod creation (including scale-up, rolling updates, or pod restarts) is blocked until the deployment spec includes resource limits. This is why Gatekeeper's audit feature is important: it identifies existing resources that violate policies. Use `enforcementAction: dryrun` during rollout to identify all violations before switching to `deny` enforcement.
  3. During a CKS exam, you need to create a Gatekeeper policy that blocks images from untrusted registries. You write the ConstraintTemplate in Rego and apply it, but the Constraint you create doesn’t seem to block anything. kubectl get constraints shows the constraint exists. What’s the most common reason it’s not working?

    Answer The most common issues: (1) The Constraint's `match` field doesn't select the right resources -- check that `kinds` includes `Pod` (with the `""` API group) and optionally `Deployment`, `StatefulSet`, etc. (2) The namespace where you're testing is excluded in the Constraint's `excludedNamespaces`. (3) The ConstraintTemplate has a Rego syntax error -- check `kubectl get constrainttemplate -o yaml` for status errors. (4) The Constraint's `parameters` don't match what the Rego code expects. (5) Gatekeeper pods aren't running -- check `kubectl get pods -n gatekeeper-system`. Debug by creating a pod that should be blocked and checking `kubectl describe` for admission rejection messages.
  4. Your team runs kubesec in CI/CD and blocks deployments with score below 0. A developer argues this is too strict because some legitimate workloads need NET_ADMIN capability (for a service mesh sidecar), which lowers the score. How do you balance security policy with legitimate needs?

    Answer Use a tiered approach: (1) Block score below -10 (critical issues like `privileged`) unconditionally -- no exceptions. (2) For scores between -10 and 0, require a security review and documented exception. (3) Use kubesec's JSON output to check specific critical findings rather than just the aggregate score. (4) Create namespace-specific policies: service mesh namespaces can have different kubesec thresholds than application namespaces. (5) Combine kubesec with Gatekeeper for more granular control -- Gatekeeper can enforce "NET_ADMIN only if the pod has a specific label" rather than a blanket score threshold. The goal is blocking truly dangerous configs while allowing justified exceptions with audit trails.

Task: Use kubesec and create a Gatekeeper policy.

Terminal window
# Part 1: kubesec Analysis
# Create insecure pod
cat <<EOF > insecure.yaml
apiVersion: v1
kind: Pod
metadata:
  name: insecure
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true
EOF

# Scan with kubesec (using API if not installed locally)
echo "=== kubesec Scan (Insecure) ==="
curl -sSX POST --data-binary @insecure.yaml https://v2.kubesec.io/scan | jq '.[0].score'

# Create secure version
cat <<EOF > secure.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
  - name: app
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
EOF

echo "=== kubesec Scan (Secure) ==="
curl -sSX POST --data-binary @secure.yaml https://v2.kubesec.io/scan | jq '.[0].score'

# Part 2: Gatekeeper Policy (if Gatekeeper is installed)
if kubectl get crd constrainttemplates.templates.gatekeeper.sh &>/dev/null; then
echo "=== Creating Gatekeeper Policy ==="

# Create ConstraintTemplate
cat <<EOF | kubectl apply -f -
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockdefaultnamespace
spec:
  crd:
    spec:
      names:
        kind: K8sBlockDefaultNamespace
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblockdefaultnamespace
        violation[{"msg": msg}] {
          input.review.object.metadata.namespace == "default"
          msg := "Deployments to default namespace are not allowed"
        }
EOF

# Create Constraint in dryrun mode
cat <<EOF | kubectl apply -f -
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockDefaultNamespace
metadata:
  name: block-default-namespace
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
EOF

echo "Policy created in dryrun mode"
else
echo "Gatekeeper not installed, skipping policy creation"
fi

# Cleanup
rm -f insecure.yaml secure.yaml

Success criteria: Understand kubesec scoring and Gatekeeper policy structure.


kubesec:

  • Security scoring tool
  • Negative score = critical issues
  • Positive score = security best practices
  • Use in CI/CD pipelines

OPA Gatekeeper:

  • Admission controller for policies
  • ConstraintTemplate + Constraint
  • Rego policy language
  • Audit mode for testing

Best Practices:

  • Scan manifests before deployment
  • Block privileged containers
  • Require resource limits
  • Test policies in audit mode first

Exam Tips:

  • Know kubesec command syntax
  • Understand Gatekeeper CRDs
  • Be able to write basic Rego

Module 5.4: Admission Controllers - Custom admission control for security.