
Module 2.1: RBAC Deep Dive

Hands-On Lab Available: Kubernetes cluster · advanced · 40 min (launches in Killercoda in a new tab)

Complexity: [MEDIUM] - Core security skill

Time to Complete: 45-50 minutes

Prerequisites: CKA RBAC knowledge, ServiceAccounts basics


After completing this module, you will be able to:

  1. Audit RBAC configurations to identify over-permissioned roles and privilege escalation paths
  2. Implement least-privilege Roles and ClusterRoles for specific workload requirements
  3. Trace effective permissions for any user or ServiceAccount through RoleBindings
  4. Diagnose RBAC-related access denials and fix them without granting excessive permissions

RBAC is the access control mechanism for Kubernetes. CKA taught you to create Roles and RoleBindings. CKS goes deeper: you must audit RBAC for over-permissioned accounts, understand escalation paths, and implement least privilege.

Misconfigured RBAC is one of the most common Kubernetes security weaknesses: a single over-permissioned ServiceAccount can turn one compromised pod into a cluster-wide breach.


┌─────────────────────────────────────────────────────────────
│ RBAC COMPONENTS
├─────────────────────────────────────────────────────────────
│
│ Role/ClusterRole
│ └── Defines WHAT actions are allowed
│     ├── apiGroups: ["", "apps", "batch"]
│     ├── resources: ["pods", "deployments"]
│     └── verbs: ["get", "list", "create", "delete"]
│
│ RoleBinding/ClusterRoleBinding
│ └── Defines WHO gets the permissions
│     ├── subjects: [users, groups, serviceaccounts]
│     └── roleRef: [Role or ClusterRole]
│
│ Scope:
│ ├── Role + RoleBinding = namespace-scoped
│ ├── ClusterRole + ClusterRoleBinding = cluster-wide
│ └── ClusterRole + RoleBinding = ClusterRole rules granted in one namespace
└─────────────────────────────────────────────────────────────
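The third scope combination deserves a concrete example: define the rules once as a ClusterRole, then grant them per namespace with a RoleBinding. A sketch with illustrative names (`pod-reader`, `team-a`, `app-sa` are not from the lab):

```yaml
# Defined once, cluster-wide (ClusterRoles have no namespace)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader            # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Granted only inside "team-a": the ClusterRole's rules
# apply solely within this namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: team-a
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

This pattern avoids maintaining an identical Role in every namespace while keeping the grant itself namespace-scoped.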

# DANGEROUS: Allows everything
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: too-permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
# WHY IT'S BAD:
# - Equivalent to cluster-admin
# - Can access secrets, modify RBAC, delete anything
# - Violates least privilege

# DANGEROUS: Can read all secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
# WHY IT'S BAD:
# - Secrets contain passwords, tokens, certificates
# - One secret can compromise entire applications
# - Should be tightly scoped to specific secrets

# DANGEROUS: Can modify RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbac-modifier
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings"]
  verbs: ["create", "update", "patch"]
# WHY IT'S BAD:
# - Can grant themselves cluster-admin
# - Privilege escalation attack
# - Only admins should modify RBAC

# DANGEROUS: Can create pods (potential escalation)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-creator
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create"]
# WHY IT'S BAD:
# - Can create privileged pods
# - Can mount service account tokens
# - Can escape container to node
# - Needs Pod Security to be safe

Stop and think: A developer has a Role that allows create on pods but nothing else. They claim they can’t do anything dangerous. But what if they create a pod with serviceAccountName: cluster-admin-sa and automountServiceAccountToken: true? How does pod creation become a privilege escalation vector?
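A sketch of what that escalation pod could look like (all names hypothetical, following the prompt above):

```yaml
# Hypothetical attack pod: only "create pods" permission is needed,
# but the pod borrows a far more powerful identity
apiVersion: v1
kind: Pod
metadata:
  name: innocent-looking-pod
spec:
  serviceAccountName: cluster-admin-sa   # the powerful SA from the prompt
  automountServiceAccountToken: true     # its token is mounted at
                                         # /var/run/secrets/kubernetes.io/serviceaccount/token
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
```

Anyone who can exec into (or exfiltrate from) this pod now acts with the borrowed ServiceAccount's full permissions, even though their own RBAC grant never mentioned secrets or RBAC resources.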

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-viewer
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: specific-configmap-reader
  namespace: app
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["app-config", "feature-flags"] # Only these!
  verbs: ["get"]

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-executor
  namespace: debug
rules:
- apiGroups: [""]
  resources: ["pods/exec"] # Only exec, not full pod access
  verbs: ["create"]
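A Role on its own grants nothing; it still needs a binding. A sketch that attaches `pod-viewer` to a hypothetical ServiceAccount (`log-collector` is illustrative, not part of the lab):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-viewer-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: log-collector          # hypothetical subject
  namespace: production
roleRef:
  kind: Role
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
```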

# List all ClusterRoles with wildcard permissions
kubectl get clusterroles -o json | jq -r '
  .items[] |
  select(.rules[]? |
    (.verbs[]? == "*") or
    (.resources[]? == "*") or
    (.apiGroups[]? == "*")
  ) | .metadata.name'

# Find roles that can read secrets
kubectl get clusterroles -o json | jq -r '
  .items[] |
  select(.rules[]? |
    (.resources[]? | contains("secrets")) and
    ((.verbs[]? == "get") or (.verbs[]? == "*"))
  ) | .metadata.name'

# Find roles that can modify RBAC
kubectl get clusterroles -o json | jq -r '
  .items[] |
  select(.rules[]? |
    (.apiGroups[]? == "rbac.authorization.k8s.io") and
    ((.verbs[]? == "create") or (.verbs[]? == "update") or (.verbs[]? == "*"))
  ) | .metadata.name'
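These jq filters can be dry-run without a cluster. A standalone sketch (assuming `jq` is installed) that feeds the wildcard filter a sample document shaped like `kubectl get clusterroles -o json`; the role names are made up for the demo:

```shell
# Sample input: one wildcard ClusterRole, one tightly scoped one
cat > /tmp/sample-roles.json <<'EOF'
{"items": [
  {"metadata": {"name": "too-permissive"},
   "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
  {"metadata": {"name": "pod-viewer"},
   "rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]}]}
]}
EOF

# Same filter as above: select roles where any rule uses a wildcard
jq -r '
  .items[] |
  select(.rules[]? |
    (.verbs[]? == "*") or
    (.resources[]? == "*") or
    (.apiGroups[]? == "*")
  ) | .metadata.name' /tmp/sample-roles.json
# Prints: too-permissive
```

On a real cluster, pipe `kubectl get clusterroles -o json` into the same filter; practicing the filter on known input first makes it easier to trust its results during an audit.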
# What can a specific ServiceAccount do?
kubectl auth can-i --list --as=system:serviceaccount:default:myapp
# Can a user create pods?
kubectl auth can-i create pods --as=developer
# Can a ServiceAccount read secrets?
kubectl auth can-i get secrets --as=system:serviceaccount:app:backend
# In a specific namespace
kubectl auth can-i delete deployments -n production --as=developer
# Who has cluster-admin?
kubectl get clusterrolebindings -o json | jq -r '
  .items[] |
  select(.roleRef.name == "cluster-admin") |
  "\(.metadata.name): \(.subjects[]?.name // "unknown")"'
# List all ClusterRoleBindings
kubectl get clusterrolebindings -o wide
# Describe suspicious binding
kubectl describe clusterrolebinding suspicious-binding

┌─────────────────────────────────────────────────────────────
│ RBAC ESCALATION PATHS
├─────────────────────────────────────────────────────────────
│
│ Direct Escalation:
│ ───────────────────────────────────────────────────────
│ 1. Create/update ClusterRoleBindings
│    → Bind self to cluster-admin
│
│ 2. Create/update ClusterRoles
│    → Add * permissions
│
│ Indirect Escalation:
│ ───────────────────────────────────────────────────────
│ 3. Create pods in any namespace
│    → Mount privileged ServiceAccount
│
│ 4. Create pods with node access
│    → Read kubelet credentials
│
│ 5. Impersonate users
│    → Act as cluster-admin
│
│ Prevention:
│ ───────────────────────────────────────────────────────
│ • Never grant RBAC modification rights loosely
│ • Use Pod Security Admission
│ • Audit escalation verbs regularly
└─────────────────────────────────────────────────────────────
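Path 5 corresponds to the `impersonate` verb. A minimal sketch of what such a rule looks like, so you can recognize it during an audit:

```yaml
# DANGEROUS: holder can act as any user, group, or ServiceAccount,
# including highly privileged ones, via kubectl's --as / --as-group flags
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]
```

Granting `impersonate` on all users is equivalent to granting the union of every user's permissions, so treat it like `bind` and `escalate`: admins only.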

What would happen if: You find a ClusterRoleBinding that grants cluster-admin to a ServiceAccount called monitoring-agent in the monitoring namespace. The monitoring team says they need it to “see everything.” What’s the risk if an attacker compromises a pod running as that ServiceAccount?

# The 'bind' verb allows creating bindings to roles
# whose permissions the binder does not themselves hold
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterrolebindings"]
  verbs: ["create"] # Plus...
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"] # ...this allows binding to any role!

# The 'escalate' verb allows granting permissions
# that the user doesn't have
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["escalate"] # Can add any permissions to roles!
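When delegated binding is genuinely required, `resourceNames` can constrain `bind` to specific, known-safe roles instead of any role. A sketch of the safer pattern (the choice of the built-in `view` role is illustrative):

```yaml
# Allow creating RoleBindings, but only to the named, limited role
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  resourceNames: ["view"]   # may bind subjects to "view" only,
  verbs: ["bind"]           # never to cluster-admin or edit
```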

┌─────────────────────────────────────────────────────────────
│ RBAC BEST PRACTICES
├─────────────────────────────────────────────────────────────
│
│ 1. Least Privilege
│    ├── Only grant what's needed
│    ├── Prefer Roles over ClusterRoles
│    └── Use resourceNames when possible
│
│ 2. No Wildcards
│    ├── Never use "*" in production
│    └── List specific resources and verbs
│
│ 3. Audit Regularly
│    ├── Review cluster-admin bindings
│    ├── Check for secret access
│    └── Monitor RBAC changes
│
│ 4. Namespace Isolation
│    ├── One ServiceAccount per application
│    └── Roles scoped to namespace
│
│ 5. Protect RBAC Resources
│    ├── Only cluster admins modify RBAC
│    └── Audit bind/escalate verbs
└─────────────────────────────────────────────────────────────

Scenario 1: Reduce an Overpermissive ServiceAccount

# Given: ServiceAccount with too many permissions
# Task: Reduce to only get/list pods
# Check current permissions
kubectl auth can-i --list --as=system:serviceaccount:app:backend -n app
# Find the rolebinding
kubectl get rolebindings -n app -o wide
# Check the role
kubectl get role backend-role -n app -o yaml
# Create restricted role (applying replaces the existing rules)
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-role
  namespace: app
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
EOF
# Verify
kubectl auth can-i delete pods --as=system:serviceaccount:app:backend -n app
# Should return "no"

Scenario 2: Find and Remove Dangerous Binding

# Find who has cluster-admin
kubectl get clusterrolebindings -o json | jq -r '
.items[] |
select(.roleRef.name == "cluster-admin") |
.metadata.name'
# Remove inappropriate binding
kubectl delete clusterrolebinding developer-admin
Scenario 3: Build a Least-Privilege Role

# Requirement: App needs to read configmaps and create events
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: myapp
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-binding
  namespace: myapp
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: myapp
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
EOF
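For completeness, the workload then runs under that ServiceAccount and inherits only `app-role`'s permissions. A sketch pod spec (the image name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: myapp
spec:
  serviceAccountName: myapp-sa   # receives only app-role's permissions
  containers:
  - name: app
    image: myapp:1.0             # illustrative image
```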

Pause and predict: You run kubectl auth can-i --list --as=system:serviceaccount:default:default and see permissions to get, list, and watch secrets cluster-wide. You didn’t create any RoleBindings for the default ServiceAccount. Where are these permissions coming from?
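One way to answer that question is to look for bindings whose subjects are groups rather than individual ServiceAccounts: a binding to the `system:serviceaccounts` group applies to every ServiceAccount in the cluster, including `default:default`. A standalone sketch of the check, run against sample data instead of a live cluster (binding and role names are invented for the demo):

```shell
# Sample shaped like `kubectl get clusterrolebindings -o json`:
# one binding grants a role to ALL ServiceAccounts via a group subject
cat > /tmp/sample-crbs.json <<'EOF'
{"items": [
  {"metadata": {"name": "sneaky-secret-access"},
   "roleRef": {"name": "secret-reader"},
   "subjects": [{"kind": "Group", "name": "system:serviceaccounts"}]},
  {"metadata": {"name": "app-binding"},
   "roleRef": {"name": "view"},
   "subjects": [{"kind": "ServiceAccount", "name": "myapp", "namespace": "app"}]}
]}
EOF

# Flag bindings whose subject is a ServiceAccount *group*
jq -r '
  .items[] |
  select(.subjects[]? |
    .kind == "Group" and (.name | startswith("system:serviceaccounts"))) |
  "\(.metadata.name) -> \(.roleRef.name)"' /tmp/sample-crbs.json
# Prints: sneaky-secret-access -> secret-reader
```

On a real cluster, pipe `kubectl get clusterrolebindings -o json` into the same filter; group-level bindings are a common source of "permissions nobody remembers granting."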

# Test as specific user
kubectl auth can-i create pods --as=jane
# Test as ServiceAccount
kubectl auth can-i get secrets --as=system:serviceaccount:default:myapp
# List all permissions
kubectl auth can-i --list --as=jane
# Why can/can't user do something? (verbose output shows the authorizer's decision)
kubectl auth can-i create pods --as=jane -v=5
# Check who can do something (requires the kubectl-who-can plugin,
# installable via krew; there is no built-in "kubectl auth who-can")
kubectl who-can create pods
kubectl who-can delete secrets -n production

  • Kubernetes doesn’t have a ‘deny’ rule. RBAC is purely additive—you can only grant permissions, not explicitly deny them. To restrict access, simply don’t grant it.

  • The ‘system:masters’ group is hardcoded to have cluster-admin. You can’t remove it via RBAC. If someone is in this group, they have full access.

  • RBAC has built-in escalation prevention: by default you can only grant permissions you already hold, and only bind subjects to roles you are permitted to bind. The ‘escalate’ and ‘bind’ verbs are explicit, auditable exceptions to those checks, which is exactly why granting them is so dangerous.

  • Aggregated ClusterRoles (like admin, edit, view) automatically include rules from other roles labeled with the aggregation label. This is how CRDs extend built-in roles.
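Aggregation is driven by label selectors on the contributing roles. A minimal sketch (the role name and CRD group are hypothetical; the label follows the built-in `view` role's convention):

```yaml
# Any ClusterRole carrying this label has its rules automatically
# merged into the built-in "view" ClusterRole by the controller manager
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-widgets            # illustrative CRD-extension role
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["example.com"]    # hypothetical CRD group
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]
```

This also matters for audits: the effective rules of `view` are larger than its own manifest suggests, so always inspect aggregated roles with `kubectl get clusterrole view -o yaml` rather than trusting the source files.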


| Mistake | Why It Hurts | Solution |
| --- | --- | --- |
| Giving cluster-admin to developers | Full access to everything | Use edit or custom roles |
| Using ClusterRoles when a Role works | Excessive scope | Prefer namespace-scoped Roles |
| Wildcards in production | No meaningful access control | List specific permissions |
| Not auditing bindings | Unknown who has access | Regular RBAC reviews |
| Ignoring ServiceAccount defaults | Default SA may have permissions | Disable auto-mount, use specific SAs |

  1. During a security audit, you discover a ClusterRole with apiGroups: ["*"], resources: ["*"], verbs: ["*"] bound to a ServiceAccount called deploy-bot in the ci-cd namespace. The CI/CD team says they need broad access to deploy applications. How do you reduce the risk while keeping their pipeline functional?

    Answer: A wildcard ClusterRole is effectively `cluster-admin` -- if the CI/CD pipeline is compromised, an attacker controls the entire cluster. Replace it with a scoped Role (not ClusterRole) in the target namespaces, granting only the specific resources and verbs the pipeline needs: typically `create`, `update`, `patch` on `deployments`, `services`, `configmaps`, and `secrets` in specific namespaces. Use `resourceNames` where possible. The pipeline should never need access to RBAC resources, nodes, or cluster-wide secrets. Audit with `kubectl auth can-i --list` before and after to verify the reduction.
  2. A penetration tester reports they escalated from a compromised application pod to cluster-admin. The pod’s ServiceAccount only had get and list on pods. Investigation reveals the SA also had create on pods in a namespace where a ServiceAccount with a cluster-admin binding existed. Explain the escalation path.

    Answer: The attacker created a new pod with `serviceAccountName` set to the cluster-admin ServiceAccount and `automountServiceAccountToken: true`. When the pod started, the cluster-admin token was mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token`. The attacker exec'd into the pod and used the token to call the API with full cluster-admin privileges. This is why pod creation is a dangerous permission -- it's an indirect escalation path. Prevention: keep powerful ServiceAccounts in namespaces where untrusted principals cannot create pods, and use Pod Security Admission to restrict what created pods can do.
  3. Your SOC team detects unusual API calls: someone is listing secrets across all namespaces using a ServiceAccount from the monitoring namespace. The monitoring team says their tools only need pod metrics. How do you trace the source of these permissions and fix it?

    Answer: Trace the permissions: run `kubectl get clusterrolebindings -o json | jq '.items[] | select(.subjects[]?.name == "<sa-name>")'` to find which ClusterRoleBinding grants the access. Then inspect the referenced ClusterRole with `kubectl get clusterrole <role-name> -o yaml`. Likely someone bound the SA to `view` or `cluster-admin` instead of creating a custom role. Fix by deleting the overpermissive binding and creating a new Role with only `get` and `list` on `pods` and `pods/metrics` in the namespaces the monitoring tool actually needs. Verify with `kubectl auth can-i get secrets --as=system:serviceaccount:monitoring:<sa-name>` -- it should return "no."
  4. A junior admin creates a ClusterRole that includes verbs: ["create"] on clusterrolebindings in the rbac.authorization.k8s.io API group and assigns it to a developer. The admin says “it’s fine, they can only create bindings, not modify existing ones.” Why is this a critical security misconfiguration?

    Answer: The ability to create ClusterRoleBindings is one of the most dangerous permissions in Kubernetes. The developer can create a new ClusterRoleBinding that binds themselves (or any ServiceAccount) to the `cluster-admin` ClusterRole, instantly gaining full cluster control. Kubernetes limits this with escalation prevention and the `bind` verb -- but if the developer also has `bind` on ClusterRoles, the escalation is trivial. Even without `bind`, creating bindings to powerful roles the developer is permitted to bind is dangerous. Only cluster administrators should ever have write access to RBAC resources. Audit `escalate` and `bind` verbs regularly.

Task: Audit and fix overpermissive RBAC.

# Setup: Create overpermissive configuration
kubectl create namespace rbac-test
kubectl create serviceaccount admin-app -n rbac-test
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: overpermissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-app-binding
subjects:
- kind: ServiceAccount
  name: admin-app
  namespace: rbac-test
roleRef:
  kind: ClusterRole
  name: overpermissive
  apiGroup: rbac.authorization.k8s.io
EOF
# Task 1: Audit the permissions
kubectl auth can-i --list --as=system:serviceaccount:rbac-test:admin-app
# Task 2: Check if it can read secrets (it shouldn't!)
kubectl auth can-i get secrets --as=system:serviceaccount:rbac-test:admin-app
# Task 3: Create a restricted role (only pods in namespace)
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager
  namespace: rbac-test
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "delete"]
EOF
# Task 4: Replace the ClusterRoleBinding with RoleBinding
kubectl delete clusterrolebinding admin-app-binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-app-binding
  namespace: rbac-test
subjects:
- kind: ServiceAccount
  name: admin-app
  namespace: rbac-test
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
EOF
# Task 5: Verify permissions are now restricted
kubectl auth can-i get secrets --as=system:serviceaccount:rbac-test:admin-app
# Should return "no"
kubectl auth can-i get pods --as=system:serviceaccount:rbac-test:admin-app -n rbac-test
# Should return "yes"
kubectl auth can-i get pods --as=system:serviceaccount:rbac-test:admin-app -n default
# Should return "no" (namespace-scoped)
# Cleanup
kubectl delete namespace rbac-test
kubectl delete clusterrole overpermissive

Success criteria: ServiceAccount can only manage pods in its own namespace.


RBAC Security Principles:

  • Least privilege always
  • No wildcards in production
  • Prefer Role over ClusterRole
  • Use resourceNames when possible

Dangerous Patterns:

  • Wildcard permissions (`*` in apiGroups, resources, or verbs)
  • Secrets access without need
  • RBAC modification rights
  • bind/escalate verbs

Auditing Commands:

  • kubectl auth can-i --list --as=...
  • kubectl who-can <verb> <resource> (who-can plugin via krew)
  • Check ClusterRoleBindings to cluster-admin

Exam Tips:

  • Know how to reduce permissions
  • Practice finding overpermissive roles
  • Understand escalation paths

Next: Module 2.2: ServiceAccount Security - Hardening ServiceAccounts and token management.