
Module 1.6: RBAC - Role-Based Access Control

Hands-On Lab Available: K8s Cluster · intermediate · 45 min (launches in Killercoda)

Complexity: [MEDIUM] - Common exam topic

Time to Complete: 40-50 minutes

Prerequisites: Module 1.1 (Control Plane), understanding of namespaces


After this module, you will be able to:

  • Configure Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings for least-privilege access
  • Debug “forbidden” errors by tracing the RBAC chain (user → binding → role → permission)
  • Design an RBAC scheme for a multi-team cluster with namespace isolation
  • Audit existing RBAC rules to find overly permissive access (wildcard verbs, cluster-admin bindings)

In a real cluster, you don’t want everyone to have admin access. Developers should deploy their apps but not delete production namespaces. CI/CD systems should manage deployments but not read secrets. Monitoring tools should read metrics but not modify resources.

RBAC (Role-Based Access Control) solves this. It’s how Kubernetes answers: “Who can do what to which resources?”

The CKA exam regularly tests RBAC. You’ll be asked to create Roles, ClusterRoles, and bind them to users or ServiceAccounts. Get comfortable with these concepts—they’re essential for security and daily operations.

The Security Guard Analogy

Think of RBAC like a building’s security system. A Role is like an access badge type—“Developer Badge” can access floors 2-3, “Admin Badge” can access all floors. A RoleBinding is giving someone a specific badge—“Alice gets a Developer Badge.” The security system (API server) checks the badge before allowing entry to any floor (resource).



| Resource | Scope | Purpose |
|---|---|---|
| Role | Namespace | Grants permissions within a namespace |
| ClusterRole | Cluster | Grants permissions cluster-wide |
| RoleBinding | Namespace | Binds a Role/ClusterRole to subjects in a namespace |
| ClusterRoleBinding | Cluster | Binds a ClusterRole to subjects cluster-wide |
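The key rule hiding in this table: the *binding* kind, not the role kind, decides where permissions apply. A RoleBinding always scopes its grant to its own namespace, even when it references a ClusterRole. A minimal illustrative sketch (the function name is ours, not a Kubernetes API):

```python
# Illustrative model of RBAC scoping (not a real Kubernetes API):
# the binding's kind determines the scope of the granted permissions.
def effective_scope(role_kind: str, binding_kind: str) -> str:
    if binding_kind == "RoleBinding":
        # A RoleBinding always scopes permissions to its own namespace,
        # even when it references a ClusterRole (the reuse pattern below).
        return "namespace"
    if binding_kind == "ClusterRoleBinding":
        if role_kind != "ClusterRole":
            raise ValueError("a ClusterRoleBinding can only reference a ClusterRole")
        return "cluster"
    raise ValueError(f"unknown binding kind: {binding_kind}")

print(effective_scope("Role", "RoleBinding"))                # namespace
print(effective_scope("ClusterRole", "RoleBinding"))         # namespace (per-namespace reuse)
print(effective_scope("ClusterRole", "ClusterRoleBinding"))  # cluster
```

The second case is the one people forget: ClusterRole + RoleBinding grants namespace-scoped access, which is exactly the reuse pattern covered in Part 3.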
```
┌──────────────────────────────────────────────────────────────┐
│                          RBAC Flow                           │
│                                                              │
│   Subject            Role                    Resources       │
│   (Who?)             (What permissions?)     (Which things?) │
│                                                              │
│  ┌──────────────┐         ┌─────────────┐     ┌──────────┐   │
│  │ User (alice) │         │ Role        │     │ pods     │   │
│  │ Group        │◄───────►│  verbs:     │────►│ services │   │
│  │ ServiceAcct  │  bound  │   - get     │     │ secrets  │   │
│  └──────────────┘  via a  │   - list    │     └──────────┘   │
│                   Binding │   - create  │                    │
│                           └─────────────┘                    │
└──────────────────────────────────────────────────────────────┘
```
  • User: Human identity (managed outside Kubernetes)
  • Group: Collection of users
  • ServiceAccount: Identity for pods/applications
| Verb | Description |
|---|---|
| `get` | Read a single resource |
| `list` | List resources (get all) |
| `watch` | Watch for changes |
| `create` | Create new resources |
| `update` | Modify existing resources |
| `patch` | Partially modify resources |
| `delete` | Delete resources |
| `deletecollection` | Delete multiple resources |

Common verb groups:

  • Read-only: get, list, watch
  • Read-write: get, list, watch, create, update, patch, delete
  • Full control: * (all verbs)
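These groupings are just sets of verbs, and a rule's `*` wildcard matches every verb. A small sketch (the set names are ours, for illustration only) makes the relationships explicit:

```python
# Illustrative verb groups as Python sets (names are ours, not a k8s API).
READ_ONLY = {"get", "list", "watch"}
READ_WRITE = READ_ONLY | {"create", "update", "patch", "delete"}

def allows(granted_verbs: set, requested: str) -> bool:
    # A rule with "*" grants every verb; otherwise the verb must be listed.
    return "*" in granted_verbs or requested in granted_verbs

print(allows(READ_ONLY, "list"))          # True
print(allows(READ_ONLY, "delete"))        # False
print(allows({"*"}, "deletecollection"))  # True
```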

```yaml
# role-pod-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]  # "" = core API group (pods, services, etc.)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```
```bash
# Apply the Role
kubectl apply -f role-pod-reader.yaml

# Or create imperatively
kubectl create role pod-reader \
  --verb=get,list,watch \
  --resource=pods \
  -n default
```

2.2 Creating a ClusterRole (Cluster-Scoped)

```yaml
# clusterrole-node-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```
```bash
# Apply
kubectl apply -f clusterrole-node-reader.yaml

# Or imperatively
kubectl create clusterrole node-reader \
  --verb=get,list,watch \
  --resource=nodes
```

Pause and predict: You create a Role with verbs: ["get", "list"] for resources: ["pods"] in namespace dev. Can the user with this Role see pods in the production namespace? What about cluster-scoped resources like Nodes?

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: dev
rules:
# Pods: full access
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec"]
  verbs: ["*"]
# Deployments: full access
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["*"]
# Services: view, create, and delete
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "create", "delete"]
# ConfigMaps: read only
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
# Secrets: no access (not listed = denied)
```
| API Group | Resources |
|---|---|
| `""` (core) | pods, services, configmaps, secrets, namespaces, nodes, persistentvolumes |
| `apps` | deployments, replicasets, statefulsets, daemonsets |
| `batch` | jobs, cronjobs |
| `networking.k8s.io` | networkpolicies, ingresses |
| `rbac.authorization.k8s.io` | roles, clusterroles, rolebindings, clusterrolebindings |
| `storage.k8s.io` | storageclasses, volumeattachments |
```bash
# Find the API group for any resource
kubectl api-resources | grep deployment
# NAME         SHORTNAMES   APIVERSION   NAMESPACED   KIND
# deployments  deploy       apps/v1      true         Deployment
#                           ^^^^
#                           API group is "apps"
```

Gotcha: Core API Group

The core API group is an empty string "". Resources like pods, services, configmaps use apiGroups: [""], not apiGroups: ["core"].


Part 3: RoleBindings and ClusterRoleBindings


Binds a Role or ClusterRole to subjects within a namespace:

```yaml
# rolebinding-alice-pod-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-pod-reader
  namespace: default
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Imperative command
kubectl create rolebinding alice-pod-reader \
  --role=pod-reader \
  --user=alice \
  -n default
```

Binds a ClusterRole to subjects cluster-wide:

```yaml
# clusterrolebinding-bob-node-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bob-node-reader
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Imperative command
kubectl create clusterrolebinding bob-node-reader \
  --clusterrole=node-reader \
  --user=bob
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-access
  namespace: development
subjects:
# Bind to a user
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
# Bind to a group
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
# Bind to a ServiceAccount
- kind: ServiceAccount
  name: cicd-deployer
  namespace: development
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

Stop and think: You need to give a developer read-only access to pods in the staging namespace but not production. Would you use a Role or ClusterRole? Would you use a RoleBinding or ClusterRoleBinding? There’s more than one correct answer — think about which approach is most reusable.

A powerful pattern: define a ClusterRole once, bind it in specific namespaces:

```yaml
# Use the built-in "edit" ClusterRole in the "production" namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit-production
  namespace: production
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole  # Using a ClusterRole...
  name: edit         # ...the built-in "edit"
  apiGroup: rbac.authorization.k8s.io
# Alice can edit resources in the "production" namespace only
```

ServiceAccounts provide identity for pods. When a pod runs, it can use its ServiceAccount’s permissions to talk to the API server.
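Concretely, the ServiceAccount's credentials are mounted into the pod at `/var/run/secrets/kubernetes.io/serviceaccount/`, and clients authenticate by sending the token as an HTTP bearer token. A minimal sketch of that mechanism, simulating the mount with a temp file so it runs anywhere (the helper name is ours):

```python
import os
import tempfile

# In a real pod, the token lives at
# /var/run/secrets/kubernetes.io/serviceaccount/token (mounted automatically).
# Here we simulate that mount with a temp file so the sketch is runnable anywhere.
def bearer_header(token_path: str) -> dict:
    """Build the Authorization header a pod client sends to the API server."""
    with open(token_path) as f:
        token = f.read().strip()
    return {"Authorization": f"Bearer {token}"}

with tempfile.TemporaryDirectory() as d:
    token_path = os.path.join(d, "token")
    with open(token_path, "w") as f:
        f.write("example-jwt-token\n")
    print(bearer_header(token_path))  # {'Authorization': 'Bearer example-jwt-token'}
```

The API server validates the token, maps it to `system:serviceaccount:<namespace>:<name>`, and only then evaluates RBAC rules against that identity.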

```bash
# List ServiceAccounts
kubectl get serviceaccounts
kubectl get sa

# Every namespace has a "default" ServiceAccount
kubectl get sa default -o yaml
```
```bash
# Create a ServiceAccount
kubectl create serviceaccount myapp-sa

# Or with YAML
cat > myapp-sa.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: default
EOF
kubectl apply -f myapp-sa.yaml
```
```bash
# Create a Role
kubectl create role pod-reader \
  --verb=get,list,watch \
  --resource=pods

# Bind it to the ServiceAccount
kubectl create rolebinding myapp-pod-reader \
  --role=pod-reader \
  --serviceaccount=default:myapp-sa
  #                ^^^^^^^^^^^^^^^^
  #                namespace:name format
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  serviceAccountName: myapp-sa  # Use this ServiceAccount
  containers:
  - name: myapp
    image: nginx
```

The pod now has the permissions granted to myapp-sa.

Did You Know?

By default, pods use the default ServiceAccount in their namespace. This account typically has no permissions. Always create dedicated ServiceAccounts with minimal required permissions.


Kubernetes comes with useful ClusterRoles:

| ClusterRole | Permissions |
|---|---|
| `cluster-admin` | Full access to everything (superuser) |
| `admin` | Full access within a namespace |
| `edit` | Read/write most resources, but cannot modify RBAC |
| `view` | Read-only access to most resources |
```bash
# See all built-in ClusterRoles
kubectl get clusterroles | grep -v "^system:"

# Inspect a ClusterRole
kubectl describe clusterrole edit
```
```bash
# Give alice admin access to namespace "myapp"
kubectl create rolebinding alice-admin \
  --clusterrole=admin \
  --user=alice \
  -n myapp

# Give bob view access to namespace "production"
kubectl create rolebinding bob-view \
  --clusterrole=view \
  --user=bob \
  -n production
```

What would happen if: You create two RoleBindings in the same namespace — one grants a user get on pods, the other grants delete on pods. Does the user get both permissions, or does one override the other? What if you wanted to explicitly deny delete?

Check if you (or someone else) can perform an action:

```bash
# Check your own permissions
kubectl auth can-i create pods
kubectl auth can-i delete deployments
kubectl auth can-i '*' '*'   # Am I admin?

# Check in a specific namespace
kubectl auth can-i create pods -n production

# Check for another user (requires admin)
kubectl auth can-i create pods --as=alice
kubectl auth can-i delete nodes --as=bob

# Check for a ServiceAccount
kubectl auth can-i list secrets --as=system:serviceaccount:default:myapp-sa
```
```bash
# What can I do in this namespace?
kubectl auth can-i --list

# What can alice do?
kubectl auth can-i --list --as=alice

# What can a ServiceAccount do?
kubectl auth can-i --list --as=system:serviceaccount:default:myapp-sa
```
```bash
# Error: pods is forbidden
kubectl get pods
# Error: User "alice" cannot list resource "pods" in API group "" in namespace "default"

# Debug steps:
# 1. Check what permissions the user has
kubectl auth can-i --list --as=alice

# 2. Check what roles are bound to the user
kubectl get rolebindings -A -o wide | grep alice
kubectl get clusterrolebindings -o wide | grep alice

# 3. Check the role's rules
kubectl describe role <role-name> -n <namespace>
kubectl describe clusterrole <clusterrole-name>
```

War Story: The 403 Mystery

An engineer spent hours debugging why their CI/CD pipeline couldn’t deploy. kubectl auth can-i showed permissions were correct. The issue? The ServiceAccount was in namespace cicd, but the RoleBinding was in namespace production with a typo: namespace: prduction. One missing letter, hours of debugging. Always double-check namespaces in bindings.
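A small lint like the following would have caught that typo immediately. This is an illustrative sketch, not a real tool: it operates on a binding already parsed into a dict (e.g. from `kubectl get rolebinding -o yaml`) and flags ServiceAccount subjects whose namespace is not in a known-good set:

```python
# Illustrative lint (not a real kubectl feature): flag RoleBinding subjects
# whose ServiceAccount namespace is not a namespace we know exists.
def lint_subjects(binding: dict, known_namespaces: set) -> list:
    problems = []
    for subj in binding.get("subjects", []):
        if subj.get("kind") == "ServiceAccount":
            ns = subj.get("namespace")
            if ns not in known_namespaces:
                problems.append(f"subject {subj['name']}: unknown namespace {ns!r}")
    return problems

binding = {
    "metadata": {"name": "cicd-deploy", "namespace": "production"},
    # The typo from the war story: "prduction" instead of the real namespace
    "subjects": [{"kind": "ServiceAccount", "name": "pipeline", "namespace": "prduction"}],
}
print(lint_subjects(binding, {"default", "cicd", "production"}))
# ["subject pipeline: unknown namespace 'prduction'"]
```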


```bash
# Create namespace
kubectl create namespace development

# Create ServiceAccount
kubectl create serviceaccount developer -n development

# Bind the edit ClusterRole (read/write most resources)
kubectl create rolebinding developer-edit \
  --clusterrole=edit \
  --serviceaccount=development:developer \
  -n development
```
```bash
# ServiceAccount for monitoring tools (assumes the "monitoring" namespace exists)
kubectl create serviceaccount monitoring -n monitoring

# Cluster-wide read access
kubectl create clusterrolebinding monitoring-view \
  --clusterrole=view \
  --serviceaccount=monitoring:monitoring
```
```bash
# Create a role covering deployments, services, and configmaps
kubectl create role deployer \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,services,configmaps \
  -n production

# Bind it to the CI/CD ServiceAccount
kubectl create rolebinding cicd-deployer \
  --role=deployer \
  --serviceaccount=cicd:pipeline \
  -n production
```

```bash
# Task: Create a Role that can get, list, and watch pods and services in namespace "app"
kubectl create role app-reader \
  --verb=get,list,watch \
  --resource=pods,services \
  -n app

# Task: Bind the role to user "john"
kubectl create rolebinding john-app-reader \
  --role=app-reader \
  --user=john \
  -n app

# Verify
kubectl auth can-i get pods -n app --as=john
# yes
kubectl auth can-i delete pods -n app --as=john
# no
```
```bash
# Task: Create ServiceAccount "dashboard" that can list pods across all namespaces
kubectl create serviceaccount dashboard -n kube-system

kubectl create clusterrole pod-list \
  --verb=list \
  --resource=pods

kubectl create clusterrolebinding dashboard-pod-list \
  --clusterrole=pod-list \
  --serviceaccount=kube-system:dashboard
```

  • RBAC is additive. There’s no “deny” rule. If any Role grants a permission, it’s allowed. You can’t explicitly block access—you can only not grant it.

  • Aggregated ClusterRoles let you combine multiple ClusterRoles. The built-in admin, edit, and view roles are aggregated—additional rules can be added to them.

  • ClusterRoles prefixed with system: are for internal Kubernetes components. Don’t modify them unless you know what you’re doing.
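The additive, no-deny model can be expressed directly: a request is allowed if *any* rule from *any* matching binding grants it. This is a sketch of that evaluation, not the real authorizer:

```python
# Sketch of the additive RBAC model (not the real Kubernetes authorizer):
# a request is allowed if ANY bound rule grants it; there is no deny rule
# that can subtract access once granted.
def allowed(rules: list, verb: str, resource: str) -> bool:
    for rule in rules:
        verbs_ok = "*" in rule["verbs"] or verb in rule["verbs"]
        res_ok = "*" in rule["resources"] or resource in rule["resources"]
        if verbs_ok and res_ok:
            return True
    return False  # "not granted" is the only way to say no

# Two bindings granting different verbs on pods simply combine:
rules = [
    {"verbs": ["get"], "resources": ["pods"]},     # from binding A
    {"verbs": ["delete"], "resources": ["pods"]},  # from binding B
]
print(allowed(rules, "get", "pods"))     # True
print(allowed(rules, "delete", "pods"))  # True
print(allowed(rules, "get", "secrets"))  # False - never granted
```

This is also the answer to the earlier "two RoleBindings" question: the user gets the union of both grants, and an explicit deny is simply not expressible in RBAC.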


| Mistake | Problem | Solution |
|---|---|---|
| Wrong apiGroup | Role doesn’t grant access | Check `kubectl api-resources` for the correct group |
| Missing namespace in binding | Wrong permissions | Always verify `-n <namespace>` |
| Forgetting ServiceAccount namespace | Binding doesn’t work | Use the `namespace:name` format |
| Using a Role for cluster resources | Can’t access nodes, PVs | Use a ClusterRole for cluster-scoped resources |
| Empty apiGroup not quoted | YAML error | Use `apiGroups: [""]` with quotes |
| Missing `create` verb on exec/attach subresources | `kubectl exec` silently fails (K8s 1.35+) | Add the `create` verb to `pods/exec`, `pods/attach`, `pods/portforward` — see note below |

K8s 1.35 Breaking Change: WebSocket Streaming RBAC

Starting in Kubernetes 1.35, kubectl exec, attach, and port-forward use WebSocket connections that require the create verb on the relevant subresource (e.g., pods/exec). Previously, only get was needed. Existing RBAC policies that grant get pods/exec will silently fail — commands hang or return permission errors. Audit your ClusterRoles and Roles:

```yaml
# OLD (broken in 1.35+):
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["get"]

# FIXED:
- apiGroups: [""]
  resources: ["pods/exec", "pods/attach", "pods/portforward"]
  verbs: ["get", "create"]
```

  1. Your company has 5 development teams, each with their own namespace. All teams need the same set of permissions: read/write Deployments, Services, and ConfigMaps but no access to Secrets. You could create 5 separate Roles (one per namespace) or use a different approach. What’s the most maintainable way to set this up, and why?

    Answer Create a single ClusterRole with the desired permissions, then create a RoleBinding in each team's namespace that references that ClusterRole. When you bind a ClusterRole with a RoleBinding, the permissions are scoped to that namespace only. This way, if permissions need to change (e.g., adding `pods/log` access), you update one ClusterRole instead of five Roles. The commands would be: `kubectl create clusterrole team-developer --verb=get,list,watch,create,update,delete --resource=deployments,services,configmaps` followed by `kubectl create rolebinding team-dev --clusterrole=team-developer --group=team-alpha -n alpha-ns` for each namespace. This is the standard pattern used by the built-in `edit` and `view` ClusterRoles.
  2. A CI/CD pipeline ServiceAccount in the cicd namespace needs to deploy applications to the production namespace. The team creates a Role in production and a RoleBinding, but kubectl auth can-i create deployments -n production --as=system:serviceaccount:cicd:pipeline returns “no.” The Role and RoleBinding YAML look correct. What’s the most likely mistake?

    Answer The most likely mistake is in the RoleBinding's `subjects` section. When binding to a ServiceAccount from a different namespace, you must specify the ServiceAccount's namespace in the subject: `namespace: cicd`. A common error is omitting the namespace field or setting it to `production` (the RoleBinding's namespace) instead of `cicd` (where the ServiceAccount lives). The correct subject should be: `kind: ServiceAccount, name: pipeline, namespace: cicd`. Another possibility is the Role's `apiGroups` field — Deployments are in the `apps` group, not the core group. Check with `kubectl get rolebinding -n production -o yaml` and verify both the subject namespace and the role's apiGroups match `["apps"]` for deployments.
  3. During a security audit, you find a ClusterRoleBinding that grants cluster-admin to a ServiceAccount called monitoring in the monitoring namespace. The monitoring tool only needs to read pod metrics across all namespaces. Why is this dangerous, and what’s the least-privilege replacement?

    Answer `cluster-admin` grants unrestricted access to everything in the cluster — create, delete, and modify any resource in any namespace, including Secrets, RBAC rules, and even the ability to escalate its own privileges. If the monitoring pod is compromised, an attacker gains full cluster control. The least-privilege replacement is to create a ClusterRole with only the read permissions needed: `kubectl create clusterrole monitoring-reader --verb=get,list,watch --resource=pods,nodes,namespaces` and bind it with a ClusterRoleBinding. If the tool needs metrics specifically, add `pods/metrics` or `nodes/metrics` resources. The principle is: RBAC is additive (there's no "deny"), so grant only what's needed. Audit regularly with `kubectl auth can-i --list --as=system:serviceaccount:monitoring:monitoring` to verify permissions are minimal.
  4. A developer runs kubectl exec -it my-pod -- bash in the dev namespace and gets “forbidden.” You check their Role and see it grants ["get", "list", "watch"] on ["pods", "pods/exec"]. On a Kubernetes 1.34 cluster this worked fine, but after upgrading to 1.35 it broke. What changed and how do you fix it?

    Answer Starting in Kubernetes 1.35, `kubectl exec`, `attach`, and `port-forward` use WebSocket connections that require the `create` verb on the relevant subresource. Previously, only `get` was needed for `pods/exec`. The fix is to add the `create` verb to the Role rule for subresources: update the rule to `verbs: ["get", "create"]` for `resources: ["pods/exec", "pods/attach", "pods/portforward"]`. This is a breaking change that affects any existing RBAC policies relying on the old `get`-only pattern. Audit all Roles and ClusterRoles that reference `pods/exec` by running `kubectl get roles,clusterroles -A -o yaml | grep -B5 "pods/exec"` to find and update them all.

Task: Set up RBAC for a development team.

Steps:

  1. Create a namespace:

```bash
kubectl create namespace dev-team
```

  2. Create a ServiceAccount:

```bash
kubectl create serviceaccount dev-sa -n dev-team
```

  3. Create a Role for developers:

```bash
kubectl create role developer \
  --verb=get,list,watch,create,update,delete \
  --resource=pods,deployments,services,configmaps \
  -n dev-team
```

  4. Bind the Role to the ServiceAccount:

```bash
kubectl create rolebinding dev-sa-developer \
  --role=developer \
  --serviceaccount=dev-team:dev-sa \
  -n dev-team
```

  5. Test the permissions:

```bash
# Test as the ServiceAccount
kubectl auth can-i get pods -n dev-team \
  --as=system:serviceaccount:dev-team:dev-sa
# yes

kubectl auth can-i delete pods -n dev-team \
  --as=system:serviceaccount:dev-team:dev-sa
# yes

kubectl auth can-i get secrets -n dev-team \
  --as=system:serviceaccount:dev-team:dev-sa
# no (we didn't grant access to secrets)

kubectl auth can-i get pods -n default \
  --as=system:serviceaccount:dev-team:dev-sa
# no (role only applies in dev-team namespace)
```

  6. Create a pod using the ServiceAccount:

```bash
cat > dev-pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dev-shell
  namespace: dev-team
spec:
  serviceAccountName: dev-sa
  containers:
  - name: shell
    image: bitnami/kubectl
    command: ["sleep", "infinity"]
EOF
kubectl apply -f dev-pod.yaml
```

  7. Test from inside the pod:

```bash
kubectl exec -it dev-shell -n dev-team -- /bin/bash

# Inside the pod:
kubectl get pods             # Should work
kubectl get secrets          # Should fail (forbidden)
kubectl get pods -n default  # Should fail (forbidden)
exit
```

  8. Add read-only cluster access (bonus):

```bash
kubectl create clusterrolebinding dev-sa-view \
  --clusterrole=view \
  --serviceaccount=dev-team:dev-sa

# Now the ServiceAccount can read resources cluster-wide
kubectl auth can-i get pods -n default \
  --as=system:serviceaccount:dev-team:dev-sa
# yes (but read-only)
```

  9. Cleanup:

```bash
kubectl delete namespace dev-team
kubectl delete clusterrolebinding dev-sa-view
rm dev-pod.yaml
```

Success Criteria:

  • Can create Roles and ClusterRoles
  • Can create RoleBindings and ClusterRoleBindings
  • Can bind to Users, Groups, and ServiceAccounts
  • Can test permissions with kubectl auth can-i
  • Understand namespace vs cluster scope

Drill 1: RBAC Speed Test (Target: 3 minutes)


Create RBAC resources as fast as possible:

```bash
# Create namespace
kubectl create ns rbac-drill

# Create ServiceAccount
kubectl create sa drill-sa -n rbac-drill

# Create Role (read pods)
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n rbac-drill

# Create RoleBinding
kubectl create rolebinding drill-binding --role=pod-reader --serviceaccount=rbac-drill:drill-sa -n rbac-drill

# Test
kubectl auth can-i get pods -n rbac-drill --as=system:serviceaccount:rbac-drill:drill-sa

# Cleanup
kubectl delete ns rbac-drill
```

Drill 2: Permission Testing (Target: 5 minutes)

```bash
kubectl create ns perm-test
kubectl create sa test-sa -n perm-test

# Create limited role
kubectl create role limited --verb=get,list --resource=pods,services -n perm-test
kubectl create rolebinding limited-binding --role=limited --serviceaccount=perm-test:test-sa -n perm-test

# Test various permissions
echo "=== Testing as test-sa ==="
kubectl auth can-i get pods -n perm-test --as=system:serviceaccount:perm-test:test-sa      # yes
kubectl auth can-i create pods -n perm-test --as=system:serviceaccount:perm-test:test-sa   # no
kubectl auth can-i get secrets -n perm-test --as=system:serviceaccount:perm-test:test-sa   # no
kubectl auth can-i get pods -n default --as=system:serviceaccount:perm-test:test-sa        # no
kubectl auth can-i get services -n perm-test --as=system:serviceaccount:perm-test:test-sa  # yes

# Cleanup
kubectl delete ns perm-test
```

Drill 3: ClusterRole vs Role (Target: 5 minutes)

```bash
# Create namespaces
kubectl create ns ns-a
kubectl create ns ns-b
kubectl create sa cross-ns-sa -n ns-a

# Option 1: Role (namespace-scoped) - only works in ns-a
kubectl create role ns-a-reader --verb=get,list --resource=pods -n ns-a
kubectl create rolebinding ns-a-binding --role=ns-a-reader --serviceaccount=ns-a:cross-ns-sa -n ns-a

# Test
kubectl auth can-i get pods -n ns-a --as=system:serviceaccount:ns-a:cross-ns-sa  # yes
kubectl auth can-i get pods -n ns-b --as=system:serviceaccount:ns-a:cross-ns-sa  # no

# Option 2: ClusterRole + RoleBinding (still a namespace-scoped binding)
kubectl create clusterrole pod-reader-cluster --verb=get,list --resource=pods
kubectl create rolebinding ns-b-binding -n ns-b --clusterrole=pod-reader-cluster --serviceaccount=ns-a:cross-ns-sa

# Now the SA can read pods in ns-b too
kubectl auth can-i get pods -n ns-b --as=system:serviceaccount:ns-a:cross-ns-sa  # yes

# Cleanup
kubectl delete ns ns-a ns-b
kubectl delete clusterrole pod-reader-cluster
```

Drill 4: Troubleshooting - Permission Denied (Target: 5 minutes)

```bash
# Setup: Create an SA with an intentionally wrong binding
kubectl create ns debug-rbac
kubectl create sa debug-sa -n debug-rbac
kubectl create role secret-reader --verb=get,list --resource=secrets -n debug-rbac

# WRONG: binding the role to a different SA name
kubectl create rolebinding wrong-binding --role=secret-reader --serviceaccount=debug-rbac:other-sa -n debug-rbac

# User reports: "I can't read secrets!"
kubectl auth can-i get secrets -n debug-rbac --as=system:serviceaccount:debug-rbac:debug-sa
# no

# YOUR TASK: Diagnose and fix
```
Solution
```bash
# Check what the rolebinding references
kubectl get rolebinding wrong-binding -n debug-rbac -o yaml | grep -A5 subjects
# Shows: other-sa, not debug-sa

# Fix: Create the correct binding
kubectl delete rolebinding wrong-binding -n debug-rbac
kubectl create rolebinding correct-binding --role=secret-reader --serviceaccount=debug-rbac:debug-sa -n debug-rbac

# Verify
kubectl auth can-i get secrets -n debug-rbac --as=system:serviceaccount:debug-rbac:debug-sa
# yes

# Cleanup
kubectl delete ns debug-rbac
```

Drill 5: Aggregate ClusterRoles (Target: 5 minutes)

```bash
# Create an aggregated role
cat << 'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
EOF

# The built-in 'view' ClusterRole automatically includes rules from
# any ClusterRole with the label aggregate-to-view: "true"

# Check what 'view' includes
kubectl get clusterrole view -o yaml | grep -A20 "rules:"

# Cleanup
kubectl delete clusterrole aggregate-reader
```

Drill 6: RBAC for User (Target: 5 minutes)

```bash
# Create a role for hypothetical user "alice"
kubectl create ns alice-ns
kubectl create role alice-admin --verb='*' --resource='*' -n alice-ns
kubectl create rolebinding alice-is-admin --role=alice-admin --user=alice -n alice-ns

# Test as alice
kubectl auth can-i create deployments -n alice-ns --as=alice  # yes
kubectl auth can-i delete pods -n alice-ns --as=alice         # yes
kubectl auth can-i get secrets -n default --as=alice          # no (different ns)
kubectl auth can-i create namespaces --as=alice               # no (cluster scope)

# List what alice can do
kubectl auth can-i --list -n alice-ns --as=alice

# Cleanup
kubectl delete ns alice-ns
```

Drill 7: Challenge - Least Privilege Setup


Create RBAC for a “deployment-manager” that can:

  • Create, update, delete Deployments in namespace app
  • View (but not modify) Services in namespace app
  • View Pods in any namespace (read-only cluster-wide)
```bash
kubectl create ns app
# YOUR TASK: Create the necessary Role, ClusterRole, and bindings
```
Solution
```bash
# Role for deployment management in the 'app' namespace
kubectl create role deployment-manager \
  --verb=create,update,delete,get,list,watch \
  --resource=deployments \
  -n app

# Role for service viewing in the 'app' namespace
kubectl create role service-viewer \
  --verb=get,list,watch \
  --resource=services \
  -n app

# ClusterRole for cluster-wide pod viewing
kubectl create clusterrole pod-viewer \
  --verb=get,list,watch \
  --resource=pods

# Create the ServiceAccount
kubectl create sa deployment-manager -n app

# Bind all roles
kubectl create rolebinding dm-deployments \
  --role=deployment-manager \
  --serviceaccount=app:deployment-manager \
  -n app

kubectl create rolebinding dm-services \
  --role=service-viewer \
  --serviceaccount=app:deployment-manager \
  -n app

kubectl create clusterrolebinding dm-pods \
  --clusterrole=pod-viewer \
  --serviceaccount=app:deployment-manager

# Test
kubectl auth can-i create deployments -n app --as=system:serviceaccount:app:deployment-manager  # yes
kubectl auth can-i delete services -n app --as=system:serviceaccount:app:deployment-manager     # no
kubectl auth can-i get pods -n default --as=system:serviceaccount:app:deployment-manager        # yes

# Cleanup
kubectl delete ns app
kubectl delete clusterrole pod-viewer
kubectl delete clusterrolebinding dm-pods
```

Module 1.7: kubeadm Basics - Cluster bootstrap and node management.