Module 1.7: kubeadm Basics - Cluster Bootstrap

Hands-On Lab Available: K8s Cluster · Intermediate · 40 min
Launch Lab ↗ (opens in Killercoda in a new tab)

Complexity: [MEDIUM] - Essential cluster management

Time to Complete: 35-45 minutes

Prerequisites: Module 1.1 (Control Plane), Module 1.2 (Extension Interfaces)


After this module, you will be able to:

  • Upgrade a kubeadm cluster safely (control plane first, then workers, one at a time)
  • Back up and restore etcd snapshots for disaster recovery
  • Manage kubeadm certificates (check expiry, renew, rotate)
  • Troubleshoot kubeadm upgrade failures by reading component logs and checking version compatibility

kubeadm is the official tool for creating Kubernetes clusters. The CKA exam environment uses kubeadm-based clusters, and you’ll need to understand how they work.

While the 2025 curriculum deprioritizes cluster upgrades, you still need to know:

  • How clusters are bootstrapped
  • How to join nodes
  • Where control plane components live
  • Basic cluster maintenance tasks

Understanding kubeadm helps you troubleshoot cluster issues and understand what’s happening under the hood.

The Construction Blueprint Analogy

Think of kubeadm like a construction foreman with blueprints. When you say “init,” it follows the blueprints to build the control plane—laying the foundation (certificates), erecting the framework (static pods), and connecting utilities (networking). When workers (nodes) arrive, it gives them instructions to join the team. The foreman doesn’t build the house alone; it orchestrates the process.


By the end of this module, you’ll be able to:

  • Understand what kubeadm does during cluster creation
  • Bootstrap a control plane node
  • Join worker nodes to a cluster
  • Understand static pods and manifests
  • Perform basic node management

kubeadm automates cluster setup:

┌────────────────────────────────────────────────────────────────┐
│ kubeadm init Process │
│ │
│ 1. Pre-flight Checks │
│ └── Verify system requirements (CPU, memory, ports) │
│ │
│ 2. Generate Certificates │
│ └── CA, API server, kubelet, etcd certificates │
│ └── Stored in /etc/kubernetes/pki/ │
│ │
│ 3. Generate kubeconfig Files │
│ └── admin.conf, kubelet.conf, controller-manager.conf │
│ └── Stored in /etc/kubernetes/ │
│ │
│ 4. Generate Static Pod Manifests │
│ └── API server, controller-manager, scheduler, etcd │
│ └── Stored in /etc/kubernetes/manifests/ │
│ │
│ 5. Start kubelet │
│ └── kubelet reads manifests and starts control plane │
│ │
│ 6. Apply Cluster Configuration │
│ └── CoreDNS, kube-proxy DaemonSet │
│ │
│ 7. Generate Join Token │
│ └── For worker nodes to join │
│ │
└────────────────────────────────────────────────────────────────┘

What kubeadm does not do (these remain your responsibility):

  • Install container runtime (containerd) - you do this first
  • Install kubelet/kubectl - you install these first
  • Install CNI plugin - you apply this after init
  • Configure load balancers - that’s your infrastructure
  • Set up HA - requires additional configuration

Before running kubeadm:

Terminal window
# Required on ALL nodes:
# 1. Container runtime (containerd)
# 2. kubelet
# 3. kubeadm
# 4. kubectl (at least on control plane)
# 5. Swap disabled
# 6. Required ports open
# 7. Unique hostname, MAC, product_uuid
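
A minimal prep sketch for a Debian/Ubuntu-style node (assumes the Kubernetes package repository is already configured; package names, module names, and sysctl keys follow the upstream install guide and may differ on your distro):

Terminal window
# Disable swap now and on every reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Kernel modules needed by the container runtime and CNI
sudo modprobe overlay
sudo modprobe br_netfilter
# Required sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# Install the runtime and the Kubernetes tools, then pin their versions
sudo apt-get update
sudo apt-get install -y containerd kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl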

Terminal window
# Initialize control plane
sudo kubeadm init
# With specific pod network CIDR (required by some CNIs)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# With specific API server address (for HA or custom networking)
sudo kubeadm init --apiserver-advertise-address=192.168.1.10
# With specific Kubernetes version
sudo kubeadm init --kubernetes-version=v1.35.0

Pause and predict: After running kubeadm init, you immediately try kubectl get nodes as a regular user and it fails. Why can’t you use kubectl yet, even though the cluster is initialized?

kubectl needs a kubeconfig. kubeadm writes the cluster-admin credentials to /etc/kubernetes/admin.conf, which only root can read, so copy it into your user's home (or point KUBECONFIG at it):

Terminal window
# For regular user (recommended)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For root user
export KUBECONFIG=/etc/kubernetes/admin.conf

Next, install a CNI plugin:

Terminal window
# Without CNI, pods won't get IPs and CoreDNS won't start
# Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
# Flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Cilium
cilium install

Verify the control plane is healthy:

Terminal window
# Check nodes
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# control-plane Ready control-plane 5m v1.35.0
# Check system pods
kubectl get pods -n kube-system
# Should see: coredns, etcd, kube-apiserver, kube-controller-manager,
# kube-proxy, kube-scheduler, CNI pods

Did You Know?

The kubeadm init output includes a kubeadm join command with a token. This token expires in 24 hours by default. Save it, or you’ll need to generate a new one.


After kubeadm init, you get a join command:

Terminal window
# Example output from kubeadm init
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:abc123...

Run this on worker nodes:

Terminal window
# On worker node (as root)
sudo kubeadm join 192.168.1.10:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:abc123...

If the token expired:

Terminal window
# On control plane - create new token
kubeadm token create --print-join-command
# Or manually:
# 1. Create token
kubeadm token create
# 2. Get CA cert hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# 3. Construct join command
kubeadm join <control-plane-ip>:6443 --token <new-token> \
--discovery-token-ca-cert-hash sha256:<hash>
Terminal window
# List existing tokens
kubeadm token list
# Delete a token
kubeadm token delete <token>
# Create token with custom TTL
kubeadm token create --ttl 2h

Static pods are managed directly by kubelet, not the API server. Control plane components run as static pods in kubeadm clusters.

Terminal window
# Static pod manifests location
ls /etc/kubernetes/manifests/
# etcd.yaml
# kube-apiserver.yaml
# kube-controller-manager.yaml
# kube-scheduler.yaml

What would happen if: You run kubectl delete pod kube-apiserver-controlplane -n kube-system. Does the API server go down? Does the pod come back? Who recreates it — the Deployment controller, the ReplicaSet controller, or something else entirely?

┌────────────────────────────────────────────────────────────────┐
│ Static Pod Lifecycle │
│ │
│ /etc/kubernetes/manifests/ │
│ │ │
│ │ kubelet watches this directory │
│ ▼ │
│ ┌─────────────┐ │
│ │ kubelet │ │
│ └──────┬──────┘ │
│ │ │
│ │ For each YAML file: │
│ │ 1. Start container │
│ │ 2. Keep it running │
│ │ 3. Restart if it crashes │
│ │ 4. Create mirror pod in API server │
│ ▼ │
│ ┌─────────────────────────────────────────┐ │
│ │ Control Plane Containers │ │
│ │ • kube-apiserver │ │
│ │ • kube-controller-manager │ │
│ │ • kube-scheduler │ │
│ │ • etcd │ │
│ └─────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
Terminal window
# View static pod manifests
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
# Modify a static pod (edit the manifest)
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# kubelet automatically restarts the pod
# "Delete" a static pod (remove the manifest)
sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
# kubelet stops the pod
# Restore it
sudo mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
# kubelet starts the pod again

Gotcha: kubectl delete Won’t Work

You can’t remove static pods with kubectl delete pod. Deleting only removes the mirror pod object; the underlying container keeps running, and kubelet recreates the mirror pod immediately. To stop a static pod, remove or rename its manifest file.
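
A quick way to see this, assuming a kubeadm control plane whose node name is controlplane (adjust the pod name suffix to match your node):

Terminal window
# Delete the mirror pod for the scheduler
kubectl delete pod kube-scheduler-controlplane -n kube-system
# The scheduler container never stopped; kubelet recreates the mirror pod almost immediately
kubectl get pods -n kube-system | grep kube-scheduler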


/etc/kubernetes/
├── admin.conf # kubectl config for admin
├── controller-manager.conf # kubeconfig for controller-manager
├── kubelet.conf # kubeconfig for kubelet
├── scheduler.conf # kubeconfig for scheduler
├── manifests/ # Static pod definitions
│ ├── etcd.yaml
│ ├── kube-apiserver.yaml
│ ├── kube-controller-manager.yaml
│ └── kube-scheduler.yaml
└── pki/ # Certificates
├── ca.crt # Cluster CA
├── ca.key
├── apiserver.crt # API server cert
├── apiserver.key
├── apiserver-kubelet-client.crt
├── front-proxy-ca.crt
├── sa.key # ServiceAccount signing key
├── sa.pub
└── etcd/ # etcd certificates
├── ca.crt
└── ...
Terminal window
# Cluster CA
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
# API Server
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver.key
# etcd CA (separate CA)
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
# Check certificate expiration
kubeadm certs check-expiration

Terminal window
# List nodes
kubectl get nodes
# Detailed info
kubectl get nodes -o wide
# Node details
kubectl describe node <node-name>

Stop and think: What’s the difference between kubectl cordon and kubectl drain? If you only cordon a node before maintenance, what risk remains that drain would have handled?

Before maintenance, drain the node to safely evict pods:

Terminal window
# Drain node (evict pods, mark unschedulable)
kubectl drain <node-name> --ignore-daemonsets
# If there are pods with local storage:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Force (for pods without controllers):
kubectl drain <node-name> --ignore-daemonsets --force
Terminal window
# Mark node unschedulable (no new pods)
kubectl cordon <node-name>
# Mark node schedulable again
kubectl uncordon <node-name>
# Check node status
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# node1 Ready worker 10d v1.35.0
# node2 Ready,SchedulingDisabled worker 10d v1.35.0 # cordoned
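
Cordon alone does not evict anything: pods already on the node keep running, and a reboot would kill them before their controllers reschedule replacements. A quick check (the node name is a placeholder):

Terminal window
# Pods still running on a cordoned node
kubectl get pods -A -o wide --field-selector spec.nodeName=<node-name>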

To remove a node from the cluster entirely:

Terminal window
# 1. Drain the node first
kubectl drain <node-name> --ignore-daemonsets --force
# 2. Delete from cluster
kubectl delete node <node-name>
# 3. On the node itself, reset kubeadm
sudo kubeadm reset
# 4. Clean up
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/etcd/

Use kubeadm reset to:

  • Remove a node from the cluster
  • Start fresh after failed init
  • Completely tear down a cluster
Terminal window
# On the node to reset
sudo kubeadm reset
# This does:
# 1. Stops kubelet
# 2. Removes /etc/kubernetes/
# 3. Removes cluster state from etcd (if control plane)
# 4. Removes certificates
# 5. Cleans up iptables rules
# Additional cleanup you should do:
sudo rm -rf /etc/cni/net.d/
sudo rm -rf $HOME/.kube/config
sudo iptables -F && sudo iptables -t nat -F

If kubelet won't start or keeps crashing:

Terminal window
# Check kubelet status
systemctl status kubelet
# Check kubelet logs
journalctl -u kubelet -f
# Common issues:
# - Swap not disabled
# - Container runtime not running
# - Wrong container runtime socket

If control plane pods aren't starting:

Terminal window
# Check container runtime
crictl ps
# Check static pod containers
crictl logs <container-id>
# Look for API server errors
sudo cat /var/log/pods/kube-system_kube-apiserver-*/kube-apiserver/*.log

If a worker node can't join:

Terminal window
# On the node, check logs
journalctl -u kubelet | tail -50
# Common issues:
# - Token expired
# - Wrong CA hash
# - Network connectivity to control plane
# - Firewall blocking port 6443

If certificates have expired or are about to:

Terminal window
# Check expiration
kubeadm certs check-expiration
# Renew all certificates
kubeadm certs renew all
# Restart control plane components
# (just move manifests and wait)
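# For example (a sketch; any control plane component restarts the same way):
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# wait for kubelet to stop the old container, then restore the manifest
sleep 20
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/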

Quick reference:

Terminal window
# Initialize cluster
kubeadm init --pod-network-cidr=10.244.0.0/16
# Get join command
kubeadm token create --print-join-command
# Join worker
kubeadm join <control-plane>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Check certificates
kubeadm certs check-expiration
# Drain node for maintenance
kubectl drain <node> --ignore-daemonsets
# Make node schedulable again
kubectl uncordon <node>
# Reset node
kubeadm reset

  • kubeadm doesn’t manage kubelet. kubelet runs as a systemd service. kubeadm generates the config, but systemctl manages the service.

  • Static pods have mirror pods. The API server shows “mirror” pods for static pods so you can see them with kubectl. But you can’t manage them through the API.

  • HA control planes require an external load balancer. kubeadm can join additional control-plane nodes, but you must provide the load-balanced API endpoint yourself (see the sketch below).
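
A minimal HA sketch, assuming a load balancer already answers on lb.example.com:6443 (the hostname is illustrative; --control-plane-endpoint, --upload-certs, --control-plane, and --certificate-key are standard kubeadm flags):

Terminal window
# First control plane node: point the whole cluster at the LB endpoint
sudo kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs
# Its output includes a control-plane join command, roughly:
# kubeadm join lb.example.com:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <key>
# Run that on each additional control plane node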


Mistake                               Problem                 Solution
Running init with swap enabled        Init fails              swapoff -a and remove from /etc/fstab
Forgetting CNI after init             Pods stay Pending       Install CNI immediately after init
Token expired                         Can’t join nodes        kubeadm token create --print-join-command
Using kubectl delete on static pods   Pods keep coming back   Edit/remove manifests in /etc/kubernetes/manifests/
Not draining before maintenance       Pod disruption          Always kubectl drain first

  1. It’s 2 AM and your monitoring alerts that the API server certificate expires in 12 hours. You SSH into the control plane node. What commands do you run to check the certificate status and renew it, and what must happen after renewal for the new certificate to take effect?

    Answer: First, verify the expiration: `kubeadm certs check-expiration` shows all certificate expiry dates. Then renew: `kubeadm certs renew all` regenerates all certificates (or `kubeadm certs renew apiserver` for just the API server cert). After renewal, the control plane static pods must restart to load the new certificates. Since they're managed by kubelet via manifests in `/etc/kubernetes/manifests/`, trigger a restart by temporarily moving the manifests out of that directory and back (restarting kubelet alone won't necessarily recreate containers that are already running). Verify the new certificate: `openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep "Not After"`. Also update your kubeconfig if the admin certificate was renewed: copy the new `/etc/kubernetes/admin.conf` to `$HOME/.kube/config`.
  2. A new worker node was set up last week, but the engineer who did it left the company and didn’t document the join command. The original bootstrap token has expired. How do you generate a new join command, and what two pieces of information does the worker node need to securely join the cluster?

    Answer: On the control plane, run `kubeadm token create --print-join-command`. This generates a complete join command with both required pieces: (1) a bootstrap token for initial authentication — this is a short-lived shared secret that proves the node is authorized to join, and (2) a CA certificate hash (`--discovery-token-ca-cert-hash`) that the joining node uses to verify it's connecting to the legitimate API server, preventing man-in-the-middle attacks. The token expires in 24 hours by default (configurable with `--ttl`). You can list existing tokens with `kubeadm token list` and delete old ones with `kubeadm token delete <token>`. The CA cert hash doesn't change unless you rotate the cluster CA.
  3. You need to perform kernel maintenance on a worker node running production pods managed by Deployments. A junior admin suggests just rebooting the node. What’s the correct procedure, and what could go wrong if you skip the drain step?

    Answer: The correct procedure is: (1) `kubectl cordon <node-name>` to prevent new pods from being scheduled, (2) `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data` to gracefully evict all pods — the Deployment controllers will recreate them on other nodes, (3) perform maintenance and reboot, (4) `kubectl uncordon <node-name>` to allow scheduling again. If you skip drain and just reboot, all pods on that node die abruptly without graceful shutdown. Pods with long-running requests or in-flight transactions will be interrupted. While Deployments will eventually recreate pods elsewhere (only after the node is marked NotReady and the default ~5-minute eviction timeout expires), there's an unnecessary outage window. PodDisruptionBudgets won't be respected either, potentially violating availability guarantees.
  4. You added a custom flag to /etc/kubernetes/manifests/kube-apiserver.yaml but made a YAML syntax error. Now kubectl commands hang and return connection refused. You can’t use kubectl to diagnose the problem. How do you investigate and fix this?

    Answer: Since the API server is down, kubectl is useless — you must troubleshoot directly on the control plane node. SSH in and check: (1) `crictl ps` to see if the API server container is running or crash-looping, (2) `crictl logs <container-id>` or check `/var/log/pods/kube-system_kube-apiserver-*/` for error messages that will point to the YAML issue, (3) `journalctl -u kubelet -f` to see kubelet's attempts to start the static pod. To fix, edit the manifest directly: `sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml` and correct the syntax error. Kubelet will automatically detect the file change and restart the API server. Pro tip: always validate manifest changes with `kubectl apply --dry-run=client -f <file>` before editing static pod files, or keep a backup: `sudo cp kube-apiserver.yaml kube-apiserver.yaml.bak` before making changes.

Task: Practice node management operations.

Note: This exercise requires a cluster with at least one worker node. If using minikube or kind, some operations may differ.

Steps:

  1. View cluster nodes:
Terminal window
kubectl get nodes -o wide
  2. Examine a node:
Terminal window
kubectl describe node <node-name> | head -50
  3. Check static pod manifests (on control plane):
Terminal window
# If you have SSH access to control plane
ls /etc/kubernetes/manifests/
cat /etc/kubernetes/manifests/kube-apiserver.yaml | head -30
  4. Practice cordon/uncordon:
Terminal window
# Cordon a worker node
kubectl cordon <worker-node>
kubectl get nodes
# Should show SchedulingDisabled
# Try to schedule a pod
kubectl run test-pod --image=nginx
# Check where it landed
kubectl get pods -o wide
# Won't be on cordoned node
# Uncordon
kubectl uncordon <worker-node>
kubectl get nodes
  5. Practice drain (careful in production!):
Terminal window
# Create a deployment first
kubectl create deployment drain-test --image=nginx --replicas=2
# Check pod locations
kubectl get pods -o wide
# Drain a node with pods
kubectl drain <node-with-pods> --ignore-daemonsets
# Check pods moved
kubectl get pods -o wide
# Uncordon the node
kubectl uncordon <node-name>
  6. Check certificates (on control plane):
Terminal window
# If you have access to control plane
kubeadm certs check-expiration
  7. Generate join command:
Terminal window
# On control plane
kubeadm token create --print-join-command
  8. Cleanup:
Terminal window
kubectl delete deployment drain-test
kubectl delete pod test-pod

Success Criteria:

  • Can view and describe nodes
  • Understand cordon vs drain
  • Know where static pod manifests are stored
  • Can generate new join tokens
  • Understand the kubeadm init process

Drill 1: Node Management Commands (Target: 3 minutes)

Terminal window
# List nodes with details
kubectl get nodes -o wide
# Get node labels
kubectl get nodes --show-labels
# Describe a node
kubectl describe node <node-name> | head -50
# Check node conditions
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
# Check node resources
kubectl describe node <node-name> | grep -A10 "Allocated resources"

Drill 2: Cordon and Uncordon (Target: 5 minutes)

Terminal window
# Cordon a node (prevent new pods)
kubectl cordon <worker-node>
# Verify
kubectl get nodes # Shows SchedulingDisabled
# Try to schedule a pod
kubectl run cordon-test --image=nginx
kubectl get pods -o wide # Won't be on cordoned node
# Uncordon
kubectl uncordon <worker-node>
kubectl get nodes # Back to Ready
# Cleanup
kubectl delete pod cordon-test

Drill 3: Drain and Recover (Target: 5 minutes)

Terminal window
# Create test deployment
kubectl create deployment drain-test --image=nginx --replicas=3
# Wait for pods
kubectl wait --for=condition=available deployment/drain-test --timeout=60s
kubectl get pods -o wide
# Drain a worker node
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
# Watch pods move to other nodes
kubectl get pods -o wide
# Uncordon the node
kubectl uncordon <worker-node>
# Cleanup
kubectl delete deployment drain-test

Drill 4: kubeadm Token Management (Target: 3 minutes)

Terminal window
# List existing tokens
kubeadm token list
# Create a new token
kubeadm token create
# Create token with specific TTL
kubeadm token create --ttl 2h
# Generate full join command
kubeadm token create --print-join-command
# Delete a token
kubeadm token delete <token-id>

Drill 5: Static Pod Exploration (Target: 5 minutes)

Terminal window
# Find static pod manifest directory
cat /var/lib/kubelet/config.yaml | grep staticPodPath
# List static pod manifests
ls -la /etc/kubernetes/manifests/
# View one manifest
cat /etc/kubernetes/manifests/kube-apiserver.yaml | head -30
# Create your own static pod
cat << 'EOF' | sudo tee /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-static-pod
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
# Wait and verify (will have node name suffix)
sleep 10
kubectl get pods | grep my-static-pod
# Remove static pod
sudo rm /etc/kubernetes/manifests/my-static-pod.yaml

Drill 6: Certificate Inspection (Target: 5 minutes)

Terminal window
# Check certificate expiration (on control plane)
kubeadm certs check-expiration
# View certificate details
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | head -30
# Check all certificates
ls -la /etc/kubernetes/pki/
# Check CA certificate
openssl x509 -in /etc/kubernetes/pki/ca.crt -text -noout | grep -E "Subject:|Issuer:|Not"

Drill 7: Troubleshooting - Node NotReady (Target: 5 minutes)

Terminal window
# Simulate: Stop kubelet on a worker
# (Run on worker node)
sudo systemctl stop kubelet
# On control plane, diagnose
kubectl get nodes # Shows NotReady
kubectl describe node <worker> | grep -A10 Conditions
# Check what's happening
kubectl get events --field-selector involvedObject.kind=Node
# Fix: Restart kubelet (on worker)
sudo systemctl start kubelet
# Verify recovery
kubectl get nodes -w

Drill 8: Challenge - Node Maintenance Workflow


Perform a complete maintenance workflow:

  1. Cordon the node
  2. Drain all workloads
  3. Simulate maintenance (wait 30s)
  4. Uncordon the node
  5. Verify pods can be scheduled again
Terminal window
# YOUR TASK: Complete this without looking at solution
NODE_NAME=<your-worker-node>
kubectl create deployment maint-test --image=nginx --replicas=2
# Start timer - Target: 3 minutes total
Solution
Terminal window
NODE_NAME=worker-01 # Replace with your node
# 1. Cordon
kubectl cordon $NODE_NAME
# 2. Drain
kubectl drain $NODE_NAME --ignore-daemonsets --delete-emptydir-data
# 3. Verify pods moved
kubectl get pods -o wide
# 4. Simulate maintenance
echo "Performing maintenance..."
sleep 30
# 5. Uncordon
kubectl uncordon $NODE_NAME
# 6. Verify scheduling works
kubectl scale deployment maint-test --replicas=4
kubectl get pods -o wide # Some should land on $NODE_NAME
# Cleanup
kubectl delete deployment maint-test

Congratulations! You’ve completed Part 1: Cluster Architecture, Installation & Configuration.

You now understand:

  • ✅ Control plane components and how they interact
  • ✅ Extension interfaces: CNI, CSI, CRI
  • ✅ Helm for package management
  • ✅ Kustomize for configuration management
  • ✅ CRDs and Operators for extending Kubernetes
  • ✅ RBAC for access control
  • ✅ kubeadm for cluster management

Quick links for review:

Module   Topic                     Key Skills
1.1      Control Plane Deep-Dive   Component roles, troubleshooting, static pods
1.2      Extension Interfaces      CNI/CSI/CRI, crictl, plugin troubleshooting
1.3      Helm                      Install, upgrade, rollback, values
1.4      Kustomize                 Base/overlay, patches, kubectl -k
1.5      CRDs & Operators          Create CRDs, manage custom resources
1.6      RBAC                      Roles, bindings, ServiceAccounts, can-i
1.7      kubeadm Basics            Init, join, cordon, drain, tokens

📝 Before moving on: Complete the Part 1 Cumulative Quiz to test your retention.


Continue to Part 2: Workloads & Scheduling - Learn how to deploy and manage applications.

This covers 15% of the exam and builds directly on what you’ve learned about cluster architecture.