You can’t practice Kubernetes administration without a Kubernetes cluster. Sounds obvious, right? Yet many CKA candidates rely entirely on managed clusters (EKS, GKE, AKS) or single-node setups (minikube, kind) and then freeze when the exam asks them to troubleshoot kubelet on a worker node.
The CKA exam runs on kubeadm-provisioned clusters. Not managed Kubernetes. Not Docker Desktop. Real kubeadm clusters with separate control plane and worker nodes.
This module teaches you to build exactly what you’ll encounter in the exam.
The Orchestra Analogy
Think of a Kubernetes cluster like an orchestra. The control plane is the conductor—it doesn’t play any instruments (run your apps), but it coordinates everything: who plays when, how loud, when to start and stop. The worker nodes are the musicians—they do the actual work of producing music (running containers). Without a conductor, you have chaos. Without musicians, you have silence. You need both, working together, communicating constantly.
Kubernetes requires swap to be disabled. This is non-negotiable.
```shell
# Disable swap immediately
sudo swapoff -a

# Disable swap permanently (survives reboot)
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
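If you want to see exactly what that `sed` command does before pointing it at your real `/etc/fstab`, here's a sketch against a throwaway copy (the sample entries are made up). The verification commands for a real node follow as comments.

```shell
# Demo the fstab edit on a throwaway copy (sample entries, not a real fstab)
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Comment out any line containing " swap "
sed -i '/ swap / s/^/#/' /tmp/fstab.sample
grep swap /tmp/fstab.sample    # the swap line is now prefixed with '#'

# On the real node, verify swap is actually off:
#   swapon --show   # no output means no active swap
#   free -h         # the Swap row should read 0B
```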
War Story: The Mysterious OOMKill
A team spent days debugging why their pods kept getting OOMKilled despite having plenty of memory. The culprit? Swap was enabled. When the kubelet reported memory to the scheduler, it didn’t account for swap, leading to over-scheduling and eventual memory pressure. Kubernetes doesn’t manage swap—it expects you to disable it.
Breaking Change Alert: If stat -fc %T /sys/fs/cgroup returns tmpfs instead of cgroup2fs, upgrade your OS before proceeding. Kubernetes 1.35 will not start on cgroup v1 nodes.
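The check from the alert above can be wrapped in a tiny script. `cgroup2fs` means cgroup v2; anything else (typically `tmpfs` or `cgroupfs`) means the legacy hierarchy:

```shell
# Report which cgroup hierarchy this node is running
fs=$(stat -fc %T /sys/fs/cgroup)
if [ "$fs" = "cgroup2fs" ]; then
  echo "cgroup v2 detected - good to proceed"
else
  echo "legacy cgroup detected ($fs) - upgrade the OS first"
fi
```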
If you skip setting SystemdCgroup = true, you’ll get cryptic errors later. The kubelet and containerd must agree on the cgroup driver. Modern systems use systemd. Don’t miss this step.
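To make the fix concrete, here's the edit demonstrated on a sample copy of the relevant config fragment. On a real node the file is `/etc/containerd/config.toml` (the path and the `SystemdCgroup` key are from containerd's default CRI runc options; the exact table header can vary by containerd version).

```shell
# Demo the SystemdCgroup flip on a sample fragment of containerd's config
cat > /tmp/containerd-config.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.sample
grep SystemdCgroup /tmp/containerd-config.sample

# On the node itself:
#   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
#   sudo systemctl restart containerd
```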
Gotcha: containerd 2.0 and old images
containerd 2.0 removes support for Docker Schema 1 images. If you have very old images (pushed 5+ years ago), they will fail to pull. Rebuild or re-push them with a modern Docker or BuildKit toolchain.
```shell
# Check kubelet (will be inactive until cluster is initialized)
sudo systemctl status kubelet

# Check kubeadm version
kubeadm version
```
Stop and think: You’ve installed kubelet, containerd, and kubeadm, but systemctl status kubelet shows it is activating/crashlooping. Why is this expected behavior right now? The kubelet is constantly restarting because it’s looking for its configuration file (/var/lib/kubelet/config.yaml), which won’t exist until kubeadm init or kubeadm join is run.
The node shows NotReady because we haven’t installed a network plugin yet.
Pause and predict: Why would a freshly initialized Kubernetes node be NotReady? It has an API server, etcd, scheduler, and controller manager — all running. What’s missing? The answer: without a CNI (network plugin), pods can’t get IP addresses, and the node can’t report as healthy. This is the #1 “gotcha” for first-time kubeadm users, and it’s a common CKA troubleshooting scenario.
Kubernetes doesn’t come with networking. You must install a CNI plugin. We’ll use Calico (widely used, exam-friendly).
Why Doesn’t Kubernetes Include Networking?
This surprises everyone at first. Kubernetes made a deliberate choice to define a networking model (every pod gets an IP, pods can reach each other) but not implement it. Why? Because networking needs vary wildly—some need advanced policies, some need high performance, some need cloud integration. By using the CNI (Container Network Interface) standard, Kubernetes lets you choose. Calico, Flannel, Cilium, Weave—they all implement the same interface but with different superpowers. It’s like USB: the standard defines how to connect, but you choose your device.
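A typical Calico install looks like this, run from the control plane node. The version number below is illustrative — check the Calico release notes for the current one before applying.

```shell
# Install Calico from its upstream manifest (version shown is illustrative)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

# Watch the calico pods come up, then confirm the node flips to Ready
kubectl get pods -n kube-system -w
kubectl get nodes
```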
An engineer once spent an hour trying to figure out why their “3-node cluster” only showed 2 nodes. They thought they had run kubeadm join on all three machines — but one of those SSH sessions was actually the control plane node, not the third worker, so that join quietly accomplished nothing: the node was already in the cluster. The real third worker never joined at all. The lesson: always verify which node you’re SSH’d into before running commands. The hostname in your terminal prompt is your friend.
kubeadm was created specifically to make cluster setup straightforward. Before kubeadm, setting up Kubernetes involved manually generating certificates, writing systemd unit files, and configuring each component by hand. Some people still do this (“Kubernetes the Hard Way”) for learning, but kubeadm is the production standard.
The CKA exam uses kubeadm clusters. You won’t see managed Kubernetes (EKS/GKE/AKS) on the exam. Everything is kubeadm-based, which is why practicing on kubeadm matters.
containerd replaced Docker as the default container runtime in Kubernetes 1.24. Docker still works (via cri-dockerd), but containerd is simpler and what you’ll encounter in the exam.
Scenario: Your team is provisioning new bare-metal servers for a Kubernetes cluster. A systems engineer suggests leaving 16GB of swap space enabled to prevent out-of-memory kernel panics. You advise against this. Why must swap be disabled for the kubelet to function correctly?
Answer
Kubernetes expects to manage memory allocation directly and deterministically for all scheduled pods. When swap is enabled, the operating system can silently move memory pages to disk, blinding the kubelet to the node's true memory utilization. This breaks the scheduler's ability to make accurate placement decisions and guarantees, leading to severe performance degradation and unpredictable out-of-memory (OOM) behavior. Unless you explicitly override this check, the kubelet refuses to start when it detects active swap, precisely to keep the cluster out of this degraded state.
Scenario: You are initializing a new cluster with kubeadm init and plan to use Flannel for your CNI. A colleague asks why you are explicitly defining --pod-network-cidr=10.244.0.0/16 instead of just running kubeadm init without flags. What is the technical reason for providing this flag?
Answer
The control plane needs to know which IP addresses are reserved for pods so it doesn't assign overlapping addresses to different nodes. The `--pod-network-cidr` flag reserves a large block of IPs for the entire cluster, which the Kubernetes controller manager then carves into smaller per-node subnets (/24 blocks by default). The Container Network Interface (CNI) plugin, like Flannel or Calico, reads this configuration to know exactly which IP addresses it may assign to the pods running on that specific host. Without this flag, the CNI wouldn't know the network boundaries and pod-to-pod routing would fail — Flannel in particular expects 10.244.0.0/16 unless you edit its manifest to match.
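In practice, the init command and a quick check of the resulting per-node allocations look like this (run on the control plane; 10.244.0.0/16 matches Flannel's default manifest):

```shell
# Initialize the control plane, reserving 10.244.0.0/16 for pod IPs
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Afterwards, inspect the /24 slice each node received
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```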
Scenario: You successfully joined worker-02 to the cluster using the kubeadm token. However, 15 minutes later, kubectl get nodes still shows worker-02 with a status of NotReady. You verify the kubelet is running on the node. What is the most likely architectural component missing or failing?
Answer
The most likely cause is that a Container Network Interface (CNI) plugin has not been properly deployed, or its pods are crashing. When a kubelet starts up, it checks for a valid CNI configuration file in `/etc/cni/net.d/`. If this configuration is missing, the kubelet intentionally marks the node as `NotReady` because it physically cannot assign IP addresses to any pods scheduled there. You must apply a CNI manifest (like Calico) to the cluster, which will deploy DaemonSet pods to configure the network on each node and transition them to a `Ready` state.
Scenario: Three days after creating your cluster, you decide to scale out by adding a new worker node. You SSH into the new machine, install containerd and kubelet, but realize you didn’t save the original kubeadm join output. How do you generate the exact command and token needed to authenticate this new node to the API server?
Answer
You must run `kubeadm token create --print-join-command` on the control plane node. Bootstrap tokens generated during `kubeadm init` have a default lifespan of 24 hours (configurable with `--token-ttl`), which limits the window in which a leaked token could let an unauthorized machine join the cluster. Because three days have passed, the original token has expired and been removed. This command generates a fresh, cryptographically secure token and prints the full `kubeadm join` string, complete with the API server endpoint and the CA certificate hash the worker needs for secure mutual TLS authentication.
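Concretely, on the control plane (the output shape below is the standard join string; the actual endpoint, token, and hash will be specific to your cluster):

```shell
# List existing bootstrap tokens (expired ones will already be gone)
kubeadm token list

# Mint a new token and print the full join command for the worker
kubeadm token create --print-join-command
# Output looks like:
#   kubeadm join <api-server>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```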
Before you drill: These drills simulate real CKA exam scenarios. Time yourself — the exam gives you ~5 minutes per question on average. If Drill 1 takes you 10 minutes now, that’s fine. By exam day, it should take 2.