Module 3.7: CNI & Cluster Networking
Complexity: [MEDIUM] - Understanding network infrastructure
Time to Complete: 40-50 minutes
Prerequisites: Module 1.2 (Extension Interfaces), Module 3.1 (Services)
What You’ll Be Able to Do
After this module, you will be able to:
- Explain how CNI plugins assign IP addresses and configure routes for pods
- Compare Calico, Cilium, and Flannel on features, performance, and NetworkPolicy support
- Diagnose CNI failures by checking pod networking, CNI configuration files, and plugin logs
- Trace pod-to-pod traffic through the CNI overlay or native routing path
Why This Module Matters
The Container Network Interface (CNI) is the plugin system that gives pods their network connectivity. Without CNI, pods can’t communicate. Understanding CNI helps you troubleshoot network issues, choose the right network plugin, and understand why pods can (or can’t) talk to each other.
The CKA exam expects you to understand pod networking fundamentals, troubleshoot network issues, and know how different CNI plugins affect cluster behavior (especially Network Policy support).
The City Infrastructure Analogy
Think of CNI as the city planning department. They decide how streets (networks) are laid out, how addresses (IPs) are assigned to buildings (pods), and which neighborhoods (nodes) connect to which. Different CNI plugins are like different city designs—some have highways (high performance), some have security checkpoints (Network Policy).
What You’ll Learn
By the end of this module, you’ll be able to:
- Understand the Kubernetes network model
- Know how CNI plugins work
- Compare popular CNI options
- Troubleshoot pod networking issues
- Understand how kube-proxy manages service traffic
Did You Know?
- No built-in networking: Kubernetes doesn’t ship with networking. You must install a CNI plugin for pods to communicate.
- Flannel = no NetworkPolicy: The popular Flannel CNI doesn’t support Network Policies. If you need policies, use Calico, Cilium, or Weave.
- Pod CIDR is per-node: Each node typically gets its own IP range (e.g., 10.244.1.0/24), and pods on that node get IPs from that range.
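The arithmetic behind that per-node split is worth internalizing. Here is a quick sketch in plain shell, using the common default of a /16 cluster CIDR carved into /24 node blocks (illustrative values, not read from a live cluster):

```shell
# Cluster pod CIDR (e.g., 10.244.0.0/16) split into per-node /24 blocks
cluster_prefix=16
node_prefix=24

# How many /24 node blocks fit in the /16 cluster range
max_nodes=$(( 2 ** (node_prefix - cluster_prefix) ))
echo "max nodes: $max_nodes"                # → max nodes: 256

# Addresses in each node block (a few are reserved: network address,
# broadcast, and the bridge IP, so usable pod IPs are slightly fewer)
addrs_per_node=$(( 2 ** (32 - node_prefix) ))
echo "addresses per node: $addrs_per_node"  # → addresses per node: 256
```

If your nodes run more than ~250 pods each (rare, since the kubelet defaults to 110 pods per node), you would need a larger per-node block.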
Part 1: Kubernetes Network Model
1.1 The Four Requirements
Kubernetes networking has four fundamental requirements:
1. Pod-to-Pod: All pods can communicate with all pods without NAT (direct IP).
2. Node-to-Pod: Nodes can communicate with all pods without NAT (direct access).
3. Pod IP = Seen IP: The IP a pod sees for itself (“My IP is 10.244.1.5”) is the same IP others see (“Pod’s IP is 10.244.1.5”).
4. Pod-to-Service: Pods can reach Services by ClusterIP (Pod → Service → Pod).
1.2 What CNI Provides
| Responsibility | Component |
|---|---|
| Pod IP allocation | CNI plugin (IPAM) |
| Pod-to-pod routing | CNI plugin |
| Cross-node networking | CNI plugin |
| Network Policy enforcement | CNI plugin (if supported) |
| Service ClusterIP routing | kube-proxy |
1.3 Network Namespaces
Each pod runs in its own network namespace with its own eth0 (e.g., Pod A at 10.244.1.5, Pod B at 10.244.1.6). The host network namespace owns the node’s physical interface (eth0, e.g., 192.168.1.10). A veth pair links each pod namespace to the host, where a bridge (cni0) connects the pods to each other and, via the node’s uplink, to other nodes.
Part 2: CNI Plugins
Pause and predict: You are choosing a CNI for a new cluster. The requirements are: must support NetworkPolicy, must work on bare metal (no cloud), and the team has limited networking expertise. Looking at the comparison table below, which CNI would you choose and why?
2.1 Popular CNI Plugins
| Plugin | Network Policy | Performance | Use Case |
|---|---|---|---|
| Calico | Yes | High | Enterprise, security-focused |
| Cilium | Yes (advanced) | Very high | eBPF, observability |
| Flannel | No | Medium | Simple clusters |
| Weave | Yes | Medium | Multi-cloud |
| Canal | Yes | Medium | Calico policy + Flannel networking |
| AWS VPC CNI | Via Calico | High | EKS native |
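Whichever plugin you choose, the kubelet discovers it through a conflist file in `/etc/cni/net.d/`. The sketch below writes a minimal, hypothetical Flannel-style conflist (the file name and values are illustrative; the field names follow the CNI spec) and pulls out the plugin chain:

```shell
# Write a sample conflist (illustrative values, not taken from a real cluster)
cat > /tmp/10-sample.conflist << 'EOF'
{
  "cniVersion": "1.0.0",
  "name": "samplenet",
  "plugins": [
    { "type": "flannel", "delegate": { "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

# The kubelet picks the lexically first conflist in /etc/cni/net.d/
# and invokes each plugin "type" in order (main plugin, then chained ones)
grep -o '"type": "[a-z]*"' /tmp/10-sample.conflist
# → "type": "flannel"
#   "type": "portmap"
```

The chained `portmap` plugin is what implements `hostPort` mappings (covered in Part 7) on top of the main network plugin.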
2.2 How CNI Works
When a pod is created, the kubelet invokes the CNI plugin with the ADD command. The plugin then:
1. Creates (joins) the pod’s network namespace
2. Assigns an IP address via IPAM
3. Sets up a veth pair between the pod and the host
4. Configures routing
5. Reports the pod as network-ready
When the pod is deleted, the kubelet calls the plugin with DEL to clean up.
2.3 CNI Configuration Location
Section titled “2.3 CNI Configuration Location”# CNI binary locationls /opt/cni/bin/
# CNI configuration locationls /etc/cni/net.d/
# Example: View CNI configcat /etc/cni/net.d/10-calico.conflist2.4 Checking CNI Status
Section titled “2.4 Checking CNI Status”# Check which CNI is installedls /etc/cni/net.d/
# Check CNI podsk get pods -n kube-system | grep -E "calico|flannel|weave|cilium"
# Check CNI daemonsetk get daemonset -n kube-system
# View CNI configurationcat /etc/cni/net.d/*.conf* 2>/dev/nullPart 3: Pod Networking Deep Dive
3.1 Pod IP Allocation
The cluster CIDR (e.g., 10.244.0.0/16) is carved into one block per node:
- Node 1: 10.244.0.0/24 → pods 10.244.0.5, 10.244.0.6, 10.244.0.7
- Node 2: 10.244.1.0/24 → pods 10.244.1.3, 10.244.1.4, 10.244.1.5
- Node 3: 10.244.2.0/24 → pods 10.244.2.2, 10.244.2.3
3.2 Viewing Pod Network Configuration
```shell
# Get pod IP
k get pod <pod> -o wide
k get pod <pod> -o jsonpath='{.status.podIP}'

# Get all pod IPs
k get pods -o custom-columns='NAME:.metadata.name,IP:.status.podIP'

# Check which node a pod is on
k get pod <pod> -o jsonpath='{.spec.nodeName}'

# View pod network namespace (from the node)
# First, get the container ID
crictl ps | grep <pod-name>
# Then inspect its namespaces
crictl inspect <container-id> | jq '.info.runtimeSpec.linux.namespaces'
```
3.3 Pod-to-Pod Communication (Same Node)
On the same node, each pod’s eth0 is one end of a veth pair; the other end (veth-a, veth-b) plugs into the node’s bridge (cni0 or cbr0, e.g., 10.244.1.1).
Traffic path: Pod A (10.244.1.5) → veth-a → bridge → veth-b → Pod B (10.244.1.6).
3.4 Pod-to-Pod Communication (Different Nodes)
Across nodes, traffic leaves the pod through its veth pair and the bridge as before, then exits the node’s physical interface (eth0). Node 1 (192.168.1.10) forwards packets destined for Pod B (10.244.2.6) to Node 2 (192.168.1.11) using either an overlay (VXLAN, IPIP) or native routing (BGP), depending on the CNI plugin.
Part 4: kube-proxy and Services
4.1 kube-proxy Modes
| Mode | Description | Performance | Use Case |
|---|---|---|---|
| iptables | Uses iptables rules | Good | Default, most clusters |
| IPVS | Uses kernel IPVS | Better | High pod count, advanced LB |
| userspace | Legacy, user-space proxy | Poor | Never use (deprecated) |
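The performance column comes down to data structures: iptables rules are evaluated as a sequential list (on average about half the rules are checked per packet), while IPVS uses a kernel hash table (roughly constant-time lookup). A back-of-envelope sketch, assuming one rule per Service:

```shell
services=8000

# iptables mode: linear scan through the rule list, so on average
# about half the rules are compared before a packet matches
iptables_avg=$(( services / 2 ))
echo "iptables avg comparisons per packet: $iptables_avg"   # → 4000

# IPVS mode: hash table lookup, roughly constant regardless of count
ipvs_avg=1
echo "ipvs avg lookups per packet: $ipvs_avg"               # → 1
```

This is a simplification (iptables chains, connection tracking, and rule caching all affect the real numbers), but it explains why large clusters prefer IPVS.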
4.2 How kube-proxy Works
When a client pod sends a request to a Service IP (e.g., 10.96.45.123:80), iptables or IPVS rules in the PREROUTING chain DNAT the packet to a randomly selected backend pod IP (e.g., 10.244.1.5:8080). kube-proxy watches the API server for Service/Endpoint changes and updates the iptables/IPVS rules accordingly.
4.3 Checking kube-proxy
```shell
# Check kube-proxy pods
k get pods -n kube-system -l k8s-app=kube-proxy

# Check kube-proxy mode
k logs -n kube-system -l k8s-app=kube-proxy | grep "Using"

# View kube-proxy configmap
k get configmap kube-proxy -n kube-system -o yaml

# Check iptables rules (on node)
iptables -t nat -L KUBE-SERVICES -n | head -20

# Check IPVS rules (if using IPVS mode, on node)
ipvsadm -Ln
```
Stop and think: A new pod is stuck in `ContainerCreating` state. You check events and see “network plugin is not ready”. Before you start changing CNI configuration, what three things would you check first to determine whether this is a CNI installation issue or a node-specific problem?
Part 5: Troubleshooting Network Issues
5.1 Network Debugging Workflow
```
Pod Network Issue?
│
├── kubectl get pod -o wide (check pod IP, node)
│
├── Pod has IP?
│   ├── No → CNI issue. Check: CNI pods, /etc/cni/net.d/, CNI logs
│   └── Yes → Continue
│
├── Can reach other pods on same node?
│   ├── No → Bridge/veth issue
│   └── Yes → Continue
│
├── Can reach pods on other nodes?
│   ├── No → Overlay/routing issue. Check: CNI config, node routes, firewall
│   └── Yes → Continue
│
├── Can reach services?
│   ├── No → kube-proxy or DNS issue. Check: kube-proxy, CoreDNS, iptables
│   └── Yes → Network is fine, check the app
│
└── Check NetworkPolicy: kubectl get networkpolicy
```
5.2 Common Debugging Commands
```shell
# Check pod network
k exec <pod> -- ip addr
k exec <pod> -- ip route
k exec <pod> -- cat /etc/resolv.conf

# Test connectivity
k exec <pod> -- ping <other-pod-ip>
k exec <pod> -- nc -zv <service> <port>
k exec <pod> -- wget --spider --timeout=1 http://<service>

# Check CNI pods
k get pods -n kube-system | grep -E "calico|flannel|weave|cilium"
k logs -n kube-system <cni-pod>

# Check kube-proxy
k get pods -n kube-system -l k8s-app=kube-proxy
k logs -n kube-system -l k8s-app=kube-proxy

# Check CoreDNS
k get pods -n kube-system -l k8s-app=kube-dns
k logs -n kube-system -l k8s-app=kube-dns
```
5.3 Common Network Issues
| Symptom | Cause | Solution |
|---|---|---|
| Pod stuck in ContainerCreating | CNI not installed or failing | Install/fix CNI plugin |
| Pod has no IP | IPAM exhausted or CNI error | Check CNI logs, expand CIDR |
| Can’t reach pods on other nodes | Overlay misconfigured | Check CNI network config |
| Services unreachable | kube-proxy not running | Check kube-proxy pods |
| DNS not working | CoreDNS down | Check CoreDNS pods |
| NetworkPolicy not working | CNI doesn’t support it | Use Calico, Cilium, or Weave |
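The symptom table above can be folded into a tiny triage helper. This is purely an illustrative sketch (the `triage` function and symptom keywords are made up for this example), but it mirrors the first-check logic of the table:

```shell
# Map a symptom keyword to the first thing worth checking
triage() {
  case "$1" in
    containercreating) echo "check CNI pods and /etc/cni/net.d/" ;;
    no-ip)             echo "check CNI logs and IPAM range" ;;
    cross-node)        echo "check overlay config, node routes, firewall" ;;
    service)           echo "check kube-proxy pods" ;;
    dns)               echo "check CoreDNS pods" ;;
    policy)            echo "check whether the CNI supports NetworkPolicy" ;;
    *)                 echo "unknown symptom" ;;
  esac
}

triage containercreating   # → check CNI pods and /etc/cni/net.d/
triage dns                 # → check CoreDNS pods
```

The point is the ordering: always rule out the CNI layer (pod has no IP) before blaming kube-proxy or DNS.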
Part 6: Cluster CIDR Configuration
6.1 Understanding CIDRs
| CIDR Type | Description | Example |
|---|---|---|
| Pod CIDR | IP range for all pods | 10.244.0.0/16 |
| Service CIDR | IP range for services | 10.96.0.0/12 |
| Node CIDR | Pod range per node | 10.244.1.0/24 |
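To make these ranges concrete, here is a small shell sketch that checks whether a pod IP falls inside a node’s podCIDR using the same mask arithmetic the kernel applies (the addresses are the example values from the table):

```shell
# Convert a dotted-quad IP to a 32-bit integer
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

pod=$(ip_to_int 10.244.1.7)
net=$(ip_to_int 10.244.1.0)
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))

# An IP is inside the CIDR when masking it yields the network address
if [ $(( pod & mask )) -eq $(( net & mask )) ]; then
  echo "10.244.1.7 is inside 10.244.1.0/24"
else
  echo "10.244.1.7 is outside 10.244.1.0/24"
fi
# → 10.244.1.7 is inside 10.244.1.0/24
```

The same check explains why pod, service, and node CIDRs must never overlap: routing decisions are made purely on these prefix matches.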
6.2 Checking CIDR Configuration
```shell
# Check pod CIDR (from the kubeadm config)
k get cm kubeadm-config -n kube-system -o yaml | grep -i cidr

# Check from nodes
k get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Check service CIDR
k cluster-info dump | grep -m 1 service-cluster-ip-range
```
6.3 kubeadm CIDR Configuration
```shell
# During cluster init
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

# The CNI plugin must match this CIDR
# Example: Calico installation with a matching CIDR
```
Pause and predict: You set `hostNetwork: true` on a pod running nginx on port 80, and there is already another pod with `hostNetwork: true` running on port 80 on the same node. What happens when Kubernetes tries to schedule your pod?
Part 7: Host Network and Node Ports
7.1 hostNetwork Pods
```yaml
# Pod using host network
apiVersion: v1
kind: Pod
metadata:
  name: host-network-pod
spec:
  hostNetwork: true  # Uses node's network namespace
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80  # Binds to node's port 80!
```
When to use:
- Network tools that need raw access
- Some CNI components
- High-performance networking
7.2 hostPort
```yaml
# Pod with host port mapping
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
      hostPort: 80  # Node's port 80 → container port 8080
```
Differences:
- `hostNetwork: true` - Pod uses the node’s entire network stack
- `hostPort` - Only maps a specific port; the pod still has its own IP
Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| No CNI installed | Pods can’t get IPs | Install CNI before deploying pods |
| CIDR mismatch | CNI and kubeadm disagree | Ensure pod-network-cidr matches CNI config |
| Flannel + NetworkPolicy | Policies ignored | Use Calico, Cilium, or Weave |
| hostNetwork without dnsPolicy | DNS breaks | Set dnsPolicy: ClusterFirstWithHostNet |
| Pod IP exhaustion | New pods can’t get IPs or schedule | Check CIDR size, clean up pods |
1. After initializing a cluster with `kubeadm init`, you notice all pods except those in `kube-system` are stuck in `Pending` or `ContainerCreating`. CoreDNS pods also show `ContainerCreating`. What is the root cause, and what must you do before deploying any workloads?
Answer: No CNI plugin is installed. Kubernetes does not ship with networking -- you must install a CNI plugin (Calico, Cilium, Flannel, etc.) before pods can get IP addresses and communicate. CoreDNS pods are also stuck because they need pod networking to start. The fix: install a CNI plugin that matches the `--pod-network-cidr` specified during `kubeadm init`. For example, if you used `--pod-network-cidr=10.244.0.0/16`, install Calico or Flannel configured for that CIDR. Until the CNI is installed, the node will show `NotReady` status.
2. Your cluster uses Flannel, and a security team member creates NetworkPolicies to isolate the production namespace. After deploying the policies, they test and find that pods can still communicate freely across namespaces. The YAML is correct. What went wrong?
Answer: Flannel does not support NetworkPolicy enforcement. The API server accepts the NetworkPolicy objects (they are valid Kubernetes resources), but without a CNI that implements a network policy controller, they are never enforced. This is a dangerous situation because it gives a false sense of security. The options are: (1) replace Flannel with Calico, Cilium, or Weave, which natively support NetworkPolicy; (2) install Canal, which combines Flannel's networking with Calico's policy engine; or (3) add a standalone policy engine alongside Flannel (e.g., Calico policy-only mode).
3. Pods on Node A can reach pods on Node A, but cannot reach pods on Node B. All pods have IPs and are in `Running` state. Both nodes show `Ready`. Where in the networking stack is the problem, and how would you diagnose it?
Answer: This is a CNI cross-node routing issue. Same-node traffic works (the bridge handles it), but cross-node traffic fails, pointing to the overlay or routing layer. Diagnosis steps: (1) Check CNI daemon pods on both nodes: `k get pods -n kube-system -o wide | grep -E "calico|flannel|cilium"`. (2) Check whether the CNI tunnel interface exists on both nodes (e.g., `flannel.1` for VXLAN, `tunl0` for Calico IPIP). (3) Verify the node-to-node path allows the CNI protocol (VXLAN uses UDP 4789, BGP uses TCP 179 -- check cloud security groups or host firewalls). (4) Check routes on each node: `ip route` should show routes to the other node's pod CIDR.
4. Your cluster runs 8,000 Services. During peak traffic, kube-proxy on each node takes 30 seconds to update rules after a Service change, and CPU spikes on all nodes. The cluster uses iptables mode. What is happening, and what is the recommended fix?
Answer: In iptables mode, kube-proxy creates iptables rules for every Service and Endpoint combination. With 8,000 Services, this generates tens of thousands of rules. Every change requires rewriting a large portion of the iptables ruleset, causing the CPU spike. Rule evaluation is also O(n), slowing packet processing. The fix is to switch kube-proxy to IPVS mode (edit the kube-proxy ConfigMap: `mode: "ipvs"`), which uses a kernel-level hash table for O(1) lookups and handles rule updates more efficiently. Alternatively, for even better performance, consider Cilium in eBPF kube-proxy replacement mode, which moves Service routing entirely into eBPF programs.
5. A developer created a pod with `hostNetwork: true` but did not set `dnsPolicy`. The pod can reach external websites by IP but cannot resolve any cluster service names. External DNS resolution (like `google.com`) works fine. Explain the root cause and the one-line fix.
Answer: When `hostNetwork: true` is set, the pod shares the node's network namespace, including its `/etc/resolv.conf`. The node's resolv.conf points to the infrastructure DNS server (e.g., the cloud provider's DNS or a corporate DNS), not CoreDNS. That DNS server knows about external names like `google.com` but nothing about cluster-internal names like `my-svc.default.svc.cluster.local`. The fix: set `dnsPolicy: ClusterFirstWithHostNet` in the pod spec. This tells the kubelet to inject the CoreDNS address into the pod's resolv.conf, enabling cluster DNS resolution even though the pod uses the host network.
Hands-On Exercise
Task: Investigate cluster networking configuration.
Steps:
- Check which CNI plugin is installed:

```shell
# Check CNI pods
k get pods -n kube-system | grep -E "calico|flannel|weave|cilium|cni"

# Check CNI configuration
ls /etc/cni/net.d/ 2>/dev/null || echo "Run on node"
```

- Check pod CIDR:

```shell
# Get node CIDRs
k get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.podCIDR}{"\n"}{end}'
```

- Create test pods:

```shell
k run pod1 --image=busybox:1.36 --command -- sleep 3600
k run pod2 --image=busybox:1.36 --command -- sleep 3600

# Wait for ready
k wait --for=condition=ready pod/pod1 pod/pod2 --timeout=60s
```

- Check pod network configuration:

```shell
# Get pod IPs
k get pods -o wide

# Check pod network interface
k exec pod1 -- ip addr
k exec pod1 -- ip route

# Check DNS configuration
k exec pod1 -- cat /etc/resolv.conf
```

- Test pod-to-pod connectivity:

```shell
POD2_IP=$(k get pod pod2 -o jsonpath='{.status.podIP}')
k exec pod1 -- ping -c 3 $POD2_IP
```

- Check kube-proxy:

```shell
# Check kube-proxy pods
k get pods -n kube-system -l k8s-app=kube-proxy

# Check kube-proxy logs for mode
k logs -n kube-system -l k8s-app=kube-proxy --tail=5 | grep -i mode
```

- Test service connectivity:

```shell
# Create service
k create deployment web --image=nginx
k expose deployment web --port=80

# Test DNS and connectivity
k exec pod1 -- wget --spider --timeout=2 http://web
```

- Cleanup:

```shell
k delete pod pod1 pod2
k delete deployment web
k delete svc web
```

Success Criteria:
- Can identify CNI plugin in use
- Understand pod CIDR allocation
- Can verify pod-to-pod connectivity
- Know how to check kube-proxy
- Understand network troubleshooting
Practice Drills
Drill 1: Identify CNI (Target: 2 minutes)
```shell
# Check CNI pods in kube-system
k get pods -n kube-system | grep -E "calico|flannel|weave|cilium|canal"

# Check CNI daemonsets
k get ds -n kube-system

# Check node annotations for CNI
k get nodes -o jsonpath='{.items[0].metadata.annotations}' | jq 'keys'
```
Drill 2: Check Pod CIDR (Target: 2 minutes)
```shell
# Get pod CIDR from nodes
k get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Check from kubeadm config (if available)
k get cm kubeadm-config -n kube-system -o yaml 2>/dev/null | grep -i cidr

# Check from controller-manager
k get pods -n kube-system -l component=kube-controller-manager -o yaml | grep cluster-cidr
```
Drill 3: Pod Network Info (Target: 3 minutes)
```shell
# Create test pod
k run net-test --image=busybox:1.36 --command -- sleep 3600
k wait --for=condition=ready pod/net-test --timeout=60s

# Check network info
k exec net-test -- ip addr
k exec net-test -- ip route
k exec net-test -- cat /etc/resolv.conf

# Cleanup
k delete pod net-test
```
Drill 4: kube-proxy Mode (Target: 2 minutes)
```shell
# Check kube-proxy configmap
k get configmap kube-proxy -n kube-system -o yaml | grep -A5 "mode:"

# Check from logs
k logs -n kube-system -l k8s-app=kube-proxy --tail=20 | grep -i "using"

# List kube-proxy pods
k get pods -n kube-system -l k8s-app=kube-proxy -o wide
```
Drill 5: Test Pod Connectivity (Target: 4 minutes)
```shell
# Create pods
k run client --image=busybox:1.36 --command -- sleep 3600
k run server --image=nginx
k wait --for=condition=ready pod/client pod/server --timeout=60s

# Get server IP
SERVER_IP=$(k get pod server -o jsonpath='{.status.podIP}')

# Test connectivity
k exec client -- ping -c 2 $SERVER_IP
k exec client -- wget --spider --timeout=2 http://$SERVER_IP

# Cleanup
k delete pod client server
```
Drill 6: Service Routing Check (Target: 3 minutes)
```shell
# Create deployment and service
k create deployment svc-test --image=nginx
k expose deployment svc-test --port=80
k wait --for=condition=available deployment/svc-test --timeout=60s

# Get ClusterIP
CLUSTER_IP=$(k get svc svc-test -o jsonpath='{.spec.clusterIP}')

# Test service
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  wget --spider --timeout=2 http://$CLUSTER_IP

# Cleanup
k delete deployment svc-test
k delete svc svc-test
```
Drill 7: hostNetwork Pod (Target: 3 minutes)
```shell
# Create hostNetwork pod
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: host-net
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: test
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF

k wait --for=condition=ready pod/host-net --timeout=60s

# Check - IP should match node IP
k get pod host-net -o wide

# Compare with node
k exec host-net -- ip addr

# Test that it can still resolve services
k exec host-net -- nslookup kubernetes

# Cleanup
k delete pod host-net
```
Drill 8: Challenge - Network Troubleshooting
Without looking at solutions:
- Create two pods: `client` and `server` (nginx)
- Get both pod IPs
- Test ping from client to server
- Create a service for server
- Test DNS resolution of service from client
- Test HTTP connectivity to service from client
- Check which CNI is running
- Cleanup everything
```shell
# YOUR TASK: Complete in under 5 minutes
```
Solution
```shell
# 1. Create pods
k run client --image=busybox:1.36 --command -- sleep 3600
k run server --image=nginx
k wait --for=condition=ready pod/client pod/server --timeout=60s

# 2. Get IPs
k get pods -o wide

# 3. Test ping
SERVER_IP=$(k get pod server -o jsonpath='{.status.podIP}')
k exec client -- ping -c 2 $SERVER_IP

# 4. Create service
k expose pod server --port=80 --name=server-svc

# 5. Test DNS
k exec client -- nslookup server-svc

# 6. Test HTTP
k exec client -- wget --spider --timeout=2 http://server-svc

# 7. Check CNI
k get pods -n kube-system | grep -E "calico|flannel|weave|cilium"

# 8. Cleanup
k delete pod client server
k delete svc server-svc
```
Next Steps
Congratulations on completing Part 3! You now understand:
- Services and how to expose applications
- Endpoints and how services track pods
- DNS and service discovery
- Ingress for HTTP routing
- Gateway API for next-gen routing
- Network Policies for security
- CNI and cluster networking
Take the Part 3 Cumulative Quiz to test your knowledge.