Module 5.3: NetworkPolicies
Complexity: MEDIUM - Important for security, requires understanding selectors
Time to Complete: 45-55 minutes
Prerequisites: Module 5.1 (Services), understanding of labels and selectors
Learning Outcomes
After completing this module, you will be able to:
- Write NetworkPolicies that restrict ingress and egress traffic using pod, namespace, and CIDR selectors
- Debug blocked traffic by analyzing NetworkPolicy rules and verifying label matches
- Design a default-deny network posture with explicit allow rules for required communication paths
- Explain how NetworkPolicy rules combine and why order of rules does not matter
Why This Module Matters
By default, all pods can communicate with all other pods. NetworkPolicies let you control which pods can talk to which, implementing the principle of least privilege for network access. This is critical for security and multi-tenant clusters.
The CKAD exam tests:
- Creating NetworkPolicies
- Understanding ingress and egress rules
- Using selectors to target pods
- Debugging connectivity issues
The Office Building Security Analogy
Think of NetworkPolicies as building security rules. By default, the building has no security—anyone can go anywhere. NetworkPolicies are like adding key card readers. You define who can enter which floors (ingress) and which floors people can leave from (egress). The “default deny” policy is like requiring a key card for every door.
NetworkPolicy Basics
Default Behavior
Without NetworkPolicies:
- All pods can communicate with all pods
- All pods can reach external endpoints
- No restrictions
How NetworkPolicies Work
- NetworkPolicies are additive—they can only allow traffic, not deny
- If ANY policy selects a pod, only traffic allowed by policies is permitted
- If NO policy selects a pod, all traffic is allowed (default)
- Requires a CNI plugin that supports NetworkPolicies (Calico, Cilium, etc.)
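The additive model above can be sketched concretely (the policy names and labels here are illustrative, not from a real cluster): two policies that select the same pods simply merge, and the allowed ingress for those pods is the union of both rule sets, no matter which policy was created first.

```yaml
# Both policies select pods labeled app=api.
# Effective allowed ingress = union of the two rule sets
# (frontend pods plus monitoring pods); creation order is irrelevant.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-monitoring   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: monitoring
```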
Basic Structure
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-policy
  namespace: default
spec:
  podSelector:        # Which pods this policy applies to
    matchLabels:
      app: my-app
  policyTypes:        # What traffic types to control
  - Ingress           # Incoming traffic
  - Egress            # Outgoing traffic
  ingress:            # Rules for incoming traffic
  - from:
    - podSelector:
        matchLabels:
          role: frontend
  egress:             # Rules for outgoing traffic
  - to:
    - podSelector:
        matchLabels:
          role: database
```
Policy Types
Ingress (Incoming Traffic)
Control what can connect TO the selected pods:
```yaml
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
Egress (Outgoing Traffic)
Control what the selected pods can connect TO:
```yaml
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 8080
```
Selector Types
podSelector
Select pods in the same namespace:
```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        role: frontend
```
namespaceSelector
Select pods from specific namespaces:
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: production
```
Combined (AND Logic)
Pod must match both selectors:
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: production
    podSelector:        # Same list item = AND
      matchLabels:
        role: frontend
```
Pause and predict: Compare the two YAML examples, “Combined (AND Logic)” above and “Separate Items (OR Logic)” below. The only difference is the indentation: whether the selectors sit under one dash or two. Can you explain what each one allows before reading the descriptions?
Separate Items (OR Logic)
Traffic allowed from either selector:
```yaml
ingress:
- from:
  - namespaceSelector:   # First item
      matchLabels:
        env: production
  - podSelector:         # Second item = OR
      matchLabels:
        role: frontend
```
ipBlock
Select by IP range (typically external):
```yaml
ingress:
- from:
  - ipBlock:
      cidr: 10.0.0.0/8
      except:
      - 10.0.1.0/24
```
Visualization
```
┌─────────────────────────────────────────────────────────────┐
│                   NetworkPolicy Concepts                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Default (No Policy):                                       │
│  ┌─────────┐     ┌─────────┐     ┌─────────┐                │
│  │  Pod A  │◄───►│  Pod B  │◄───►│  Pod C  │                │
│  └─────────┘     └─────────┘     └─────────┘                │
│          All traffic allowed                                │
│                                                             │
│  With Policy (Pod B selected):                              │
│  ┌──────────┐    ┌─────────┐     ┌─────────┐                │
│  │  Pod A   │───►│  Pod B  │     │  Pod C  │                │
│  │(frontend)│    │(backend)│     │ (other) │                │
│  └──────────┘    └─────────┘     └─────────┘                │
│       ✓ allowed              X blocked                      │
│                                                             │
│  Selector Types:                                            │
│  ┌──────────────────────────────────────────────────┐       │
│  │ podSelector:       Same namespace pods           │       │
│  │ namespaceSelector: Pods from labeled namespaces  │       │
│  │ ipBlock:           External IP ranges            │       │
│  │                                                  │       │
│  │ Combined in same from/to item = AND              │       │
│  │ Separate from/to items        = OR               │       │
│  └──────────────────────────────────────────────────┘       │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
Common Patterns
Default Deny All Ingress
Block all incoming traffic to pods in the namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # Empty = select all pods
  policyTypes:
  - Ingress
  # No ingress rules = deny all
```
Default Deny All Egress
Section titled “Default Deny All Egress”Block all outgoing traffic from pods in namespace:
apiVersion: networking.k8s.io/v1kind: NetworkPolicymetadata: name: default-deny-egressspec: podSelector: {} policyTypes: - Egress # No egress rules = deny allDefault Deny All
Section titled “Default Deny All”Block both directions:
apiVersion: networking.k8s.io/v1kind: NetworkPolicymetadata: name: default-deny-allspec: podSelector: {} policyTypes: - Ingress - EgressAllow All Ingress
Section titled “Allow All Ingress”Explicitly allow all (useful to override):
apiVersion: networking.k8s.io/v1kind: NetworkPolicymetadata: name: allow-all-ingressspec: podSelector: {} policyTypes: - Ingress ingress: - {} # Empty rule = allow allStop and think: You apply a default-deny egress policy to a namespace. Suddenly, all your pods can’t resolve DNS names and Service connections fail. What did you forget to allow, and why is DNS so critical for Kubernetes networking?
Allow DNS Egress
Essential when using default deny egress:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
```
Complete Example: Three-Tier App
```yaml
# Frontend: can receive from anywhere, can reach backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}   # Allow all ingress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 8080
---
# Backend: only from frontend, can reach database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - port: 5432
---
# Database: only from backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 5432
```
Quick Reference
```bash
# Create NetworkPolicy (must use YAML)
k apply -f policy.yaml

# View NetworkPolicies
k get networkpolicy
k get netpol

# Describe policy
k describe netpol NAME

# Test connectivity
k exec pod1 -- wget -qO- --timeout=2 pod2-svc:80

# Check if CNI supports NetworkPolicies
k get pods -n kube-system | grep -E 'calico|cilium|weave'
```
Did You Know?
- NetworkPolicies require a compatible CNI. Flannel doesn’t support them by default. Calico, Cilium, and Weave do.
- Policies are additive, not subtractive. You can’t create a policy that denies specific traffic—you can only allow. “Deny” happens by selecting a pod without allowing traffic.
- An empty podSelector (`{}`) selects all pods in the namespace.
- When you specify ports in egress, you might also need to allow DNS (port 53 UDP) or pod name resolution won’t work.
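A sketch of that last DNS caveat, assuming the standard `k8s-app: kube-dns` label used by CoreDNS and kube-dns deployments. DNS primarily uses UDP port 53, but resolvers fall back to TCP for large responses, so many teams allow both protocols:

```yaml
# Egress fragment to pair with any restrictive egress policy.
# The namespace-wide scope (namespaceSelector: {}) is an assumption;
# scope it down if your DNS pods live in a known namespace.
egress:
- to:
  - namespaceSelector: {}        # any namespace
    podSelector:
      matchLabels:
        k8s-app: kube-dns        # standard CoreDNS/kube-dns label
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP                # DNS falls back to TCP for large responses
    port: 53
```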
Common Mistakes
| Mistake | Why It Hurts | Solution |
|---|---|---|
| CNI doesn’t support NetworkPolicies | Policies created but ignored | Use Calico, Cilium, or Weave |
| Forgot DNS in egress deny | Pod can’t resolve names | Add egress rule for kube-dns |
| AND vs OR confusion | Wrong pods selected | Remember: same item=AND, different items=OR |
| Empty podSelector confusion | Selected all pods unexpectedly | {} means “all pods in namespace” |
| Forgot policyTypes | Policy doesn’t do what expected | Always specify Ingress and/or Egress |
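For the AND vs OR row, a side-by-side sketch (labels illustrative) shows how a single dash changes the meaning:

```yaml
# AND: one from item, two selectors -> only role=api pods
# that live in namespaces labeled env=staging
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: staging
    podSelector:
      matchLabels:
        role: api

# OR: two from items -> any pod in env=staging namespaces,
# plus role=api pods in the policy's own namespace
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: staging
  - podSelector:
      matchLabels:
        role: api
```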
- After applying a default-deny ingress NetworkPolicy to the `production` namespace, the backend pods can no longer receive traffic from the frontend pods in the same namespace. Both frontend and backend pods are correctly labeled. What do you need to create to restore communication while keeping the default deny in place?

  Answer: Create an additional NetworkPolicy that explicitly allows ingress to the backend pods from the frontend pods. The default-deny policy selects all pods and provides no ingress rules, blocking everything. Since NetworkPolicies are additive, you add a new policy that selects the backend pods (`podSelector: matchLabels: tier: backend`) and allows ingress from frontend pods (`from: - podSelector: matchLabels: tier: frontend`). Both policies apply simultaneously — the deny policy blocks all traffic by default, and the allow policy opens the specific path needed. You don't need to modify or delete the deny policy.

- A developer creates a NetworkPolicy with this `from` rule and is confused about what it allows. The policy has one `from` item containing both `namespaceSelector: matchLabels: env: staging` and `podSelector: matchLabels: role: api`. Does this allow traffic from ALL pods in staging namespaces OR only `role: api` pods in staging namespaces?

  Answer: When `namespaceSelector` and `podSelector` are in the SAME `from` list item (same YAML block, same indentation level under a single dash), they combine with AND logic. This allows traffic only from pods labeled `role: api` that are in namespaces labeled `env: staging`. If they were separate items (each under its own dash), it would be OR logic — allowing traffic from any pod in staging namespaces OR any `role: api` pod in the local namespace. This AND vs OR distinction is one of the most common sources of NetworkPolicy bugs, and it hinges entirely on YAML indentation.

- You apply a default-deny egress NetworkPolicy to a namespace. Immediately, all pods lose the ability to connect to any Service by name, even Services within the same namespace. Connections by IP address still work. What is happening and how do you fix it?

  Answer: DNS resolution is blocked. When pods connect to a Service by name (e.g., `http://my-service`), they first make a DNS query to kube-dns (CoreDNS) on UDP port 53. The default-deny egress policy blocks all outgoing traffic, including DNS queries. Connections by IP bypass DNS so they still work. Fix by adding an egress NetworkPolicy that allows UDP port 53 to the kube-dns pods: allow egress to `namespaceSelector: {}` with `podSelector: matchLabels: k8s-app: kube-dns` on port 53 UDP. This is so common that you should always pair a default-deny egress policy with a DNS allow policy.

- Your cluster uses Flannel as the CNI plugin. You create a NetworkPolicy to isolate your database pods, but when you test, any pod can still connect to the database. The NetworkPolicy YAML is correct and `kubectl get netpol` shows it exists. What is wrong?

  Answer: Flannel does not support NetworkPolicies. NetworkPolicies are a Kubernetes API concept, but enforcement is handled by the CNI plugin. If the CNI doesn't support them, the policies are stored in the API server (so `kubectl get netpol` shows them) but completely ignored at the network level. You need a CNI that supports NetworkPolicies — Calico, Cilium, or Weave are the most common choices. Some teams run Calico alongside Flannel specifically for NetworkPolicy support. This is a critical detail because everything looks correct from the Kubernetes API perspective, but no enforcement happens at the network layer.
Hands-On Exercise
Task: Implement network isolation for a simple application.
Setup:
```bash
# Create namespace
k create ns netpol-demo

# Create pods
k run frontend --image=nginx -n netpol-demo -l tier=frontend
k run backend --image=nginx -n netpol-demo -l tier=backend
k run database --image=nginx -n netpol-demo -l tier=database

# Wait for pods
k wait --for=condition=Ready pod --all -n netpol-demo --timeout=60s

# Create services
k expose pod frontend --port=80 -n netpol-demo
k expose pod backend --port=80 -n netpol-demo
k expose pod database --port=80 -n netpol-demo
```
Part 1: Test Default Connectivity
```bash
# All pods can reach all pods
k exec -n netpol-demo frontend -- wget -qO- --timeout=2 backend:80
k exec -n netpol-demo backend -- wget -qO- --timeout=2 database:80
k exec -n netpol-demo database -- wget -qO- --timeout=2 frontend:80
# All should succeed
```
Part 2: Apply Default Deny
```bash
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF

# Now test - all should fail (if CNI supports NetworkPolicies)
k exec -n netpol-demo frontend -- wget -qO- --timeout=2 backend:80
# Should timeout
```
Part 3: Allow Frontend to Backend
```bash
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 80
EOF

# Test
k exec -n netpol-demo frontend -- wget -qO- --timeout=2 backend:80
# Should succeed

k exec -n netpol-demo database -- wget -qO- --timeout=2 backend:80
# Should fail
```
Cleanup:
```bash
k delete ns netpol-demo
```
Practice Drills
Drill 1: Default Deny Ingress (Target: 2 minutes)
```bash
k create ns drill1
k run web --image=nginx -n drill1

cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress
  namespace: drill1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF

k get netpol -n drill1
k delete ns drill1
```
Drill 2: Allow Specific Pod (Target: 3 minutes)
```bash
k create ns drill2
k run server --image=nginx -n drill2 -l role=server
k run client --image=nginx -n drill2 -l role=client
k expose pod server --port=80 -n drill2

cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client
  namespace: drill2
spec:
  podSelector:
    matchLabels:
      role: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: client
    ports:
    - port: 80
EOF

k describe netpol allow-client -n drill2
k delete ns drill2
```
Drill 3: Egress Policy (Target: 3 minutes)
```bash
k create ns drill3
k run app --image=nginx -n drill3 -l app=web
k run db --image=nginx -n drill3 -l app=db
k expose pod db --port=80 -n drill3

cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-egress
  namespace: drill3
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - port: 80
EOF

k get netpol -n drill3
k delete ns drill3
```
Drill 4: Namespace Selector (Target: 3 minutes)
```bash
k create ns drill4-source
k create ns drill4-target
k label ns drill4-source env=trusted

k run target --image=nginx -n drill4-target -l app=target
k expose pod target --port=80 -n drill4-target

cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: from-trusted
  namespace: drill4-target
spec:
  podSelector:
    matchLabels:
      app: target
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: trusted
EOF

k describe netpol from-trusted -n drill4-target
k delete ns drill4-source drill4-target
```
Drill 5: Combined Selectors (AND) (Target: 3 minutes)
```bash
k create ns drill5
k label ns drill5 env=prod

k run backend --image=nginx -n drill5 -l tier=backend
k run frontend --image=nginx -n drill5 -l tier=frontend

cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: combined-and
  namespace: drill5
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: prod
      podSelector:
        matchLabels:
          tier: frontend
EOF

k describe netpol combined-and -n drill5
k delete ns drill5
```
Drill 6: IP Block (Target: 3 minutes)
```bash
k create ns drill6
k run web --image=nginx -n drill6

cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ip-block
  namespace: drill6
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.0.1.0/24
EOF

k describe netpol ip-block -n drill6
k delete ns drill6
```
Next Module
Part 5 Cumulative Quiz - Test your mastery of Services, Ingress, and NetworkPolicies.