
Module 3.6: Network Policies

Hands-On Lab Available: K8s cluster, advanced, 45 min (launches in Killercoda).

Complexity: [MEDIUM] - Pod-level firewalling

Time to Complete: 45-55 minutes

Prerequisites: Module 3.1 (Services), Module 2.1 (Pods)


After this module, you will be able to:

  • Write NetworkPolicy resources that restrict ingress and egress traffic between pods
  • Debug connectivity blocked by NetworkPolicies using systematic label and selector analysis
  • Design a network segmentation strategy for a multi-tier application (frontend, backend, database)
  • Explain the default-deny pattern and why explicit allow rules are more secure than blacklists

By default, Kubernetes allows all pods to communicate with all other pods—a flat network with no restrictions. Network Policies let you control this traffic, implementing microsegmentation for security. Without Network Policies, a compromised pod can freely communicate with every other pod in the cluster.

The CKA exam frequently tests Network Policies. You’ll need to create ingress/egress rules, understand selectors, and debug policy issues quickly.

The Apartment Building Analogy

Imagine a Kubernetes cluster as an apartment building where every apartment door is unlocked. Any tenant can walk into any other apartment. Network Policies are like installing locks on doors and giving keys only to specific people. You decide who can enter (ingress) and where tenants can go (egress).


By the end of this module, you’ll be able to:

  • Understand when pods are isolated by Network Policies
  • Create ingress and egress rules
  • Use pod, namespace, and IP block selectors
  • Allow DNS traffic properly
  • Debug Network Policy issues

  • NetworkPolicy is just a spec: The API server accepts NetworkPolicy objects, but without a CNI that supports them (like Calico, Cilium, or Weave), they’re ignored.

  • Default deny is powerful: A single “deny all” policy instantly blocks all traffic to selected pods. This is a common security pattern.

  • Order doesn’t matter: Unlike traditional firewalls, NetworkPolicy rules are additive. If any policy allows traffic, it’s allowed. There’s no “deny” rule—just absence of “allow”.


┌────────────────────────────────────────────────────────────────┐
│ Network Policy Flow │
│ │
│ Without NetworkPolicy: │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ All pods can talk to all pods (flat network) │ │
│ │ │ │
│ │ Pod A ◄────────────────────────────► Pod B │ │
│ │ │ │ │ │
│ │ │◄──────────────────────────────────►│ │ │
│ │ │ Pod C │ │ │
│ │ └──────────────────────────────────────►Pod D │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
│ With NetworkPolicy selecting Pod B: │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Pod B is now isolated (only allowed traffic permitted) │ │
│ │ │ │
│ │ Pod A ────────────────────────────X──► Pod B │ │
│ │ │ │ │ │
│ │ │◄──────────────────────────────────►│ │ │
│ │ │ Pod C │ │ │
│ │ └──────────────────────────────────────►Pod D │ │
│ │ │ │
│ │ (Pod B ingress blocked unless explicitly allowed) │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
Concept          Description
-------          -----------
Ingress          Traffic coming INTO the pod
Egress           Traffic going OUT from the pod
podSelector      Which pods the policy applies to
Isolated pods    Pods selected by any NetworkPolicy
Additive rules   Multiple policies = union of all rules

A pod is isolated when:

  1. A NetworkPolicy selects it (via spec.podSelector)
  2. The policy type matches the traffic direction (ingress/egress)

Once isolated:

  • Ingress isolated: Only traffic explicitly allowed by ingress rules is permitted
  • Egress isolated: Only traffic explicitly allowed by egress rules is permitted
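To make the isolation and additive-allow semantics concrete, here is a toy Python model of ingress evaluation (an illustration only, not how any CNI actually implements enforcement). It also shows why `ingress: []` denies everything while `ingress: [{}]` allows everything:

```python
# Toy model of NetworkPolicy ingress evaluation: a pod is isolated if any
# policy selects it, and traffic is allowed if any selecting policy has a
# rule matching the source (rules are additive; there is no "deny" rule).
def selects(selector, labels):
    """Empty selector {} matches every pod; otherwise all labels must match."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(dst_labels, src_labels, policies):
    selecting = [p for p in policies if selects(p["podSelector"], dst_labels)]
    if not selecting:
        return True                      # not isolated: default allow
    for p in selecting:                  # union of all allow rules
        for rule in p.get("ingress", []):
            # An empty rule {} allows all sources; ingress: [] has no rules at all.
            if not rule or selects(rule.get("podSelector", {}), src_labels):
                return True
    return False

deny      = {"podSelector": {"app": "web"}, "ingress": []}    # ingress: []
allow_all = {"podSelector": {"app": "web"}, "ingress": [{}]}  # ingress: [{}]
print(ingress_allowed({"app": "web"}, {"app": "x"}, [deny]))       # → False
print(ingress_allowed({"app": "web"}, {"app": "x"}, [allow_all]))  # → True
print(ingress_allowed({"app": "db"},  {"app": "x"}, [deny]))       # → True (not selected)
```

The last case is the key point: a pod not selected by any policy stays in the default-allow flat network.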
# This policy makes pods with app=web isolated for INGRESS
# (they can still make outbound connections)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-ingress
spec:
  podSelector:
    matchLabels:
      app: web        # Selects these pods
  policyTypes:
  - Ingress           # Only ingress is affected

Pause and predict: You create a NetworkPolicy that selects pods with label app: web but the policy has empty ingress rules (ingress: []). Can anything reach those pods? What if you had written ingress: [{}] instead — how does that single pair of curly braces change everything?

# Deny all incoming traffic to pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}    # Empty = select ALL pods
  policyTypes:
  - Ingress          # No ingress rules = deny all ingress

# Deny all outgoing traffic from pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: production
spec:
  podSelector: {}    # All pods
  policyTypes:
  - Egress           # No egress rules = deny all egress

# Complete lockdown
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

# Explicitly allow all ingress (useful to override deny policies)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}    # Empty rule = allow all

# Explicitly allow all egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}    # Empty rule = allow all

# Allow traffic from pods with label app=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend    # This policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # Allow traffic from frontend pods
┌────────────────────────────────────────────────────────────────┐
│ Pod Selector Example │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Pod │ │ Pod │ │
│ │ app: frontend │────────►│ app: backend │ │
│ │ │ ✓ │ │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Pod │ │ Pod │ │
│ │ app: other │────X───►│ app: backend │ │
│ │ │ ✗ │ │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
# Allow traffic from all pods in namespace "monitoring"
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring    # Namespace must have this label

Important: Namespaces need labels! Add with:

k label namespace monitoring name=monitoring
# Allow traffic from specific IP ranges
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24    # Allow this range
        except:
        - 192.168.1.100/32      # Except this IP
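The cidr/except semantics can be sanity-checked with Python's standard ipaddress module. This is a quick sketch of the matching logic, not cluster tooling:

```python
import ipaddress

def ipblock_matches(ip, cidr, excepts=()):
    """True if ip falls inside cidr and inside none of the except ranges."""
    addr = ipaddress.ip_address(ip)
    if addr not in ipaddress.ip_network(cidr):
        return False
    return not any(addr in ipaddress.ip_network(e) for e in excepts)

print(ipblock_matches("192.168.1.50", "192.168.1.0/24", ["192.168.1.100/32"]))   # → True
print(ipblock_matches("192.168.1.100", "192.168.1.0/24", ["192.168.1.100/32"]))  # → False
print(ipblock_matches("10.0.0.5", "192.168.1.0/24"))                             # → False
```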
# Allow HTTP and HTTPS only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ports
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # From any pod
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443

Stop and think: Look at the two YAML snippets below. One allows traffic from frontend pods OR from the monitoring namespace. The other allows traffic only from frontend pods that are IN the monitoring namespace. The only difference is indentation. Can you spot which is which before reading the explanation?

# OR logic: from frontend pods OR from monitoring namespace
ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
- from:
  - namespaceSelector:
      matchLabels:
        name: monitoring

# AND logic: from frontend pods IN monitoring namespace
ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
    namespaceSelector:
      matchLabels:
        name: monitoring
┌────────────────────────────────────────────────────────────────┐
│ Selector Logic │
│ │
│ Two separate "from" items = OR │
│ - from: │
│ - podSelector: {app: A} # Match A │
│ - from: │
│ - podSelector: {app: B} # OR match B │
│ │
│ Same "from" item = AND │
│ - from: │
│ - podSelector: {app: A} # Match A │
│ namespaceSelector: {x: y} # AND in namespace with x=y │
│ │
└────────────────────────────────────────────────────────────────┘
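The same AND/OR distinction can be modeled in a few lines of Python. This is a toy illustration of selector matching, not real CNI code:

```python
# Toy model of 'from' evaluation: selectors inside ONE list item are ANDed,
# while separate list items are ORed.
def match_selector(sel, labels):
    return all(labels.get(k) == v for k, v in sel.items())

def peer_matches(peer, pod_labels, ns_labels):
    ok = True
    if "podSelector" in peer:
        ok = ok and match_selector(peer["podSelector"], pod_labels)
    if "namespaceSelector" in peer:
        ok = ok and match_selector(peer["namespaceSelector"], ns_labels)
    return ok

def from_matches(from_items, pod_labels, ns_labels):
    return any(peer_matches(p, pod_labels, ns_labels) for p in from_items)

and_rule = [{"podSelector": {"app": "frontend"},
             "namespaceSelector": {"name": "monitoring"}}]   # one item = AND
or_rule  = [{"podSelector": {"app": "frontend"}},
            {"namespaceSelector": {"name": "monitoring"}}]   # two items = OR

# A frontend pod in the default namespace:
print(from_matches(and_rule, {"app": "frontend"}, {"name": "default"}))  # → False
print(from_matches(or_rule,  {"app": "frontend"}, {"name": "default"}))  # → True
```

The OR rule lets that pod in because its pod labels alone satisfy the first peer; the AND rule rejects it because its namespace label fails the combined check.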
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: complex-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Rule 1: Allow from frontend in same namespace
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
  # Rule 2: Allow from any pod in monitoring namespace
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - port: 9090
  egress:
  # Rule 1: Allow to database pods
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 5432
  # Rule 2: Allow DNS
  - to:
    - namespaceSelector: {}
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP

What would happen if: You apply a deny-all egress policy to your backend pods but forget to add a DNS exception. The pods can still reach the database pod by IP, but curl db-service fails. Why does direct IP access work but service name resolution does not?
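The mechanics behind this question can be sketched in Python. This is a conceptual model (all IPs below are hypothetical), showing that name-based access requires a DNS query, which is itself egress traffic, while direct IP access skips that step:

```python
# Conceptual sketch: why blocking DNS egress breaks name-based access
# while direct IP access still works. Not real cluster behavior.
ALLOWED_EGRESS = {("10.244.2.5", 5432)}   # egress rule: database pod only (hypothetical IP)
DNS_SERVER = ("10.96.0.10", 53)           # CoreDNS (a typical ClusterIP; hypothetical here)
DNS_TABLE = {"db-service": "10.244.2.5"}  # what CoreDNS would answer

def egress_allowed(dest):
    return dest in ALLOWED_EGRESS

def connect(target, port):
    if target[0].isdigit():
        ip = target                        # already an IP: no lookup needed
    else:
        if not egress_allowed(DNS_SERVER): # the lookup is an egress connection too
            raise RuntimeError("DNS query blocked by egress policy")
        ip = DNS_TABLE[target]
    if not egress_allowed((ip, port)):
        raise RuntimeError("connection blocked")
    return ip

print(connect("10.244.2.5", 5432))   # → 10.244.2.5 (direct IP works)
try:
    connect("db-service", 5432)      # name resolution fails first
except RuntimeError as e:
    print(e)                         # → DNS query blocked by egress policy
```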

# Backend can only talk to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 5432

When restricting egress, you must allow DNS or pods can’t resolve service names:

# Allow DNS to kube-system
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
# Allow egress to external IPs
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0     # All IPs
        except:
        - 10.0.0.0/8        # Except private ranges
        - 172.16.0.0/12
        - 192.168.0.0/16

Network Policy Issue?
├── Does the CNI support NetworkPolicy?
│     (Calico, Cilium, Weave = yes; Flannel = no)
├── kubectl get networkpolicy -n <namespace>
│     (list policies affecting the pods)
├── kubectl describe networkpolicy <name>
│     (check selectors and rules)
├── Check that pod labels match:
│     kubectl get pods --show-labels
├── Check namespace labels (for namespaceSelector):
│     kubectl get namespace --show-labels
└── Test connectivity:
      kubectl exec <pod> -- nc -zv <target> <port>
# List network policies
k get networkpolicy
k get netpol # Short form
# Describe policy
k describe networkpolicy <name>
# Check pod labels
k get pods --show-labels
# Check namespace labels
k get namespaces --show-labels
# Test connectivity
k exec <pod> -- nc -zv <service> <port>
k exec <pod> -- wget --spider --timeout=1 http://<service>
k exec <pod> -- curl -s --max-time 1 http://<service>
Symptom                      Cause                        Solution
-------                      -----                        --------
Policy not enforced          CNI doesn't support it       Use Calico, Cilium, or Weave
Can't resolve DNS            DNS egress blocked           Add egress rule for port 53
Cross-namespace blocked      namespaceSelector wrong      Label namespaces, check selector
All traffic blocked          Empty podSelector in deny    Create allow rules for needed traffic
Pods can still communicate   Labels don't match           Verify podSelector matches pod labels

# Only allow backend pods to access database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-protection
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - port: 5432
# Web tier - only from ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
spec:
  podSelector:
    matchLabels:
      tier: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
---
# App tier - only from web tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-policy
spec:
  podSelector:
    matchLabels:
      tier: app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: web
---
# DB tier - only from app tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      tier: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: app
    ports:
    - port: 5432

# Default deny all, then allow within namespace only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}    # Same namespace only
  egress:
  - to:
    - podSelector: {}    # Same namespace only
  - to:                  # Plus DNS
    - namespaceSelector: {}
    ports:
    - port: 53
      protocol: UDP

Mistake                  Problem                      Solution
-------                  -------                      --------
Using unsupported CNI    Policies ignored             Switch to Calico, Cilium, or Weave
Forgetting DNS egress    Pods can't resolve names     Add port 53 UDP/TCP egress
Unlabeled namespaces     namespaceSelector fails      Label namespaces first
Wrong selector logic     Too permissive/restrictive   Check AND vs OR (same from vs separate)
Empty ingress array      Blocks all ingress           Use ingress: [{}] to allow all

  1. Your security team wants to lock down a production namespace so that no pod can receive traffic unless explicitly allowed. You apply a deny-all ingress policy, but the monitoring team reports their Prometheus scraper can still reach pods. What could explain this?

    Answer: The most likely cause is that the CNI plugin does not support NetworkPolicy enforcement. If the cluster uses Flannel (which does not implement NetworkPolicy), the policy is accepted by the API server but never enforced, and traffic flows freely regardless. Verify with `k get pods -n kube-system | grep -E "calico|cilium|weave"`. If using an unsupported CNI, you must switch to Calico, Cilium, or Weave for policy enforcement. Another possibility: the Prometheus scraper runs with `hostNetwork: true`, and some CNI implementations do not enforce policies on host-networked pods.
  2. You write a NetworkPolicy to allow your backend pods to talk only to the database. It works, but now the backend pods cannot resolve DNS names — curl db-service fails while curl 10.244.2.5 (the DB pod IP) works fine. What did you forget, and how do you fix it without opening up all egress?

    Answer: The egress policy blocked DNS traffic (UDP/TCP port 53) to CoreDNS. Service name resolution requires the pod to send a DNS query to CoreDNS in kube-system, which the egress policy blocks. Add a DNS egress rule: allow egress to port 53 (both UDP and TCP) targeting the kube-system namespace with `namespaceSelector: {matchLabels: {kubernetes.io/metadata.name: kube-system}}`. This is the most common NetworkPolicy mistake: any time you restrict egress, you must explicitly allow DNS or service discovery breaks.
  3. A colleague writes this NetworkPolicy and claims it allows traffic from frontend pods in the monitoring namespace only. But in testing, ALL pods in the monitoring namespace can reach the backend. Find the bug.

    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend
      - namespaceSelector:
          matchLabels:
            name: monitoring

    Answer: The bug is OR vs AND logic. The two selectors are separate list items (note the two dashes under `from:`), which means OR: allow from pods with `app=frontend` (in any namespace) OR from any pod in the `monitoring` namespace. To make it AND (only frontend pods IN monitoring), combine them into a single list item by removing the dash before `namespaceSelector` so it sits under the same item as `podSelector`. This single-character indentation difference is one of the most common and dangerous NetworkPolicy mistakes.
  4. You are designing network policies for a three-tier app: web (receives external traffic), app (receives from web only), and database (receives from app only on port 5432). The web tier also needs to call an external payment API. Describe the policies you would create and in what order.

    Answer: First, create a default-deny ingress policy for the namespace (`podSelector: {}`, no ingress rules). Then create three allow policies: (1) Web tier: allow ingress from the ingress controller's namespace using a `namespaceSelector`. (2) App tier: allow ingress from pods with `tier: web` on the app port using `podSelector`. (3) Database tier: allow ingress only from pods with `tier: app` on port 5432. For the web tier's external API access, add an egress policy: allow egress to the payment API's IP block using `ipBlock.cidr`, plus DNS egress (port 53) to kube-system. Order matters for implementation: apply the deny-all first, then the allow policies, so there is no window where traffic is unrestricted.
  5. After applying NetworkPolicies, a developer reports that inter-pod communication works in the staging namespace but the same policies fail in production. The policies use namespaceSelector with matchLabels: {env: production}. What is the likely issue?

    Answer: The `production` namespace likely does not have the label `env: production`. Unlike pods (which inherit labels from their Deployment template), namespaces must be labeled manually; Kubernetes does not automatically apply custom labels to namespaces. Run `k get namespace production --show-labels` to verify. Fix with `k label namespace production env=production`. Note that newer Kubernetes versions do auto-apply `kubernetes.io/metadata.name` (set to the namespace's name), so using that built-in label is more reliable than custom labels.

Task: Implement network policies for a three-tier application.

Steps:

  1. Create test pods:
# Create pods with different roles
k run frontend --image=nginx --labels="tier=frontend"
k run backend --image=nginx --labels="tier=backend"
k run database --image=nginx --labels="tier=database"
# Wait for pods to be ready
k wait --for=condition=ready pod/frontend pod/backend pod/database --timeout=60s
  2. Verify default connectivity (everything should work):
BACKEND_IP=$(k get pod backend -o jsonpath='{.status.podIP}')
k exec frontend -- wget --spider --timeout=1 http://$BACKEND_IP
# Should succeed
  3. Create a deny-all policy:
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
  4. Test connectivity (should fail if the CNI enforces NetworkPolicy):
k exec frontend -- wget --spider --timeout=1 http://$BACKEND_IP
# Should timeout (if CNI supports NetworkPolicy)
  5. Allow frontend to backend:
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 80
EOF
  6. Test again:
k exec frontend -- wget --spider --timeout=1 http://$BACKEND_IP
# Should succeed now
# But database to backend should still fail
DATABASE_IP=$(k get pod database -o jsonpath='{.status.podIP}')
k exec database -- wget --spider --timeout=1 http://$BACKEND_IP
# Should fail
  7. Allow backend to database:
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 80
EOF
  8. List all policies:
k get networkpolicy
k describe networkpolicy
  9. Cleanup:
k delete networkpolicy deny-all allow-frontend-to-backend allow-backend-to-database
k delete pod frontend backend database

Success Criteria:

  • Understand default-allow behavior without policies
  • Can create deny-all policies
  • Can create selective allow policies
  • Understand pod selector matching
  • Can debug policy issues

Drill 1: Deny All Ingress (Target: 2 minutes)

# Create pod
k run test-pod --image=nginx --labels="app=test"
# Create deny-all ingress
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress
spec:
  podSelector:
    matchLabels:
      app: test
  policyTypes:
  - Ingress
EOF
# Verify
k describe networkpolicy deny-ingress
# Cleanup
k delete networkpolicy deny-ingress
k delete pod test-pod

Drill 2: Allow from Specific Pod (Target: 3 minutes)

# Create pods
k run server --image=nginx --labels="role=server"
k run client --image=nginx --labels="role=client"
k run other --image=nginx --labels="role=other"
# Create policy allowing only client
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client
spec:
  podSelector:
    matchLabels:
      role: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: client
    ports:
    - port: 80
EOF
# Verify policy
k describe networkpolicy allow-client
# Cleanup
k delete networkpolicy allow-client
k delete pod server client other

Drill 3: Allow from Namespace (Target: 4 minutes)

# Create namespace with label
k create namespace allowed
k label namespace allowed name=allowed
# Create pods
k run target --image=nginx --labels="app=target"
k run source --image=nginx -n allowed
# Create policy
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace
spec:
  podSelector:
    matchLabels:
      app: target
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: allowed
EOF
# Verify
k describe networkpolicy allow-namespace
# Cleanup
k delete networkpolicy allow-namespace
k delete pod target
k delete namespace allowed

Drill 4: Egress with DNS (Target: 4 minutes)

# Create pod
k run egress-test --image=nginx --labels="app=egress"
# Create egress policy with DNS
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-dns
spec:
  podSelector:
    matchLabels:
      app: egress
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to:
    - namespaceSelector: {}
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # Allow HTTPS
  - to: []    # empty 'to' = any destination
    ports:
    - port: 443
EOF
# Verify
k describe networkpolicy egress-dns
# Cleanup
k delete networkpolicy egress-dns
k delete pod egress-test

Drill 5: Port-Specific Ingress (Target: 3 minutes)

# Create pod
k run web --image=nginx --labels="app=web"
# Allow only ports 80 and 443
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-ports
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 80
      protocol: TCP
    - port: 443
      protocol: TCP
EOF
# Verify
k describe networkpolicy web-ports
# Cleanup
k delete networkpolicy web-ports
k delete pod web

Drill 6: IP Block Policy (Target: 3 minutes)

# Create pod
k run ip-test --image=nginx --labels="app=ip-test"
# Create policy with IP block
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ip-block
spec:
  podSelector:
    matchLabels:
      app: ip-test
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.0.1.0/24
EOF
# Verify
k describe networkpolicy ip-block
# Cleanup
k delete networkpolicy ip-block
k delete pod ip-test

Drill 7: Combined AND Selector (Target: 4 minutes)

# Create namespace
k create namespace restricted
k label namespace restricted name=restricted
# Create pod
k run secure --image=nginx --labels="app=secure"
# Create policy with AND logic
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: and-policy
spec:
  podSelector:
    matchLabels:
      app: secure
  policyTypes:
  - Ingress
  ingress:
  - from:
    # AND: must be a frontend pod IN the restricted namespace
    - podSelector:
        matchLabels:
          role: frontend
      namespaceSelector:
        matchLabels:
          name: restricted
EOF
# Verify
k describe networkpolicy and-policy
# Cleanup
k delete networkpolicy and-policy
k delete pod secure
k delete namespace restricted

Drill 8: Challenge - Complete Network Isolation


Without looking at solutions:

  1. Create namespace secure with label zone=secure
  2. Create pods: app (label: tier=app), db (label: tier=db)
  3. Create deny-all ingress policy
  4. Allow app to receive traffic from any pod in cluster
  5. Allow db to receive traffic only from app pods, port 5432
  6. Verify with kubectl describe
  7. Cleanup everything
# YOUR TASK: Complete in under 7 minutes
Solution:
# 1. Create namespace
k create namespace secure
k label namespace secure zone=secure
# 2. Create pods
k run app -n secure --image=nginx --labels="tier=app"
k run db -n secure --image=nginx --labels="tier=db"
# 3. Deny all ingress
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: secure
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
# 4. Allow app from anywhere
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app
  namespace: secure
spec:
  podSelector:
    matchLabels:
      tier: app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
EOF
# 5. Allow db from app only
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db
  namespace: secure
spec:
  podSelector:
    matchLabels:
      tier: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: app
    ports:
    - port: 5432
EOF
# 6. Verify
k get networkpolicy -n secure
k describe networkpolicy -n secure
# 7. Cleanup
k delete namespace secure

Next: Module 3.7: CNI & Cluster Networking - Understanding the Container Network Interface.