Module 3.6: Network Policies
Complexity: [MEDIUM] - Pod-level firewalling
Time to Complete: 45-55 minutes
Prerequisites: Module 3.1 (Services), Module 2.1 (Pods)
What You’ll Be Able to Do
After this module, you will be able to:
- Write NetworkPolicy resources that restrict ingress and egress traffic between pods
- Debug connectivity blocked by NetworkPolicies using systematic label and selector analysis
- Design a network segmentation strategy for a multi-tier application (frontend, backend, database)
- Explain the default-deny pattern and why explicit allow rules are more secure than blacklists
Why This Module Matters
By default, Kubernetes allows all pods to communicate with all other pods—a flat network with no restrictions. Network Policies let you control this traffic, implementing microsegmentation for security. Without Network Policies, a compromised pod can freely communicate with every other pod in the cluster.
The CKA exam frequently tests Network Policies. You’ll need to create ingress/egress rules, understand selectors, and debug policy issues quickly.
The Apartment Building Analogy
Imagine a Kubernetes cluster as an apartment building where every apartment door is unlocked. Any tenant can walk into any other apartment. Network Policies are like installing locks on doors and giving keys only to specific people. You decide who can enter (ingress) and where tenants can go (egress).
What You’ll Learn
By the end of this module, you’ll be able to:
- Understand when pods are isolated by Network Policies
- Create ingress and egress rules
- Use pod, namespace, and IP block selectors
- Allow DNS traffic properly
- Debug Network Policy issues
Did You Know?
- NetworkPolicy is just a spec: The API server accepts NetworkPolicy objects, but without a CNI that supports them (like Calico, Cilium, or Weave), they’re ignored.
- Default deny is powerful: A single “deny all” policy instantly blocks all traffic to selected pods. This is a common security pattern.
- Order doesn’t matter: Unlike traditional firewalls, NetworkPolicy rules are additive. If any policy allows traffic, it’s allowed. There’s no “deny” rule—just absence of “allow”.
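The additive behavior is easiest to see with two policies side by side. The sketch below (labels and policy names are illustrative) shows two policies selecting the same pods; the result is the union of their allow rules:

```yaml
# Policy 1: allow ingress to api pods from app=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
---
# Policy 2: allow ingress to the same api pods from app=batch.
# Result: api pods accept traffic from frontend OR batch.
# Neither policy can "deny" what the other allows.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-batch
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: batch
EOF_MARKER_NOT_NEEDED: false
```

To revoke access, you remove or edit an allow rule; there is no deny rule to add.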
Part 1: Network Policy Fundamentals
1.1 How Network Policies Work
```
Network Policy Flow

Without NetworkPolicy (flat network):

  All pods can talk to all pods.

  Pod A ◄──────────────────────────► Pod B
  Pod C ◄──────────────────────────► Pod D
  (every pod can reach every other pod)

With a NetworkPolicy selecting Pod B:

  Pod B is now isolated (only allowed traffic permitted).

  Pod A ──────────────X────────────► Pod B
  Pod C ◄──────────────────────────► Pod D
  (Pod B ingress blocked unless explicitly allowed)
```

1.2 Key Concepts
| Concept | Description |
|---|---|
| Ingress | Traffic coming INTO the pod |
| Egress | Traffic going OUT from the pod |
| podSelector | Which pods the policy applies to |
| Isolated pods | Pods selected by a NetworkPolicy (for that policy’s direction) |
| Additive rules | Multiple policies = union of all rules |
1.3 When Are Pods Isolated?
A pod is isolated when:
- A NetworkPolicy selects it (via `spec.podSelector`)
- The policy’s `policyTypes` includes the traffic direction (Ingress/Egress)
Once isolated:
- Ingress isolated: Only traffic explicitly allowed by ingress rules is permitted
- Egress isolated: Only traffic explicitly allowed by egress rules is permitted
```yaml
# This policy makes pods with app=web isolated for INGRESS
# (they can still make outbound connections)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-ingress
spec:
  podSelector:
    matchLabels:
      app: web        # Selects these pods
  policyTypes:
    - Ingress         # Only ingress is affected
```

Pause and predict: You create a NetworkPolicy that selects pods with label `app: web`, but the policy has empty ingress rules (`ingress: []`). Can anything reach those pods? What if you had written `ingress: [{}]` instead — how does that single pair of curly braces change everything?
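Once you have made your prediction, compare against this sketch of the two variants (policy names are illustrative):

```yaml
# Variant 1: ingress: [] — an empty rule LIST.
# No allow rules exist, so all ingress to app=web is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-empty-rules
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress: []        # no rules => nothing allowed
---
# Variant 2: ingress: [{}] — a list containing one EMPTY rule.
# An empty rule matches all sources and all ports,
# so all ingress to app=web is allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-everything
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - {}             # one empty rule => everything allowed
```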
Part 2: Basic Network Policies
2.1 Deny All Ingress (Default Deny)
```yaml
# Deny all incoming traffic to pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}     # Empty = select ALL pods
  policyTypes:
    - Ingress         # No ingress rules = deny all ingress
```

2.2 Deny All Egress
```yaml
# Deny all outgoing traffic from pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: production
spec:
  podSelector: {}     # All pods
  policyTypes:
    - Egress          # No egress rules = deny all egress
```

2.3 Deny All (Both Directions)
```yaml
# Complete lockdown
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

2.4 Allow All Ingress
```yaml
# Explicitly allow all ingress (useful to override deny policies)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - {}    # Empty rule = allow all
```

2.5 Allow All Egress
```yaml
# Explicitly allow all egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - {}    # Empty rule = allow all
```

Part 3: Selective Policies
3.1 Allow Ingress from Specific Pods
```yaml
# Allow traffic from pods with label app=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend    # This policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # Allow traffic from frontend pods
```

```
Pod Selector Example

  ┌────────────────┐          ┌────────────────┐
  │ Pod            │    ✓     │ Pod            │
  │ app: frontend  │─────────►│ app: backend   │
  └────────────────┘          └────────────────┘

  ┌────────────────┐          ┌────────────────┐
  │ Pod            │    ✗     │ Pod            │
  │ app: other     │────X────►│ app: backend   │
  └────────────────┘          └────────────────┘
```

3.2 Allow Ingress from Namespace
```yaml
# Allow traffic from all pods in namespace "monitoring"
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring    # Namespace must have this label
```

Important: Namespaces need labels! Add with:

```bash
k label namespace monitoring name=monitoring
```
3.3 Allow Ingress from IP Block
```yaml
# Allow traffic from specific IP ranges
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.1.0/24      # Allow this range
            except:
              - 192.168.1.100/32      # Except this IP
```

3.4 Allow Ingress on Specific Ports
```yaml
# Allow HTTP and HTTPS only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ports
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}     # From any pod
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
```

Part 4: Combining Selectors
Stop and think: Look at the two YAML snippets below in section 4.1. One allows traffic from frontend pods OR from the monitoring namespace. The other allows traffic only from frontend pods that are IN the monitoring namespace. The only difference is indentation. Can you spot which is which before reading the explanation?
4.1 AND vs OR Logic
```yaml
# OR logic: from frontend pods OR from monitoring namespace
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: frontend
  - from:
      - namespaceSelector:
          matchLabels:
            name: monitoring
```

```yaml
# AND logic: from frontend pods IN monitoring namespace
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: frontend
        namespaceSelector:
          matchLabels:
            name: monitoring
```

```
Selector Logic

Two separate "from" items = OR
  - from:
      - podSelector: {app: A}          # Match A
  - from:
      - podSelector: {app: B}          # OR match B

Same "from" item = AND
  - from:
      - podSelector: {app: A}          # Match A
        namespaceSelector: {x: y}      # AND in namespace with x=y
```

4.2 Complex Example
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: complex-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Rule 1: Allow from frontend in same namespace
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
    # Rule 2: Allow from any pod in monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - port: 9090
  egress:
    # Rule 1: Allow to database pods
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - port: 5432
    # Rule 2: Allow DNS
    - to:
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

What would happen if: You apply an egress policy to your backend pods that allows only traffic to the database, and you forget a DNS exception. The pods can still reach the database pod by IP, but `curl db-service` fails. Why does direct IP access work but service name resolution does not?
Part 5: Egress Policies
5.1 Allow Egress to Specific Pods
```yaml
# Backend can only talk to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - port: 5432
```

5.2 Allow DNS (Critical!)
When restricting egress, you must allow DNS or pods can’t resolve service names:
```yaml
# Allow DNS to kube-system
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

5.3 Allow External Traffic
Section titled “5.3 Allow External Traffic”# Allow egress to external IPsapiVersion: networking.k8s.io/v1kind: NetworkPolicymetadata: name: allow-externalspec: podSelector: matchLabels: app: web policyTypes: - Egress egress: - to: - ipBlock: cidr: 0.0.0.0/0 # All IPs except: - 10.0.0.0/8 # Except private ranges - 172.16.0.0/12 - 192.168.0.0/16Part 6: Debugging Network Policies
Section titled “Part 6: Debugging Network Policies”6.1 Debugging Workflow
```
Network Policy Issue?
│
├── Does CNI support NetworkPolicy?
│   (Calico, Cilium, Weave = yes; Flannel = no)
│
├── kubectl get networkpolicy -n <namespace>
│   (List policies affecting pods)
│
├── kubectl describe networkpolicy <name>
│   (Check selectors and rules)
│
├── Check pod labels match
│   kubectl get pods --show-labels
│
├── Check namespace labels (for namespaceSelector)
│   kubectl get namespace --show-labels
│
└── Test connectivity
    kubectl exec <pod> -- nc -zv <target> <port>
```

6.2 Common Commands
```bash
# List network policies
k get networkpolicy
k get netpol    # Short form

# Describe policy
k describe networkpolicy <name>

# Check pod labels
k get pods --show-labels

# Check namespace labels
k get namespaces --show-labels

# Test connectivity
k exec <pod> -- nc -zv <service> <port>
k exec <pod> -- wget --spider --timeout=1 http://<service>
k exec <pod> -- curl -s --max-time 1 http://<service>
```

6.3 Common Issues
| Symptom | Cause | Solution |
|---|---|---|
| Policy not enforced | CNI doesn’t support | Use Calico, Cilium, or Weave |
| Can’t resolve DNS | DNS egress blocked | Add egress rule for port 53 |
| Cross-namespace blocked | namespaceSelector wrong | Label namespaces, check selector |
| All traffic blocked | Empty podSelector in deny | Create allow rules for needed traffic |
| Pods can still communicate | Labels don’t match | Verify podSelector matches pod labels |
Part 7: Common Patterns
7.1 Database Protection
```yaml
# Only allow backend pods to access database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-protection
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - port: 5432
```

7.2 Three-Tier Application
```yaml
# Web tier - only from ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
spec:
  podSelector:
    matchLabels:
      tier: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
---
# App tier - only from web tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-policy
spec:
  podSelector:
    matchLabels:
      tier: app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: web
---
# DB tier - only from app tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      tier: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: app
      ports:
        - port: 5432
```

7.3 Namespace Isolation
```yaml
# Default deny all, then allow within namespace only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}       # Same namespace only
  egress:
    - to:
        - podSelector: {}       # Same namespace only
    - to:                       # Plus DNS
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
```

Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Using unsupported CNI | Policies ignored | Switch to Calico, Cilium, or Weave |
| Forgetting DNS egress | Pods can’t resolve names | Add port 53 UDP/TCP egress |
| Unlabeled namespaces | namespaceSelector fails | Label namespaces first |
| Wrong selector logic | Too permissive/restrictive | Check AND vs OR (same from vs separate) |
| Empty ingress array | Blocks all ingress | Use ingress: [{}] to allow all |
Review Questions

- Your security team wants to lock down a production namespace so that no pod can receive traffic unless explicitly allowed. You apply a deny-all ingress policy, but the monitoring team reports their Prometheus scraper can still reach pods. What could explain this?

  Answer: The most likely cause is that the CNI plugin does not support NetworkPolicy enforcement. If the cluster uses Flannel (which does not implement NetworkPolicy), the policy is accepted by the API server but never enforced -- traffic flows freely regardless. Verify with `k get pods -n kube-system | grep -E "calico|cilium|weave"`. If using an unsupported CNI, you must switch to Calico, Cilium, or Weave for policy enforcement. Another possibility: the Prometheus scraper runs with `hostNetwork: true`, and some CNI implementations do not enforce policies on host-networked pods.

- You write a NetworkPolicy to allow your backend pods to talk only to the database. It works, but now the backend pods cannot resolve DNS names — `curl db-service` fails while `curl 10.244.2.5` (the DB pod IP) works fine. What did you forget, and how do you fix it without opening up all egress?

  Answer: The egress policy blocked DNS traffic (UDP/TCP port 53) to CoreDNS. Service name resolution requires the pod to send a DNS query to CoreDNS in kube-system, which the egress policy blocks. Add a DNS egress rule: allow egress to port 53 (both UDP and TCP) targeting the kube-system namespace with `namespaceSelector: {matchLabels: {kubernetes.io/metadata.name: kube-system}}`. This is the most common NetworkPolicy mistake -- any time you restrict egress, you must explicitly allow DNS or service discovery breaks.

- A colleague writes this NetworkPolicy and claims it allows traffic from frontend pods in the monitoring namespace only. But in testing, ALL pods in the monitoring namespace can reach the backend. Find the bug.

  ```yaml
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
        - namespaceSelector:
            matchLabels:
              name: monitoring
  ```

  Answer: The bug is the OR vs AND logic. The two selectors are separate list items (note the two dashes under `from:`), which means OR: allow from pods with `app=frontend` (in any namespace) OR from any pod in the `monitoring` namespace. To make it AND (only frontend pods IN monitoring), combine them into a single list item by removing the second dash, so `namespaceSelector` sits at the same indentation as `podSelector` within one entry. This single-character indentation difference is one of the most common and dangerous NetworkPolicy mistakes.

- You are designing network policies for a three-tier app: web (receives external traffic), app (receives from web only), and database (receives from app only on port 5432). The web tier also needs to call an external payment API. Describe the policies you would create and in what order.

  Answer: First, create a default-deny ingress policy for the namespace (`podSelector: {}`, no ingress rules). Then create three allow policies: (1) Web tier: allow ingress from the ingress controller's namespace using a `namespaceSelector`. (2) App tier: allow ingress from pods with `tier: web` on the app port using `podSelector`. (3) Database tier: allow ingress only from pods with `tier: app` on port 5432. For the web tier's external API access, add an egress policy: allow egress to the payment API's IP block using `ipBlock.cidr`, plus DNS egress (port 53) to kube-system. Order matters for implementation: apply the deny-all first, then the allow policies, so there is no window where traffic is unrestricted.

- After applying NetworkPolicies, a developer reports that inter-pod communication works in the `staging` namespace but the same policies fail in `production`. The policies use `namespaceSelector` with `matchLabels: {env: production}`. What is the likely issue?

  Answer: The `production` namespace likely does not have the label `env: production`. Unlike pods (which inherit labels from their Deployment template), namespaces must be labeled manually. Kubernetes does not automatically label namespaces with custom labels. Run `k get namespace production --show-labels` to verify. Fix with `k label namespace production env=production`. Note that newer Kubernetes versions do auto-apply the `kubernetes.io/metadata.name` label, so using that built-in label is more reliable than custom labels.
Hands-On Exercise
Task: Implement network policies for a three-tier application.
Steps:
- Create test pods:

```bash
# Create pods with different roles
k run frontend --image=nginx --labels="tier=frontend"
k run backend --image=nginx --labels="tier=backend"
k run database --image=nginx --labels="tier=database"

# Wait for pods to be ready
k wait --for=condition=ready pod/frontend pod/backend pod/database --timeout=60s
```

- Verify default connectivity (everything should work):

```bash
BACKEND_IP=$(k get pod backend -o jsonpath='{.status.podIP}')
k exec frontend -- wget --spider --timeout=1 http://$BACKEND_IP
# Should succeed
```

- Create deny-all policy:

```bash
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```

- Test connectivity (should fail if CNI supports it):

```bash
k exec frontend -- wget --spider --timeout=1 http://$BACKEND_IP
# Should timeout (if CNI supports NetworkPolicy)
```

- Allow frontend to backend:

```bash
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - port: 80
EOF
```

- Test again:

```bash
k exec frontend -- wget --spider --timeout=1 http://$BACKEND_IP
# Should succeed now

# But database to backend should still fail
DATABASE_IP=$(k get pod database -o jsonpath='{.status.podIP}')
k exec database -- wget --spider --timeout=1 http://$BACKEND_IP
# Should fail
```

- Allow backend to database:

```bash
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - port: 80
EOF
```

- List all policies:

```bash
k get networkpolicy
k describe networkpolicy
```

- Cleanup:

```bash
k delete networkpolicy deny-all allow-frontend-to-backend allow-backend-to-database
k delete pod frontend backend database
```

Success Criteria:
- Understand default-allow behavior without policies
- Can create deny-all policies
- Can create selective allow policies
- Understand pod selector matching
- Can debug policy issues
Practice Drills
Drill 1: Deny All Ingress (Target: 2 minutes)
```bash
# Create pod
k run test-pod --image=nginx --labels="app=test"

# Create deny-all ingress
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress
spec:
  podSelector:
    matchLabels:
      app: test
  policyTypes:
    - Ingress
EOF

# Verify
k describe networkpolicy deny-ingress

# Cleanup
k delete networkpolicy deny-ingress
k delete pod test-pod
```

Drill 2: Allow from Specific Pod (Target: 3 minutes)
```bash
# Create pods
k run server --image=nginx --labels="role=server"
k run client --image=nginx --labels="role=client"
k run other --image=nginx --labels="role=other"

# Create policy allowing only client
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client
spec:
  podSelector:
    matchLabels:
      role: server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: client
      ports:
        - port: 80
EOF

# Verify policy
k describe networkpolicy allow-client

# Cleanup
k delete networkpolicy allow-client
k delete pod server client other
```

Drill 3: Allow from Namespace (Target: 4 minutes)
```bash
# Create namespace with label
k create namespace allowed
k label namespace allowed name=allowed

# Create pods
k run target --image=nginx --labels="app=target"
k run source --image=nginx -n allowed

# Create policy
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace
spec:
  podSelector:
    matchLabels:
      app: target
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: allowed
EOF

# Verify
k describe networkpolicy allow-namespace

# Cleanup
k delete networkpolicy allow-namespace
k delete pod target
k delete namespace allowed
```

Drill 4: Egress with DNS (Target: 4 minutes)
```bash
# Create pod
k run egress-test --image=nginx --labels="app=egress"

# Create egress policy with DNS
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-dns
spec:
  podSelector:
    matchLabels:
      app: egress
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Allow HTTPS
    - to: []
      ports:
        - port: 443
EOF

# Verify
k describe networkpolicy egress-dns

# Cleanup
k delete networkpolicy egress-dns
k delete pod egress-test
```

Drill 5: Port-Specific Ingress (Target: 3 minutes)
```bash
# Create pod
k run web --image=nginx --labels="app=web"

# Allow only ports 80 and 443
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-ports
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 80
          protocol: TCP
        - port: 443
          protocol: TCP
EOF

# Verify
k describe networkpolicy web-ports

# Cleanup
k delete networkpolicy web-ports
k delete pod web
```

Drill 6: IP Block Policy (Target: 3 minutes)
```bash
# Create pod
k run ip-test --image=nginx --labels="app=ip-test"

# Create policy with IP block
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ip-block
spec:
  podSelector:
    matchLabels:
      app: ip-test
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8
            except:
              - 10.0.1.0/24
EOF

# Verify
k describe networkpolicy ip-block

# Cleanup
k delete networkpolicy ip-block
k delete pod ip-test
```

Drill 7: Combined AND Selector (Target: 4 minutes)
```bash
# Create namespace
k create namespace restricted
k label namespace restricted name=restricted

# Create pod
k run secure --image=nginx --labels="app=secure"

# Create policy with AND logic
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: and-policy
spec:
  podSelector:
    matchLabels:
      app: secure
  policyTypes:
    - Ingress
  ingress:
    - from:
        # AND: must be frontend pod IN restricted namespace
        - podSelector:
            matchLabels:
              role: frontend
          namespaceSelector:
            matchLabels:
              name: restricted
EOF

# Verify
k describe networkpolicy and-policy

# Cleanup
k delete networkpolicy and-policy
k delete pod secure
k delete namespace restricted
```

Drill 8: Challenge - Complete Network Isolation
Without looking at solutions:
- Create namespace `secure` with label `zone=secure`
- Create pods: `app` (label: tier=app), `db` (label: tier=db)
- Create deny-all ingress policy
- Allow `app` to receive traffic from any pod in cluster
- Allow `db` to receive traffic only from `app` pods, port 5432
- Verify with `kubectl describe`
- Cleanup everything

```bash
# YOUR TASK: Complete in under 7 minutes
```

Solution
```bash
# 1. Create namespace
k create namespace secure
k label namespace secure zone=secure

# 2. Create pods
k run app -n secure --image=nginx --labels="tier=app"
k run db -n secure --image=nginx --labels="tier=db"

# 3. Deny all ingress
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: secure
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# 4. Allow app from anywhere
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app
  namespace: secure
spec:
  podSelector:
    matchLabels:
      tier: app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector: {}
EOF

# 5. Allow db from app only
cat << 'EOF' | k apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db
  namespace: secure
spec:
  podSelector:
    matchLabels:
      tier: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: app
      ports:
        - port: 5432
EOF

# 6. Verify
k get networkpolicy -n secure
k describe networkpolicy -n secure

# 7. Cleanup
k delete namespace secure
```

Next Module
Module 3.7: CNI & Cluster Networking - Understanding the Container Network Interface.