Module 1.4: Node Metadata Protection
Complexity: MEDIUM (cloud-specific, security-critical skill)
Time to Complete: 30-35 minutes
Prerequisites: Module 1.1 (Network Policies), understanding of cloud providers
What You’ll Be Able to Do
After completing this module, you will be able to:
- Create NetworkPolicies that block pod access to cloud metadata endpoints
- Audit cluster workloads for metadata service exposure risks
- Implement IMDS v2 enforcement and metadata service restrictions on cloud providers
- Trace privilege escalation paths from metadata credentials to cloud resource access
Why This Module Matters
Cloud provider metadata services (like AWS’s 169.254.169.254) expose sensitive information: IAM credentials, instance identity, and configuration data. A compromised pod can query this endpoint and potentially escalate privileges or access cloud resources.
This is a favorite attack vector. The 2019 Capital One breach exploited exactly this vulnerability.
The Metadata Attack
```
METADATA SERVICE ATTACK VECTOR

┌─────────────────┐
│  Compromised    │
│  Application    │
│      Pod        │
└────────┬────────┘
         │
         │ curl http://169.254.169.254/latest/meta-data/
         ▼
┌──────────────────────────────────────────┐
│             METADATA SERVICE             │
│                                          │
│  Returns:                                │
│    • Instance ID                         │
│    • Private IP                          │
│    • IAM role credentials                │
│    • User data (may contain secrets!)    │
│    • VPC configuration                   │
└──────────────────────────────────────────┘

Impact:
  ⚠️ Attacker gets temporary AWS credentials
  ⚠️ Can access S3 buckets, databases, etc.
  ⚠️ Lateral movement through cloud resources
```

Metadata Endpoints by Provider
| Cloud Provider | Metadata Endpoint | Credential Path |
|---|---|---|
| AWS | 169.254.169.254 | /latest/meta-data/iam/security-credentials/ |
| GCP | 169.254.169.254 | /computeMetadata/v1/ |
| Azure | 169.254.169.254 | /metadata/identity/oauth2/token |
| DigitalOcean | 169.254.169.254 | /metadata/v1/ |
All use the same IP: 169.254.169.254 (link-local address)
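Because every provider uses the same link-local address, an audit check only needs to recognize that one IP (or, more broadly, the 169.254.0.0/16 range). A minimal sketch in plain POSIX shell; the helper names are illustrative, not from any standard tool:

```shell
#!/bin/sh
# Classify an IP: the well-known metadata endpoint, the broader
# 169.254.0.0/16 link-local range, or an ordinary routable address.

is_metadata_ip() {
  [ "$1" = "169.254.169.254" ]
}

is_link_local() {
  case "$1" in
    169.254.*) return 0 ;;   # inside 169.254.0.0/16
    *)         return 1 ;;
  esac
}

for ip in 169.254.169.254 169.254.0.1 10.0.0.5; do
  if is_metadata_ip "$ip"; then
    echo "$ip: metadata endpoint"
  elif is_link_local "$ip"; then
    echo "$ip: link-local (not the metadata endpoint)"
  else
    echo "$ip: routable"
  fi
done
# prints:
# 169.254.169.254: metadata endpoint
# 169.254.0.1: link-local (not the metadata endpoint)
# 10.0.0.5: routable
```

This is why a /32 block on 169.254.169.254 covers every major provider at once, while widening to 169.254.0.0/16 catches the whole link-local range at the cost of possibly breaking other link-local services.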
Stop and think: An attacker compromises an application pod and runs `curl http://169.254.169.254/latest/meta-data/iam/security-credentials/`. They get temporary AWS credentials with S3 read access. Trace the full attack path: what can they do next, and how far can they go?
Protection Method 1: NetworkPolicy
Block egress to the metadata IP using a NetworkPolicy:
```yaml
# Block access to metadata service
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata
  namespace: production
spec:
  podSelector: {}  # All pods in namespace
  policyTypes:
    - Egress
  egress:
    # Allow all EXCEPT metadata
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
```

Allow DNS with Metadata Block
```yaml
# More complete: block metadata but allow DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Allow all other traffic except metadata
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
```

Protection Method 2: iptables on Nodes
Configure iptables rules on each node to block metadata access:
```shell
# Block metadata access from pods (run on each node)
iptables -A OUTPUT -d 169.254.169.254 -j DROP

# Or more specifically, block from the pod network
iptables -I FORWARD -s 10.244.0.0/16 -d 169.254.169.254 -j DROP

# Make persistent (varies by OS)
iptables-save > /etc/iptables/rules.v4
```

DaemonSet for iptables Rules
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metadata-blocker
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: metadata-blocker
  template:
    metadata:
      labels:
        app: metadata-blocker
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: blocker
          image: alpine
          command:
            - /bin/sh
            - -c
            - |
              apk add iptables
              iptables -C FORWARD -d 169.254.169.254 -j DROP 2>/dev/null || \
                iptables -I FORWARD -d 169.254.169.254 -j DROP
              sleep infinity
          securityContext:
            privileged: true
            capabilities:
              add: ["NET_ADMIN"]
      tolerations:
        - operator: "Exists"
```

What would happen if: You set `--http-put-response-hop-limit 1` on your EC2 instances with IMDSv2. A pod running with `hostNetwork: true` tries to access the metadata service. Does the hop limit protect you? Why or why not?
Protection Method 3: Cloud Provider Features
AWS IMDSv2 (Recommended)
AWS Instance Metadata Service v2 requires a session token, making direct pod access harder:
```shell
# IMDSv2 requires a PUT request first to get a token
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Then use the token in subsequent requests
curl -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/
```

Configure nodes to require IMDSv2:
```shell
# AWS CLI to enforce IMDSv2 on an instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-1234567890abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1
```

GCP Metadata Concealment
```shell
# Protect node metadata on a GKE node pool by enabling the GKE
# metadata server (Workload Identity), which intercepts pod
# requests and hides node credentials
gcloud container node-pools update POOL_NAME \
  --cluster=CLUSTER_NAME \
  --workload-metadata=GKE_METADATA
```

Azure Instance Metadata Service (IMDS)
Azure requires a specific header:
```shell
# Azure IMDS requires the Metadata header
curl -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01"
```

Testing Metadata Access
Verify Pod Can’t Access Metadata
```shell
# Create a test pod
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never -- \
  curl -s --connect-timeout 2 http://169.254.169.254/latest/meta-data/

# Expected: connection timeout or refused
# If you see instance metadata, protection isn't working!
```

Check NetworkPolicy is Applied
```shell
# List network policies
kubectl get networkpolicies -n production

# Describe a specific policy
kubectl describe networkpolicy block-metadata -n production

# Check if the pod is selected by the policy
kubectl get pod test-pod -n production --show-labels
```

Complete Security Example
```yaml
# Apply to every namespace that runs workloads
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-metadata
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Allow cluster-internal communication
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
    # Allow external but block metadata
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.0.0/16  # Block entire link-local range
```

Real Exam Scenarios
Scenario 1: Block Metadata Access for Namespace
```shell
# Create a NetworkPolicy to block metadata
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-cloud-metadata
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
EOF
```
```shell
# Verify
kubectl get networkpolicy block-cloud-metadata -n production
```

Scenario 2: Test and Verify Block
```shell
# Create a test pod
kubectl run metadata-test --image=curlimages/curl -n production --rm -it --restart=Never -- \
  curl -s --connect-timeout 3 http://169.254.169.254/latest/meta-data/ || echo "BLOCKED (expected)"
```

Scenario 3: Allow Specific Pod Access
```yaml
# Most pods are blocked, but the monitoring pod needs metadata
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-metadata
  namespace: monitoring
spec:
  podSelector:
    matchLabels:
      app: cloud-monitor
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0  # All traffic, including metadata
```

Pause and predict: You block metadata access for the `production` namespace with a NetworkPolicy, but you don’t apply it to `kube-system`. Why might this be intentional, and what risk does it introduce?
Defense in Depth
```
METADATA PROTECTION LAYERS
──────────────────────────
Layer 1: NetworkPolicy
  └── Block egress to 169.254.169.254
Layer 2: Cloud provider IMDSv2
  └── Require session tokens
Layer 3: Node-level iptables
  └── Block at the network level
Layer 4: Pod Security
  └── Restrict host networking
Layer 5: Minimal IAM
  └── Node roles with least privilege

Best practice: use MULTIPLE layers
```

Did You Know?
Section titled “Did You Know?”-
The 2019 Capital One breach exposed 100 million customer records through SSRF to the metadata service. The attacker obtained IAM credentials and accessed S3 buckets.
-
169.254.0.0/16 is link-local. It’s reserved for local network communication and never routed on the internet. Cloud providers use it for metadata because it’s accessible from any instance without routing.
-
Kubernetes itself uses metadata on cloud providers for node information. Blocking system components from metadata can break cluster functionality.
-
AWS IMDSv2 with hop limit 1 prevents containers from reaching metadata because the request goes through multiple network hops (container → node → metadata service).
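The hop-limit mechanism above can be modeled in a few lines: the IMDSv2 token response leaves the metadata service with a TTL equal to the configured hop limit, and each network hop decrements it. A toy sketch, not real networking, purely illustrative:

```shell
#!/bin/sh
# Toy model of the IMDSv2 hop limit: the response survives only if the
# configured hop limit covers the number of hops back to the caller.

response_survives() {
  hop_limit=$1   # value of --http-put-response-hop-limit
  hops=$2        # hops between metadata service and the caller
  [ "$hop_limit" -ge "$hops" ]
}

# A node process (or a hostNetwork pod) is 1 hop away
response_survives 1 1 && echo "node: token received" || echo "node: dropped"
# prints: node: token received

# A pod on the pod network adds a hop through the node's bridge
response_survives 1 2 && echo "pod: token received" || echo "pod: dropped"
# prints: pod: dropped
```

This also answers the earlier "what would happen if" question: a `hostNetwork: true` pod sits at one hop, just like the node, so the hop limit alone does not protect against it.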
Common Mistakes
| Mistake | Why It Hurts | Solution |
|---|---|---|
| Forgetting DNS with egress policy | Pods can’t resolve names | Always allow DNS egress |
| Blocking metadata for kube-system | Breaks cloud integrations | Exempt system namespaces carefully |
| Only using NetworkPolicy | Not all CNIs enforce it | Use multiple protection layers |
| Testing from wrong namespace | Policy not applied there | Test from namespace with policy |
| Blocking entire link-local range | May break other services | Start with just 169.254.169.254/32 |
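Several of these mistakes can be caught by auditing which namespaces lack a metadata-blocking policy. A sketch in plain shell: the namespace and policy lists here are hard-coded sample data standing in for live `kubectl get ns` and `kubectl get netpol -A` output, and the names are illustrative:

```shell
#!/bin/sh
# Flag workload namespaces that have no metadata-blocking NetworkPolicy.
# Sample data stands in for live kubectl output.

namespaces="production staging kube-system"
# "namespace/policy" pairs, as they would come from kubectl get netpol -A
policies="production/block-metadata monitoring/allow-monitoring-metadata"

for ns in $namespaces; do
  case " $policies " in
    *" $ns/block-metadata "*)
      echo "$ns: protected" ;;
    *)
      # kube-system is often exempt on purpose (cloud integrations)
      echo "$ns: NO metadata-blocking policy" ;;
  esac
done
# prints:
# production: protected
# staging: NO metadata-blocking policy
# kube-system: NO metadata-blocking policy
```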
- A penetration tester reports they obtained temporary AWS credentials from inside a pod by running `curl http://169.254.169.254/latest/meta-data/iam/security-credentials/node-role`. Using those credentials, they listed all S3 buckets in the account. What IP did they target, and what two layers of defense would have prevented this?

  Answer: The IP 169.254.169.254 is the cloud metadata service’s link-local address, used by all major cloud providers (AWS, GCP, Azure). Two layers of defense: (1) a NetworkPolicy with egress rules using `ipBlock` with `except: [169.254.169.254/32]`, blocking pods from reaching the metadata service at the network level; (2) AWS IMDSv2 enforcement with `--http-tokens required` and `--http-put-response-hop-limit 1`, which requires a session token that containers on the pod network can’t obtain because their requests traverse an extra network hop. Defense in depth means using both.
- You roll out metadata-blocking NetworkPolicies across your namespaces, including `kube-system`. The next day, the cloud provider’s node autoscaler stops working. Investigation reveals a system pod in `kube-system` needs metadata access to function. How do you fix this without compromising production security?

  Answer: Don’t apply the metadata-blocking NetworkPolicy to `kube-system`: system components like cloud controller managers, node autoscalers, and CSI drivers legitimately need metadata access to interact with cloud APIs. Apply metadata blocking only to workload namespaces (`production`, `staging`, etc.) and leave system namespaces unblocked. For additional security on system namespaces, use IMDSv2 enforcement and ensure node IAM roles follow least privilege. This is an intentional trade-off: system components need metadata, application pods don’t.
- Your cluster runs on AWS with IMDSv2 enforced (`--http-tokens required`, `--http-put-response-hop-limit 1`). A security engineer argues that NetworkPolicies for metadata blocking are now redundant. Is she correct?

  Answer: She is partially correct, but not entirely. IMDSv2 with a hop limit of 1 prevents most container-based metadata attacks because pod network traffic traverses an extra hop. However, pods with `hostNetwork: true` share the node’s network namespace and can reach the metadata service as if they were the node itself (only one hop). Also, IMDSv2 is AWS-specific: if workloads move to GCP or Azure, you lose that protection. NetworkPolicies provide cloud-agnostic defense and catch edge cases. Best practice is defense in depth: use both IMDSv2 AND NetworkPolicies.
- You write a NetworkPolicy to block metadata but forget to include a DNS egress rule. Your application pods start failing with “could not resolve host” errors even though they never accessed the metadata service. Explain the connection between metadata blocking and DNS, and write the fix.

  Answer: Once a NetworkPolicy with `policyTypes: [Egress]` selects a pod, all egress traffic not explicitly allowed is denied. This includes DNS queries to kube-dns (UDP port 53). Even though DNS has nothing to do with metadata, the egress policy blocks ALL traffic except what you whitelist. The fix is to add a DNS egress rule: allow UDP/TCP port 53 to pods labeled `k8s-app: kube-dns` in any namespace. A complete metadata-blocking policy needs both the DNS allow rule AND the `ipBlock` with `except: [169.254.169.254/32]` for all other traffic.
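Written out, that fix looks like the following policy; a sketch assuming CoreDNS/kube-dns carries the standard `k8s-app: kube-dns` label (some distributions use a different label):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata-allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # The missing piece: explicitly allow DNS to kube-dns
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Everything else, except the metadata endpoint
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
```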
Hands-On Exercise
Task: Block metadata access and verify protection.
```shell
# Set up a namespace
kubectl create namespace metadata-test

# Step 1: Verify metadata is accessible (before protection)
kubectl run check-before --image=curlimages/curl -n metadata-test --rm -it --restart=Never -- \
  curl -s --connect-timeout 3 http://169.254.169.254/ && echo "ACCESSIBLE" || echo "BLOCKED"

# Note: in non-cloud environments, you'll see "BLOCKED" already
```
```shell
# Step 2: Apply the metadata-blocking NetworkPolicy
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata
  namespace: metadata-test
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to: []
      ports:
        - port: 53
          protocol: UDP
    # Allow all except metadata
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
EOF
```
```shell
# Step 3: Verify the policy exists
kubectl get networkpolicy -n metadata-test
kubectl describe networkpolicy block-metadata -n metadata-test

# Step 4: Test that metadata is blocked
kubectl run check-after --image=curlimages/curl -n metadata-test --rm -it --restart=Never -- \
  curl -s --connect-timeout 3 http://169.254.169.254/ && echo "ACCESSIBLE" || echo "BLOCKED"

# Step 5: Verify other egress still works
kubectl run check-external --image=curlimages/curl -n metadata-test --rm -it --restart=Never -- \
  curl -s --connect-timeout 3 https://kubernetes.io -o /dev/null -w "%{http_code}" && echo " OK"
```
```shell
# Clean up
kubectl delete namespace metadata-test
```

Success criteria: the metadata IP is blocked but external access still works.
Summary
Metadata Service Risk:
- Exposes IAM credentials and instance data
- Accessible from any pod by default
- Major attack vector (Capital One breach)
Protection Methods:
- NetworkPolicy blocking 169.254.169.254
- Cloud provider IMDSv2 enforcement
- Node-level iptables rules
- Pod Security (no hostNetwork)
Best Practices:
- Apply protection to all workload namespaces
- Remember to allow DNS egress
- Use multiple protection layers
- Test that blocks are effective
Exam Tips:
- Know how to write the NetworkPolicy from memory
- Understand ipBlock with except syntax
- Remember DNS is UDP port 53
Next Module
Module 1.5: GUI Security - Securing Kubernetes Dashboard and web UIs.