Module 3.2: Endpoints & EndpointSlices
Complexity: [MEDIUM] - Understanding service mechanics
Time to Complete: 30-40 minutes
Prerequisites: Module 3.1 (Services)
What You’ll Be Able to Do
After this module, you will be able to:
- Explain how Endpoints and EndpointSlices connect Services to Pods
- Debug “no endpoints” errors by checking label selectors, pod readiness, and endpoint status
- Create manual Endpoints for external services that live outside the cluster
- Diagnose endpoint churn and its impact on service stability
- Understand EndpointSlices and their benefits
- Handle headless services and their unique endpoint behavior
Why This Module Matters
When you create a Service, Kubernetes automatically creates an Endpoints object that tracks which pod IPs should receive traffic. Understanding endpoints is crucial for debugging service issues—when a service has no endpoints, traffic goes nowhere.
The CKA exam tests your ability to troubleshoot services, and endpoint inspection is your primary debugging tool. You’ll also encounter EndpointSlices, the newer, more scalable version of Endpoints.
The Phone Book Analogy
If a Service is a phone number (stable), Endpoints are the phone book entries that map that number to actual people (pods). When you call the number, the phone system looks up who’s available in the book and connects you. Kubernetes does the same—the Service IP is looked up in Endpoints to find available pod IPs.
Did You Know?
- Endpoints can be huge: In clusters with thousands of pods, a single Endpoints object can contain thousands of IP addresses. This caused performance issues, leading to EndpointSlices.
- EndpointSlices are the future: Since Kubernetes 1.21, EndpointSlices are the default. Each slice holds up to 100 endpoints by default, making updates much more efficient.
- Controllers watch endpoints: Many controllers (like Ingress controllers) watch Endpoints to know where to route traffic. When endpoints change, routing tables update automatically.
Part 1: Endpoints Fundamentals
1.1 What Are Endpoints?
Endpoints are the glue between Services and Pods:

```
┌────────────────────────────────────────────────────────────────┐
│ Service → Endpoints → Pods                                     │
│                                                                │
│ ┌──────────────────┐     ┌──────────────────┐                  │
│ │ Service          │     │ Endpoints        │                  │
│ │ web-svc          │     │ web-svc          │                  │
│ │                  │     │                  │                  │
│ │ selector:        │────►│ subsets:         │                  │
│ │   app: web       │     │ - addresses:     │                  │
│ │                  │     │   - 10.244.1.5   │───► Pod 1        │
│ │ ports:           │     │   - 10.244.2.8   │───► Pod 2        │
│ │ - port: 80       │     │   - 10.244.1.12  │───► Pod 3        │
│ │                  │     │   ports:         │                  │
│ │                  │     │   - port: 8080   │                  │
│ └──────────────────┘     └──────────────────┘                  │
│                                                                │
│ Endpoints auto-created and updated by endpoint controller      │
└────────────────────────────────────────────────────────────────┘
```

1.2 Endpoint Lifecycle
```
┌────────────────────────────────────────────────────────────────┐
│ Endpoint Controller                                            │
│                                                                │
│ Watches: Pods and Services                                     │
│ Updates: Endpoints objects                                     │
│                                                                │
│ Pod Created (label: app=web)                                   │
│        │                                                       │
│        ▼                                                       │
│ Controller finds Service with selector app=web                 │
│        │                                                       │
│        ▼                                                       │
│ Adds Pod IP to Service's Endpoints                             │
│                                                                │
│ Pod Deleted or Fails Readiness                                 │
│        │                                                       │
│        ▼                                                       │
│ Removes Pod IP from Endpoints                                  │
└────────────────────────────────────────────────────────────────┘
```

1.3 Viewing Endpoints
```
# List all endpoints
k get endpoints
k get ep          # Short form

# Get specific endpoint
k get endpoints web-svc

# Detailed view
k describe endpoints web-svc

# Get endpoints as YAML
k get endpoints web-svc -o yaml

# Wide output with pod IPs
k get endpoints -o wide
```

1.4 Endpoint Structure
```yaml
# What an Endpoints object looks like
apiVersion: v1
kind: Endpoints
metadata:
  name: web-svc          # Must match Service name
  namespace: default
subsets:
- addresses:             # Ready pod IPs
  - ip: 10.244.1.5
    nodeName: worker-1
    targetRef:
      kind: Pod
      name: web-abc123
      namespace: default
  - ip: 10.244.2.8
    nodeName: worker-2
    targetRef:
      kind: Pod
      name: web-def456
      namespace: default
  notReadyAddresses:     # Pods not passing readiness probe
  - ip: 10.244.1.12
    nodeName: worker-1
    targetRef:
      kind: Pod
      name: web-ghi789
      namespace: default
  ports:
  - port: 8080
    protocol: TCP
```

Pause and predict: You have a Service with 3 endpoints. You add a readiness probe to the deployment that checks `/healthz`, but the endpoint on your app returns 500 for that path. What happens to the Service’s endpoints, and can clients still reach the app?
Part 2: Debugging with Endpoints
2.1 No Endpoints = No Traffic

```
# Service exists but has no endpoints
k get svc web-svc
# NAME      TYPE        CLUSTER-IP     PORT(S)
# web-svc   ClusterIP   10.96.45.123   80/TCP

k get endpoints web-svc
# NAME      ENDPOINTS   AGE
# web-svc   <none>      5m    ← Problem!
```

2.2 Common Causes of Missing Endpoints
| Symptom | Cause | Debug Command | Solution |
|---|---|---|---|
| `<none>` endpoints | No pods match selector | `k get pods --show-labels` | Fix selector or pod labels |
| `<none>` endpoints | Pods not running | `k get pods` | Fix pod issues |
| `<none>` endpoints | Pods in wrong namespace | `k get pods -A` | Check namespace |
| Partial endpoints | Some pods not ready | `k describe endpoints` | Check readiness probes |
2.3 Debugging Workflow
```
# Step 1: Check if endpoints exist
k get endpoints web-svc
# If <none>, proceed to step 2

# Step 2: Check service selector
k get svc web-svc -o yaml | grep -A5 selector
# selector:
#   app: web

# Step 3: Find pods with matching labels
k get pods --selector=app=web
# Should list pods backing the service

# Step 4: If no pods found, check what labels pods have
k get pods --show-labels
# Compare with service selector

# Step 5: If pods exist but not in endpoints, check pod status
k get pods
# Look for pods that aren't Running

# Step 6: Check for readiness probe failures
k describe pod <pod-name> | grep -A10 Readiness
```

2.4 Endpoints with NotReady Pods
```
# Describe shows both ready and not-ready addresses
k describe endpoints web-svc

# Output:
# Name:     web-svc
# Subsets:
#   Addresses:          10.244.1.5,10.244.2.8
#   NotReadyAddresses:  10.244.1.12
#   Ports:
#     Name     Port  Protocol
#     ----     ----  --------
#     <unset>  8080  TCP
```

Pods in `NotReadyAddresses` are not receiving traffic.
Part 3: Manual Endpoints
3.1 When to Use Manual Endpoints
Create endpoints manually when pointing to:
- External databases outside Kubernetes
- Services in other clusters
- IP-based resources that aren’t pods
3.2 Creating Manual Endpoints
```yaml
# Step 1: Create service WITHOUT selector
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
    targetPort: 5432
  # No selector! This is intentional.
---
# Step 2: Create Endpoints with same name
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db       # Must match service name exactly
subsets:
- addresses:
  - ip: 192.168.1.100     # External database IP
  - ip: 192.168.1.101     # Backup database IP
  ports:
  - port: 5432
```

```
# Apply both
k apply -f external-db.yaml

# Verify
k get svc,endpoints external-db

# Now pods can reach external DB via:
# external-db.default.svc.cluster.local:5432
```

3.3 Manual Endpoints Use Cases
```yaml
# Example: External API endpoint
apiVersion: v1
kind: Service
metadata:
  name: external-api
spec:
  ports:
  - port: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-api
subsets:
- addresses:
  - ip: 52.84.123.45   # External API server
  ports:
  - port: 443
```

Stop and think: Imagine a Service backed by 5,000 pods. Every time a single pod is added or removed, the entire Endpoints object (containing all 5,000 IPs) must be sent to every node watching it. What problem does this create, and how would you design a better solution?
Part 4: EndpointSlices
Section titled “Part 4: EndpointSlices”4.1 Why EndpointSlices?
```
┌────────────────────────────────────────────────────────────────┐
│ Endpoints Problem                                              │
│                                                                │
│ Large Service with 5000 pods                                   │
│                                                                │
│ Single Endpoints object:                                       │
│ - Contains all 5000 IPs                                        │
│ - Any pod change = entire object update                        │
│ - Large payload sent to all watchers                           │
│ - API server and etcd strain                                   │
└────────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────────┐
│ EndpointSlices Solution                                        │
│                                                                │
│ Same 5000 pods split across 50 slices                          │
│                                                                │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐     ┌─────────┐            │
│ │ Slice 1 │ │ Slice 2 │ │ Slice 3 │ ... │Slice 50 │            │
│ │ 100 IPs │ │ 100 IPs │ │ 100 IPs │     │ 100 IPs │            │
│ └─────────┘ └─────────┘ └─────────┘     └─────────┘            │
│                                                                │
│ Pod change = update only affected slice                        │
│ Small payload, minimal API server load                         │
└────────────────────────────────────────────────────────────────┘
```

4.2 EndpointSlice Structure
```yaml
# What an EndpointSlice looks like
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-svc-abc12   # Auto-generated name
  labels:
    kubernetes.io/service-name: web-svc
addressType: IPv4
ports:
- name: ""
  port: 8080
  protocol: TCP
endpoints:
- addresses:
  - 10.244.1.5
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: worker-1
  targetRef:
    kind: Pod
    name: web-abc123
    namespace: default
- addresses:
  - 10.244.2.8
  conditions:
    ready: true
  nodeName: worker-2
```

4.3 Viewing EndpointSlices
```
# List all EndpointSlices
k get endpointslices
# Note: unlike Endpoints (ep), EndpointSlices have no official short name

# Get EndpointSlices for a service
k get endpointslices -l kubernetes.io/service-name=web-svc

# Detailed view
k describe endpointslice web-svc-abc12

# Get as YAML
k get endpointslice web-svc-abc12 -o yaml
```

4.4 Endpoints vs EndpointSlices Comparison
| Aspect | Endpoints | EndpointSlices |
|---|---|---|
| Max entries | Unlimited (but problematic) | 100 per slice |
| Update scope | Entire object | Single slice |
| API version | v1 | discovery.k8s.io/v1 |
| Default since | Always | Kubernetes 1.21 |
| Dual-stack support | Limited | Full IPv4/IPv6 |
| Topology hints | No | Yes |
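The dual-stack row above can be made concrete. Below is a sketch of what an IPv6 EndpointSlice might look like; the name and addresses are illustrative, not taken from this module:

```yaml
# Hypothetical IPv6 slice for the same service (illustrative values)
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-svc-ipv6-xyz99        # assumed auto-generated name
  labels:
    kubernetes.io/service-name: web-svc
addressType: IPv6                 # each slice handles exactly one address family
ports:
- port: 8080
  protocol: TCP
endpoints:
- addresses:
  - fd00::1:5                     # example pod IPv6 address
  conditions:
    ready: true
```

In a dual-stack cluster, the controller manages separate IPv4 and IPv6 slices for the same Service, because `addressType` is fixed per slice.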
What would happen if: You set `clusterIP: None` on a Service, and a pod does an `nslookup` on that service name. Instead of getting one IP, it gets three. Why is this useful, and when would you NOT want this behavior?
Part 5: Headless Services
5.1 What Is a Headless Service?
A headless service has no ClusterIP. DNS returns pod IPs directly.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None   # This makes it headless
  selector:
    app: web
  ports:
  - port: 80
```

5.2 Headless Service Behavior

```
┌────────────────────────────────────────────────────────────────┐
│ Regular vs Headless Service                                    │
│                                                                │
│ Regular Service (ClusterIP: 10.96.45.123)                      │
│ ┌─────────────────────────────────────────────────────────┐    │
│ │ DNS: web-svc.default.svc → 10.96.45.123 (Service IP)    │    │
│ │ Client → Service IP → kube-proxy → random Pod           │    │
│ └─────────────────────────────────────────────────────────┘    │
│                                                                │
│ Headless Service (clusterIP: None)                             │
│ ┌─────────────────────────────────────────────────────────┐    │
│ │ DNS: web-svc.default.svc →                              │    │
│ │      10.244.1.5  (Pod 1)                                │    │
│ │      10.244.2.8  (Pod 2)                                │    │
│ │      10.244.1.12 (Pod 3)                                │    │
│ │ Client gets ALL pod IPs, chooses one itself             │    │
│ └─────────────────────────────────────────────────────────┘    │
└────────────────────────────────────────────────────────────────┘
```

5.3 Headless Service Use Cases
| Use Case | Why Headless? |
|---|---|
| StatefulSets | Need to address specific pods (pod-0, pod-1) |
| Client-side load balancing | Client needs all IPs to implement custom balancing |
| Service discovery | Discover all backend instances |
| Database clusters | Need direct connection to specific node |
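The StatefulSet row is the canonical case. A minimal sketch of how a StatefulSet points at a headless Service via `serviceName` — names like `kafka-headless` and the container image are illustrative, not part of this module:

```yaml
# Illustrative pairing of a headless Service and a StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless          # assumed name
spec:
  clusterIP: None               # headless: DNS returns per-pod A records
  selector:
    app: kafka
  ports:
  - port: 9092
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless   # ties per-pod DNS names to this Service
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: apache/kafka     # illustrative image
        ports:
        - containerPort: 9092
```

With this pairing, each pod becomes individually resolvable (e.g., `kafka-0.kafka-headless.default.svc.cluster.local`).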
5.4 Headless Service Endpoints
```
# Endpoints still track pods
k get endpoints headless-svc
# NAME           ENDPOINTS
# headless-svc   10.244.1.5,10.244.2.8,10.244.1.12

# DNS returns multiple A records
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup headless-svc

# Output:
# Name:      headless-svc.default.svc.cluster.local
# Address:   10.244.1.5
# Address:   10.244.2.8
# Address:   10.244.1.12
```

Part 6: Service Topology and Topology Hints
6.1 Topology-Aware Routing
EndpointSlices support topology hints for zone-aware routing:

```yaml
# EndpointSlice with hints
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-svc-abc12
endpoints:
- addresses:
  - 10.244.1.5
  zone: us-east-1a        # Pod is in this zone
  hints:
    forZones:
    - name: us-east-1a    # Prefer traffic from same zone
```

6.2 Enabling Topology-Aware Hints

```yaml
# Service with topology hints
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  annotations:
    service.kubernetes.io/topology-mode: Auto   # Enable hints
spec:
  selector:
    app: web
  ports:
  - port: 80
```

Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Wrong endpoint name | Endpoints not associated with service | Name must exactly match service name |
| Selector typo | No endpoints | Double-check label selectors |
| Missing targetRef | Can’t trace endpoint to pod | Include targetRef in manual endpoints |
| Ignoring NotReadyAddresses | Think pods are healthy | Check describe output for not-ready |
| Confusing endpoints/endpointslices | Get wrong data | Use both commands for debugging |
- You deploy a new version of your app and `kubectl get endpoints my-svc` suddenly shows `<none>`, even though `kubectl get pods` shows 3 pods in Running state. The previous version worked fine. What is your debugging process?

  Answer: First, check if the pods are actually READY (not just Running) with `k get pods` -- look at the READY column. If they show `0/1`, the new version likely has a failing readiness probe. Second, verify the pod labels still match the Service selector: the new deployment might have changed labels. Run `k get svc my-svc -o yaml | grep -A5 selector` and compare with `k get pods --show-labels`. A common cause during version updates is changing the label (e.g., adding `version: v2`) while the Service selector still expects the old labels.
- Your company has a PostgreSQL database running on a VM at 192.168.1.50, outside the Kubernetes cluster. You want pods to reach it as `external-db.default.svc.cluster.local:5432`. How do you set this up, and what happens if the database IP changes?

  Answer: Create a Service without a selector and a matching Endpoints object. The Service defines `port: 5432` with no selector, and the Endpoints object (with the exact same name `external-db`) lists `192.168.1.50` in its addresses. If the database IP changes, you must manually update the Endpoints object -- there is no automatic tracking since there is no selector. For a more maintainable approach, consider using an ExternalName Service that points to a DNS name instead of a raw IP.
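The ExternalName alternative mentioned in that answer could look like the following sketch; the DNS name `db.example.com` is a placeholder, not something from this module:

```yaml
# Sketch: ExternalName Service pointing at a DNS name instead of a raw IP
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder; use your database's real DNS name
```

Note that ExternalName works via a DNS CNAME record: no Endpoints object is involved, and kube-proxy does no port remapping.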
- You run `kubectl describe endpoints my-svc` and see 2 IPs under `Addresses` and 1 IP under `NotReadyAddresses`. A colleague says “just delete the not-ready pod to fix it.” Is this the right approach? What should you investigate first?

  Answer: Deleting the pod is a band-aid, not a fix. The pod is in NotReadyAddresses because its readiness probe is failing, which means the pod is alive but not healthy enough to serve traffic. First investigate WHY the readiness probe fails: check `k describe pod` for probe failure events, check the pod logs for errors, and verify the readiness endpoint is correct. The pod might be overloaded, have a configuration error, or be waiting for a dependency. Kubernetes is correctly protecting users from receiving traffic on an unhealthy pod -- that is exactly what readiness probes are for.
- Your cluster has a Service backed by 3,000 pods. An SRE reports that every time a rolling update occurs, the API server’s memory spikes and kube-proxy takes 10+ seconds to update rules. What is causing this, and what Kubernetes feature addresses it?

  Answer: With 3,000 pods, the single Endpoints object is enormous. Every pod change during a rolling update requires the entire object to be rewritten and sent to every node's kube-proxy. EndpointSlices solve this by splitting endpoints into chunks of 100 (so ~30 slices). When a pod changes, only the affected slice (~100 entries) is updated and transmitted, reducing API server load and kube-proxy processing time by roughly 30x. EndpointSlices have been the default since Kubernetes 1.21.
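The ~30x figure in that answer is simple slice arithmetic; a quick sketch, with the pod count and default slice size taken from the scenario:

```shell
# Back-of-envelope: how many slices, and how much smaller is each update?
PODS=3000
SLICE_SIZE=100                         # default max endpoints per slice
SLICES=$(( (PODS + SLICE_SIZE - 1) / SLICE_SIZE ))
REDUCTION=$(( PODS / SLICE_SIZE ))     # entries rewritten per pod change: 3000 -> 100
echo "slices=$SLICES reduction=${REDUCTION}x"   # → slices=30 reduction=30x
```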
- You are deploying a StatefulSet for a Kafka cluster where each broker needs to be individually addressable. A regular ClusterIP Service gives you a single virtual IP. How do you configure DNS so that producers can connect to `kafka-0`, `kafka-1`, and `kafka-2` individually?

  Answer: Create a headless Service (with `clusterIP: None`) and set it as the StatefulSet's `serviceName`. With a headless Service, DNS returns individual A records for each pod rather than a single ClusterIP. Each StatefulSet pod gets a stable DNS name in the format `<pod-name>.<service-name>.<namespace>.svc.cluster.local`, so `kafka-0.kafka-headless.default.svc.cluster.local` always resolves to the specific pod. This is essential for stateful workloads where clients need to connect to specific instances, unlike stateless apps where any backend works.
Hands-On Exercise
Task: Debug a service with endpoint issues.
Steps:
- Create a deployment:

  ```
  k create deployment web --image=nginx --replicas=3
  ```

- Create a service with a wrong selector:

  ```
  cat << 'EOF' | k apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: broken-service
  spec:
    selector:
      app: webapp   # Wrong! Should be "web"
    ports:
    - port: 80
  EOF
  ```

- Observe the problem:

  ```
  k get endpoints broken-service
  # Shows: <none>
  ```

- Debug the issue:

  ```
  # Check what selector the service has
  k get svc broken-service -o yaml | grep -A2 selector

  # Check what labels the pods have
  k get pods --show-labels

  # Find the mismatch!
  ```

- Fix the service:

  ```
  k delete svc broken-service
  k expose deployment web --port=80 --name=broken-service
  ```

- Verify endpoints exist:

  ```
  k get endpoints broken-service
  # Should show 3 pod IPs
  ```

- Check EndpointSlices too:

  ```
  k get endpointslices -l kubernetes.io/service-name=broken-service
  ```

- Test with a headless service:

  ```
  cat << 'EOF' | k apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: headless-web
  spec:
    clusterIP: None
    selector:
      app: web
    ports:
    - port: 80
  EOF

  # Check DNS returns multiple IPs
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup headless-web
  ```

- Cleanup:

  ```
  k delete deployment web
  k delete svc broken-service headless-web
  ```

Success Criteria:
- Can identify missing endpoints
- Can debug selector mismatches
- Understand endpoints vs endpointslices
- Can create headless services
- Understand DNS behavior differences
Practice Drills
Section titled “Practice Drills”Drill 1: Endpoint Inspection (Target: 2 minutes)
```
# Setup
k create deployment drill --image=nginx --replicas=2
k expose deployment drill --port=80

# Check endpoints
k get endpoints drill

# Get detailed endpoint info
k describe endpoints drill

# Get as YAML (see pod IPs)
k get endpoints drill -o yaml

# Check EndpointSlices
k get endpointslices -l kubernetes.io/service-name=drill

# Cleanup
k delete deployment drill
k delete svc drill
```

Drill 2: Debug Missing Endpoints (Target: 3 minutes)
```
# Create deployment
k create deployment debug-app --image=nginx

# Create service with typo in selector
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: debug-svc
spec:
  selector:
    app: debug-apps   # Typo: extra 's'
  ports:
  - port: 80
EOF

# Observe problem
k get endpoints debug-svc
# <none>

# Debug
k get pods --show-labels
k get svc debug-svc -o jsonpath='{.spec.selector}'

# Fix
k delete svc debug-svc
k expose deployment debug-app --port=80 --name=debug-svc

# Verify
k get endpoints debug-svc

# Cleanup
k delete deployment debug-app
k delete svc debug-svc
```

Drill 3: Manual Endpoints (Target: 4 minutes)
```
# Create service without selector
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: external-svc
spec:
  ports:
  - port: 80
EOF

# Check - no endpoints yet
k get endpoints external-svc

# Create manual endpoints
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: external-svc
subsets:
- addresses:
  - ip: 1.2.3.4
  - ip: 5.6.7.8
  ports:
  - port: 80
EOF

# Verify endpoints
k get endpoints external-svc
k describe endpoints external-svc

# Cleanup
k delete svc external-svc
k delete endpoints external-svc
```

Drill 4: Headless Service (Target: 3 minutes)
```
# Create deployment
k create deployment headless-test --image=nginx --replicas=3

# Create headless service
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  clusterIP: None
  selector:
    app: headless-test
  ports:
  - port: 80
EOF

# Verify no ClusterIP
k get svc headless
# CLUSTER-IP should be "None"

# Check endpoints (still exist!)
k get endpoints headless

# Test DNS - should return multiple IPs
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup headless

# Cleanup
k delete deployment headless-test
k delete svc headless
```

Drill 5: EndpointSlice Analysis (Target: 3 minutes)
```
# Create deployment
k create deployment slice-test --image=nginx --replicas=3
k expose deployment slice-test --port=80

# Get EndpointSlice name
k get endpointslices -l kubernetes.io/service-name=slice-test

# Describe it
SLICE_NAME=$(k get endpointslices -l kubernetes.io/service-name=slice-test -o jsonpath='{.items[0].metadata.name}')
k describe endpointslice $SLICE_NAME

# Get YAML
k get endpointslice $SLICE_NAME -o yaml
# Note the endpoints array with conditions

# Cleanup
k delete deployment slice-test
k delete svc slice-test
```

Drill 6: Readiness and Endpoints (Target: 4 minutes)
```
# Create pod with failing readiness probe
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: unready-pod
  labels:
    app: unready
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:
        path: /nonexistent
        port: 80
      initialDelaySeconds: 1
      periodSeconds: 2
EOF

# Create service
k expose pod unready-pod --port=80 --name=unready-svc

# Wait a moment, then check endpoints
sleep 10
k get endpoints unready-svc
# Should be <none> or empty!

# Check why
k describe endpoints unready-svc
# Look for notReadyAddresses

# Check pod status
k get pod unready-pod
# Not ready due to probe

# Cleanup
k delete pod unready-pod
k delete svc unready-svc
```

Drill 7: Scale and Watch Endpoints (Target: 3 minutes)
```
# Create deployment
k create deployment watch-test --image=nginx --replicas=1
k expose deployment watch-test --port=80

# Watch endpoints in terminal 1 (or background)
k get endpoints watch-test -w &

# Scale up and observe endpoints change
k scale deployment watch-test --replicas=5
sleep 5

# Scale down
k scale deployment watch-test --replicas=2
sleep 5

# Bring watch to foreground and stop
fg
# Ctrl+C

# Cleanup
k delete deployment watch-test
k delete svc watch-test
```

Drill 8: Challenge - Complete Endpoint Workflow
Without looking at solutions:
- Create deployment `ep-challenge` with 3 replicas of nginx
- Create a service that intentionally has a wrong selector
- Diagnose why endpoints are empty
- Fix the service
- Create a headless service for the same deployment
- Verify DNS returns 3 IPs for the headless service
- Create manual endpoints for IP 10.0.0.1
- Clean up everything

```
# YOUR TASK: Complete in under 6 minutes
```

Solution
```
# 1. Create deployment
k create deployment ep-challenge --image=nginx --replicas=3

# 2. Create service with wrong selector
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: wrong-svc
spec:
  selector:
    app: wrong
  ports:
  - port: 80
EOF

# 3. Diagnose
k get endpoints wrong-svc
# <none>
k get pods --show-labels
# Labels show app=ep-challenge, not app=wrong

# 4. Fix
k delete svc wrong-svc
k expose deployment ep-challenge --port=80 --name=fixed-svc
k get endpoints fixed-svc
# Shows 3 IPs

# 5. Create headless service
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: headless-challenge
spec:
  clusterIP: None
  selector:
    app: ep-challenge
  ports:
  - port: 80
EOF

# 6. Verify DNS
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup headless-challenge
# Should show 3 IPs

# 7. Manual endpoints
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: manual-svc
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: manual-svc
subsets:
- addresses:
  - ip: 10.0.0.1
  ports:
  - port: 80
EOF
k get endpoints manual-svc

# 8. Cleanup
k delete deployment ep-challenge
k delete svc fixed-svc headless-challenge manual-svc
k delete endpoints manual-svc
```

Next Module
Module 3.3: DNS & CoreDNS - Deep-dive into Kubernetes DNS and service discovery.