
Module 3.1: Services Deep-Dive

Hands-On Lab Available: K8s Cluster · intermediate · 40 min (opens in Killercoda)

Complexity: [MEDIUM] - Core networking concept

Time to Complete: 45-55 minutes

Prerequisites: Module 2.1 (Pods), Module 2.2 (Deployments)


After this module, you will be able to:

  • Create ClusterIP, NodePort, and LoadBalancer services and explain the traffic flow for each
  • Debug service connectivity by checking endpoints, selectors, and kube-proxy rules
  • Trace a request from client through Service to Pod using iptables/IPVS rules
  • Explain how kube-proxy implements service load balancing in iptables and IPVS modes

Pods are ephemeral—they come and go, and their IP addresses change. Services provide stable networking for your applications. Without services, you’d have to track every pod IP manually, which is impossible at scale.

The CKA exam heavily tests services. You’ll need to create services quickly, expose deployments, debug service connectivity, and understand when to use each service type.

The Restaurant Analogy

Imagine a restaurant (your application). Pods are individual chefs—they might change shifts, get sick, or be replaced. The restaurant’s phone number (Service) stays the same regardless of which chefs are working. Customers (clients) call the same number, and the call gets routed to an available chef. That’s exactly what Services do in Kubernetes.


By the end of this module, you’ll be able to:

  • Understand the four service types and when to use each
  • Create services imperatively and declaratively
  • Expose deployments and pods
  • Debug service connectivity issues
  • Use selectors to target the right pods

  • Services were designed in from day one: Stable virtual service IPs were part of Kubernetes' original design, because the founders knew from the start that ephemeral pods would need stable endpoints.

  • Virtual IPs are magic: ClusterIP addresses don’t exist on any network interface. They’re “virtual” IPs that kube-proxy intercepts and routes using iptables or nftables rules. (Note: IPVS mode was deprecated in K8s 1.35 — nftables is the recommended replacement.)

  • NodePort range is configurable: The default 30000-32767 range can be changed with the --service-node-port-range flag on the API server, though most clusters stick with defaults.
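For reference, that flag lives in the API server's static pod manifest on kubeadm-style clusters. A hypothetical excerpt (the path is the kubeadm default; the range shown is an example, not a recommendation):

```yaml
# Hypothetical excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --service-node-port-range=20000-32767   # widened NodePort range (example)
```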


┌────────────────────────────────────────────────────────────────┐
│ The Problem │
│ │
│ Client wants to reach "web app" │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Pod: web-abc123 IP: 10.244.1.5 ← Created │ │
│ │ Pod: web-def456 IP: 10.244.2.8 ← Running │ │
│ │ Pod: web-ghi789 IP: 10.244.1.12 ← Created │ │
│ │ Pod: web-abc123 IP: 10.244.1.5 ← Deleted! │ │
│ │ Pod: web-xyz999 IP: 10.244.3.2 ← New pod │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ Which IP should the client use? They keep changing! │
│ │
└────────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────────┐
│ The Solution: Services │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Service: web-service │ │
│ │ ClusterIP: 10.96.45.123 │ │
│ │ (Never changes!) │ │
│ │ │ │
│ │ Selector: app=web │ │
│ │ │ │ │
│ │ ├──► Pod: web-def456 (10.244.2.8) │ │
│ │ ├──► Pod: web-ghi789 (10.244.1.12) │ │
│ │ └──► Pod: web-xyz999 (10.244.3.2) │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ Client always uses 10.96.45.123 - Kubernetes handles rest │
│ │
└────────────────────────────────────────────────────────────────┘
| Component  | Description                                 |
| ---------- | ------------------------------------------- |
| ClusterIP  | Stable internal IP address for the service  |
| Selector   | Labels that identify which pods to route to |
| Port       | The port the service listens on             |
| TargetPort | The port on the pods to forward traffic to  |
| Endpoints  | Actual pod IPs backing the service          |
┌────────────────────────────────────────────────────────────────┐
│ Service Request Flow │
│ │
│ 1. Client sends request to Service IP (10.96.45.123:80) │
│ │ │
│ ▼ │
│ 2. kube-proxy (on each node) intercepts │
│ │ │
│ ▼ │
│ 3. kube-proxy uses iptables/nftables rules │
│ │ │
│ ▼ │
│ 4. Request forwarded to one of the pod IPs │
│ (load balanced - random per-connection in iptables mode) │
│ │ │
│ ▼ │
│ 5. Pod receives request on targetPort │
│ │
└────────────────────────────────────────────────────────────────┘

Pause and predict: You have a frontend deployment and a backend deployment. The frontend needs to call the backend, and external users need to reach the frontend. What service type would you choose for each, and why?

| Type         | Scope                 | Use Case                 | Exam Frequency |
| ------------ | --------------------- | ------------------------ | -------------- |
| ClusterIP    | Internal only         | Pod-to-pod communication | ⭐⭐⭐⭐⭐     |
| NodePort     | External via node IP  | Development, testing     | ⭐⭐⭐⭐       |
| LoadBalancer | External via cloud LB | Production in cloud      | ⭐⭐⭐         |
| ExternalName | DNS alias             | External services        | ⭐⭐           |
```yaml
# Internal-only access - most common type
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP        # Default, can be omitted
  selector:
    app: web             # Match pods with label app=web
  ports:
  - port: 80             # Service listens on port 80
    targetPort: 8080     # Forward to pod port 8080
```
┌────────────────────────────────────────────────────────────────┐
│ ClusterIP Service │
│ │
│ Only accessible from within the cluster │
│ │
│ ┌────────────────┐ ┌────────────────┐ │
│ │ Other Pod │───────►│ ClusterIP │ │
│ │ (client) │ │ 10.96.45.123 │ │
│ └────────────────┘ │ │ │
│ │ ┌──────────┐ │ │
│ │ │ Pod │ │ │
│ │ │ app=web │ │ │
│ ┌────────────────┐ │ └──────────┘ │ │
│ │ External │───X───►│ │ │
│ │ (blocked) │ │ ┌──────────┐ │ │
│ └────────────────┘ │ │ Pod │ │ │
│ │ │ app=web │ │ │
│ │ └──────────┘ │ │
│ └────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
```yaml
# Exposes service on each node's IP at a static port
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80             # ClusterIP port (internal)
    targetPort: 8080     # Pod port
    nodePort: 30080      # External port (30000-32767)
```
┌────────────────────────────────────────────────────────────────┐
│ NodePort Service │
│ │
│ External access via <NodeIP>:<NodePort> │
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Cluster │ │
│ │ │ │
│ │ Node 1 (192.168.1.10) Node 2 (192.168.1.11) │ │
│ │ ┌──────────────────┐ ┌──────────────────┐ │ │
│ │ │ :30080 ──────────┼──────┼─► Pod (app=web) │ │ │
│ │ └──────────────────┘ └──────────────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ▲ ▲ │
│ │ │ │
│ External: 192.168.1.10:30080 OR 192.168.1.11:30080 │
│ (Both work!) │
│ │
└────────────────────────────────────────────────────────────────┘
```yaml
# Creates external load balancer (cloud provider)
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```
┌────────────────────────────────────────────────────────────────┐
│ LoadBalancer Service │
│ │
│ Cloud provider creates an external load balancer │
│ │
│ ┌──────────────────┐ │
│ │ Internet │ │
│ └────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────┐ External IP: 34.85.123.45 │
│ │ Cloud LB │ │
│ │ (AWS/GCP/Azure)│ │
│ └────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ NodePort (auto-created) │ │
│ │ │ │ │
│ │ ┌─────────────┼─────────────┐ │ │
│ │ ▼ ▼ ▼ │ │
│ │ ┌──────┐ ┌──────┐ ┌──────┐ │ │
│ │ │ Pod │ │ Pod │ │ Pod │ │ │
│ │ └──────┘ └──────┘ └──────┘ │ │
│ └──────────────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
```yaml
# DNS alias to external service (no proxying)
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.example.com   # Returns CNAME record
  # No selector - points to external DNS name
```
┌────────────────────────────────────────────────────────────────┐
│ ExternalName Service │
│ │
│ DNS alias - no ClusterIP, no proxying │
│ │
│ ┌────────────────┐ │
│ │ Pod │ │
│ │ │──► DNS: external-db.default.svc │
│ │ │ │ │
│ └────────────────┘ │ Returns CNAME │
│ ▼ │
│ database.example.com │
│ │ │
│ ▼ │
│ ┌──────────────────┐ │
│ │ External DB │ │
│ │ (outside K8s) │ │
│ └──────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘

```bash
# Expose a deployment (most common exam task)
k expose deployment nginx --port=80 --target-port=8080 --name=nginx-svc

# Expose with NodePort
k expose deployment nginx --port=80 --type=NodePort --name=nginx-np

# Expose a pod
k expose pod nginx --port=80 --name=nginx-pod-svc

# Generate YAML without creating
k expose deployment nginx --port=80 --dry-run=client -o yaml > svc.yaml

# Create a service directly (note: the selector defaults to app=my-svc)
k create service clusterip my-svc --tcp=80:8080
```
```bash
# Full syntax
k expose deployment <name> \
  --port=<service-port> \
  --target-port=<pod-port> \
  --type=<ClusterIP|NodePort|LoadBalancer> \
  --name=<service-name> \
  --protocol=<TCP|UDP>

# Examples
k expose deployment web --port=80 --target-port=8080
k expose deployment web --port=80 --type=NodePort
k expose deployment web --port=80 --type=LoadBalancer
```
```yaml
# Complete service example
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web
spec:
  type: ClusterIP
  selector:
    app: web             # MUST match pod labels
    tier: frontend
  ports:
  - name: http           # Named port (good practice)
    port: 80             # Service port
    targetPort: 8080     # Pod port (can be name or number)
    protocol: TCP        # TCP (default) or UDP
```

```yaml
# Service with multiple ports
apiVersion: v1
kind: Service
metadata:
  name: multi-port-svc
spec:
  selector:
    app: web
  ports:
  - name: http           # Required when multiple ports
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  - name: metrics
    port: 9090
    targetPort: 9090
```
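As noted above, targetPort can be a name as well as a number. A sketch of the named-port pattern (the names here are illustrative):

```yaml
# Hypothetical Deployment whose container names its port "http"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - name: http            # named containerPort
          containerPort: 8080
---
# The Service targets the port *name*, so the Service spec keeps working
# even if the container port number changes later.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: http            # resolves to the containerPort named "http"
```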

Every service gets a DNS entry:

  • <service-name> - within same namespace
  • <service-name>.<namespace> - cross-namespace
  • <service-name>.<namespace>.svc.cluster.local - fully qualified
```bash
# From a pod in the same namespace
curl web-service

# From a pod in a different namespace
curl web-service.production

# Fully qualified (always works)
curl web-service.production.svc.cluster.local
```
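The short-name forms work because the pod's resolver expands unqualified names through the search domains in its /etc/resolv.conf. A sketch of that expansion, assuming a pod in a hypothetical production namespace:

```shell
# Simulate the resolver's search-domain expansion for an unqualified name.
# A pod in namespace "production" typically gets these search domains:
name=web-service
for domain in production.svc.cluster.local svc.cluster.local cluster.local; do
  echo "candidate: $name.$domain"
done
# The first candidate that resolves wins, which is why a bare name
# finds same-namespace services first.
```

This is also why cross-namespace access needs at least `<service>.<namespace>`: the first search domain bakes in the pod's own namespace.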

Kubernetes injects service info into pods:

```bash
# Environment variables for service "web-service"
WEB_SERVICE_SERVICE_HOST=10.96.45.123
WEB_SERVICE_SERVICE_PORT=80

# Note: Only works for services created BEFORE the pod
```
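The variable names are derived mechanically from the Service name: uppercase it, turn hyphens into underscores, then append `_SERVICE_HOST` / `_SERVICE_PORT`. A sketch of the transformation:

```shell
# Derive the env var prefix Kubernetes would use for a Service name.
svc=web-service
prefix=$(printf '%s' "$svc" | tr 'a-z-' 'A-Z_')   # uppercase, '-' -> '_'
echo "${prefix}_SERVICE_HOST"
echo "${prefix}_SERVICE_PORT"
```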
```bash
# List services
k get services
k get svc        # Short form

# Get service details
k describe svc web-service

# Get service endpoints
k get endpoints web-service

# Get service YAML
k get svc web-service -o yaml

# Find service ClusterIP
k get svc web-service -o jsonpath='{.spec.clusterIP}'
```

```yaml
# Service selector MUST match pod labels exactly

# Service:
spec:
  selector:
    app: web
    tier: frontend

# Pod (will be selected):
metadata:
  labels:
    app: web
    tier: frontend
    version: v2   # Extra labels OK

# Pod (will NOT be selected - missing tier):
metadata:
  labels:
    app: web
    version: v2
```

Endpoints are automatically created when pods match the selector:

```bash
# View endpoints (pod IPs backing the service)
k get endpoints web-service
# NAME          ENDPOINTS                         AGE
# web-service   10.244.1.5:8080,10.244.2.8:8080   5m

# Detailed endpoint info
k describe endpoints web-service
```

Create a service that points to manual endpoints:

```yaml
# Service without selector
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80
    targetPort: 80
---
# Manual endpoints
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service   # Must match service name
subsets:
- addresses:
  - ip: 192.168.1.100      # External IP
  - ip: 192.168.1.101
  ports:
  - port: 80
```

Use case: Pointing to external databases or services outside the cluster.
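On newer clusters the same idea can be expressed with an EndpointSlice, the successor resource covered in Module 3.2. A sketch, assuming the same external-service Service as above:

```yaml
# EndpointSlice equivalent of the manual Endpoints object
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-service-1              # any name; the label does the linking
  labels:
    kubernetes.io/service-name: external-service   # must match the Service name
addressType: IPv4
ports:
- port: 80                              # name must match the Service port name
endpoints:
- addresses:
  - "192.168.1.100"
- addresses:
  - "192.168.1.101"
```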


Stop and think: A developer tells you “my service isn’t working.” Before you touch the keyboard, what three things would you check first, and in what order? Think about the chain from Service to Endpoints to Pods.

Service Not Working?
├── kubectl get svc (check service exists)
│ │
│ └── Check TYPE, CLUSTER-IP, EXTERNAL-IP, PORT
├── kubectl get endpoints <svc> (check endpoints)
│ │
│ ├── No endpoints? → Selector doesn't match pods
│ │ Check pod labels
│ │
│ └── Endpoints exist? → Pods aren't responding
│ Check pod health
├── kubectl describe svc <svc> (check selector)
│ │
│ └── Verify selector matches pod labels
└── Test from inside cluster:
kubectl run test --rm -it --image=busybox -- wget -qO- <svc>
| Symptom                 | Cause                              | Solution                      |
| ----------------------- | ---------------------------------- | ----------------------------- |
| No endpoints            | Selector doesn't match pods        | Fix selector or pod labels    |
| Connection refused      | Pod not listening on targetPort    | Check pod port configuration  |
| Timeout                 | Pod not running or crashlooping    | Debug pod issues first        |
| NodePort not accessible | Firewall blocking port             | Check node firewall rules     |
| Wrong service type      | Using ClusterIP for external access | Change to NodePort/LoadBalancer |
```bash
# Check service and endpoints
k get svc,endpoints

# Verify selector matches pods
k get pods --selector=app=web

# Test connectivity from within cluster
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://web-service

# Test with curl
k run test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://web-service

# Check DNS resolution
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web-service

# Check which ports the pod is listening on
k exec <pod> -- netstat -tlnp
```

6.4 Advanced Debugging: Tracing kube-proxy Rules


While not strictly required for everyday administration, understanding how kube-proxy routes traffic is invaluable for advanced debugging. When a Service is created, kube-proxy configures netfilter rules (using iptables, ipvs, or nftables) on every node to intercept traffic to the virtual ClusterIP.

To practically trace a request from a client, through a Service, to a Pod using iptables-save:

```bash
# 1. Get the Service ClusterIP
kubectl get svc web-service
# Example IP: 10.96.45.123

# 2. SSH into a Kubernetes node
ssh user@node-01

# 3. Search iptables rules for the Service IP
sudo iptables-save | grep 10.96.45.123
# You will see a rule redirecting traffic to a KUBE-SVC-* chain:
# -A KUBE-SERVICES -d 10.96.45.123/32 -p tcp -m tcp --dport 80 -j KUBE-SVC-XXXXXXXXXXXXXXXX

# 4. Inspect the KUBE-SVC chain to find the load-balancing logic
sudo iptables-save | grep KUBE-SVC-XXXXXXXXXXXXXXXX
# Rules distribute traffic to KUBE-SEP-* chains (one per Pod endpoint) using probabilities.

# 5. Inspect a KUBE-SEP (Service Endpoint) chain to find the actual Pod IP
sudo iptables-save | grep KUBE-SEP-YYYYYYYYYYYYYYYY
# The DNAT rule translates the destination to the Pod IP:
# -A KUBE-SEP-YYYYYYYYYYYYYYYY -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:8080
```

This demonstrates exactly how the “magic” of virtual IPs works under the hood. For clusters using nftables (the recommended replacement for IPVS in K8s 1.35+), you would use nft list ruleset | grep 10.96.45.123 to trace similar Network Address Translation (NAT) structures.
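The probabilities in the KUBE-SVC chain look odd (roughly 0.333, then 0.5, then always-match) until you account for fall-through: rule i is only evaluated when the earlier rules missed. A quick sanity check of why each of three endpoints still gets an equal 1/3 share (the numbers are computed here, not read from a real cluster):

```shell
# For n endpoints, kube-proxy programs rule i with match probability 1/(n-i+1).
# Overall share of endpoint i = P(earlier rules missed) * P(rule i matches)
#                             = ((n-i+1)/n) * (1/(n-i+1)) = 1/n for every i.
n=3
for i in 1 2 3; do
  rule=$(awk -v i=$i -v n=$n 'BEGIN { printf "%.5f", 1/(n-i+1) }')
  share=$(awk -v i=$i -v n=$n 'BEGIN { printf "%.5f", ((n-i+1)/n)*(1/(n-i+1)) }')
  echo "rule $i: match probability $rule, overall share $share"
done
```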

War Story: The Selector Mismatch

A developer spent hours debugging why their service had no endpoints. The deployment used app: web-app but the service selector was app: webapp (no hyphen). One character difference = zero connectivity. Always copy-paste selectors!


```yaml
# Sticky sessions - route same client to same pod
apiVersion: v1
kind: Service
metadata:
  name: sticky-service
spec:
  selector:
    app: web
  sessionAffinity: ClientIP      # None (default) or ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # 3 hours (default)
  ports:
  - port: 80
```
| Scenario                          | Use Affinity?                    |
| --------------------------------- | -------------------------------- |
| Stateless API                     | No (default)                     |
| Shopping cart in pod memory       | Yes (but better: use Redis)      |
| WebSocket connections             | Yes                              |
| Authentication sessions in memory | Yes (but better: external store) |

What would happen if: You create a Service with sessionAffinity: ClientIP and then scale your deployment from 3 replicas to 1 replica. What happens to clients that were pinned to the deleted pods?

Kubernetes 1.35 graduated PreferSameNode traffic distribution to GA, giving you fine-grained control over where service traffic is routed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: latency-sensitive
spec:
  selector:
    app: cache
  ports:
  - port: 6379
  trafficDistribution: PreferSameNode   # Route to local node first
```
| Value          | Behavior                                                                            |
| -------------- | ----------------------------------------------------------------------------------- |
| PreferSameNode | Strictly prefer endpoints on the same node, fall back to remote (GA in 1.35)        |
| PreferClose    | Prefer endpoints topologically close (same zone when using topology-aware routing)  |

This is particularly useful for latency-sensitive workloads like caches, sidecars, and node-local services.


| Mistake                      | Problem                  | Solution                                   |
| ---------------------------- | ------------------------ | ------------------------------------------ |
| Selector mismatch            | Service has no endpoints | Ensure selector matches pod labels exactly |
| Port vs TargetPort confusion | Connection refused       | Port = service, TargetPort = pod           |
| Missing service type         | Can't access externally  | Specify NodePort or LoadBalancer           |
| Using ClusterIP externally   | Connection timeout       | ClusterIP is internal only                 |
| Forgetting namespace         | Service not found        | Use FQDN for cross-namespace               |

  1. A developer has a Service with port: 80 and targetPort: 8080, but their app container listens on port 80. Users report “connection refused” when hitting the Service. What went wrong and how would you fix it?

    Answer The `targetPort` (8080) does not match the port the container is actually listening on (80). When kube-proxy forwards traffic to the pod, it sends it to port 8080, but nothing is listening there. The fix is to either change `targetPort` to 80 in the Service spec, or change the container to listen on 8080. The key distinction: `port` is what clients use to reach the Service, `targetPort` is where the pod actually receives the traffic.
  2. You deploy a new microservice and create a Service for it, but kubectl get endpoints shows <none>. The pods are running and show 1/1 READY. Walk through your debugging process.

    Answer Since pods are running and ready, the most likely cause is a selector mismatch. First, check the Service selector with `k get svc -o yaml | grep -A5 selector`. Then compare with pod labels using `k get pods --show-labels`. Even a single character difference (e.g., `app: web-app` vs `app: webapp`) will cause zero endpoints. Also check that the Service and pods are in the same namespace -- Services only select pods within their own namespace.
  3. A developer created a ClusterIP Service for their frontend app but external users can’t reach it. They ask you to fix it. What’s wrong, what are the options, and what trade-offs should you consider?

    Answer ClusterIP is internal-only and cannot be reached from outside the cluster. The options are: (1) Change to NodePort -- free, but uses high ports (30000-32767) and exposes on every node; (2) Change to LoadBalancer -- clean external IP, but costs money per LB in cloud environments; (3) Put an Ingress or Gateway in front -- single entry point for many services with path/host routing, but requires an Ingress controller. For production, Ingress/Gateway is usually the right choice because it consolidates external access through one load balancer.
  4. During a CKA exam, you need to expose a deployment called payment-api as a NodePort service on port 80, targeting container port 3000, with a specific NodePort of 30100. Write the command and explain what happens if you omit the --target-port flag.

    Answer The imperative approach requires YAML since `kubectl expose` cannot set a specific nodePort. Use: `k expose deployment payment-api --port=80 --target-port=3000 --type=NodePort --dry-run=client -o yaml > svc.yaml`, then edit the YAML to add `nodePort: 30100` and apply it. If you omit `--target-port`, it defaults to the same value as `--port` (80), so traffic would be forwarded to port 80 on the pod instead of 3000, resulting in connection refused if the app listens on 3000.
  5. Your team runs services in namespaces frontend, backend, and database. A pod in frontend needs to call service api in backend. It works with curl api.backend but fails with just curl api. Explain why and when you’d use the full FQDN instead.

    Answer The short name `api` only works within the same namespace because the search domain in `/etc/resolv.conf` appends the pod's own namespace first (`api.frontend.svc.cluster.local`), which does not exist. Using `api.backend` works because the search domain appends `.svc.cluster.local` to make `api.backend.svc.cluster.local`. You would use the full FQDN (`api.backend.svc.cluster.local`) in application configuration files for clarity and to avoid ambiguity, especially in production where misconfigured search domains could silently route to the wrong service.

Task: Create and debug services for a multi-tier application.

Steps:

  1. Create a backend deployment:

```bash
k create deployment backend --image=nginx --replicas=2
k set env deployment/backend APP=backend
```

  2. Label the deployment (note: `kubectl label deployment` labels the Deployment object itself, not its pods; pod labels come from the pod template):

```bash
k label deployment backend tier=backend
```

  3. Expose backend as ClusterIP:

```bash
k expose deployment backend --port=80 --name=backend-svc
```

  4. Verify the service:

```bash
k get svc backend-svc
k get endpoints backend-svc
```

  5. Create a frontend deployment:

```bash
k create deployment frontend --image=nginx --replicas=2
```

  6. Expose frontend as NodePort:

```bash
k expose deployment frontend --port=80 --type=NodePort --name=frontend-svc
```

  7. Test internal connectivity:

```bash
# From a test pod, reach the backend service
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://backend-svc
```

  8. Test cross-namespace:

```bash
# Create another namespace and test
k create namespace other
k run test -n other --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://backend-svc.default
```

  9. Debug a broken service:

```bash
# Create a service with wrong selector (defaults to app=broken-svc)
k create service clusterip broken-svc --tcp=80:80
# Check endpoints (should be empty)
k get endpoints broken-svc
# Fix by creating a proper service
k delete svc broken-svc
k expose deployment backend --port=80 --name=broken-svc --selector=app=backend
k get endpoints broken-svc
```

  10. Cleanup:

```bash
k delete deployment frontend backend
k delete svc backend-svc frontend-svc broken-svc
k delete namespace other
```

Success Criteria:

  • Can create ClusterIP and NodePort services
  • Understand port vs targetPort
  • Can debug services with no endpoints
  • Can access services across namespaces
  • Understand when to use each service type

Drill 1: Service Creation Speed (Target: 2 minutes)


Create services for a deployment as fast as possible:

```bash
# Setup
k create deployment drill-app --image=nginx --replicas=2

# Create ClusterIP service
k expose deployment drill-app --port=80 --name=drill-clusterip

# Create NodePort service
k expose deployment drill-app --port=80 --type=NodePort --name=drill-nodeport

# Verify both
k get svc drill-clusterip drill-nodeport

# Generate YAML
k expose deployment drill-app --port=80 --dry-run=client -o yaml > svc.yaml

# Cleanup
k delete deployment drill-app
k delete svc drill-clusterip drill-nodeport
rm svc.yaml
```

Drill 2: Multi-Port Service (Target: 3 minutes)

```bash
# Create deployment
k create deployment multi-port --image=nginx

# Create multi-port service from YAML
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: multi-port-svc
spec:
  selector:
    app: multi-port
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF

# Verify
k describe svc multi-port-svc

# Cleanup
k delete deployment multi-port
k delete svc multi-port-svc
```

Drill 3: Service Discovery (Target: 3 minutes)

```bash
# Create service
k create deployment web --image=nginx
k expose deployment web --port=80

# Test DNS resolution
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web

# Test full FQDN
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web.default.svc.cluster.local

# Test connectivity
k run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://web

# Cleanup
k delete deployment web
k delete svc web
```

Drill 4: Endpoint Debugging (Target: 4 minutes)

```bash
# Create deployment with specific labels
k create deployment endpoint-test --image=nginx
k label deployment endpoint-test tier=web --overwrite

# Create service with WRONG selector (intentionally broken)
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: broken-endpoints
spec:
  selector:
    app: wrong-label   # This won't match!
  ports:
  - port: 80
EOF

# Observe: no endpoints
k get endpoints broken-endpoints
# ENDPOINTS: <none>

# Debug: check what the selector should be
k get pods --show-labels

# Fix: delete and recreate with correct selector
k delete svc broken-endpoints
k expose deployment endpoint-test --port=80 --name=fixed-endpoints

# Verify: endpoints exist now
k get endpoints fixed-endpoints

# Cleanup
k delete deployment endpoint-test
k delete svc fixed-endpoints
```

Drill 5: Cross-Namespace Access (Target: 3 minutes)

```bash
# Create service in default namespace
k create deployment app --image=nginx
k expose deployment app --port=80

# Create other namespace
k create namespace testing

# Access from the other namespace - short form
k run test -n testing --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://app.default

# Access with FQDN
k run test -n testing --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://app.default.svc.cluster.local

# Cleanup
k delete deployment app
k delete svc app
k delete namespace testing
```

Drill 6: NodePort Specific Port (Target: 3 minutes)

```bash
# Create deployment
k create deployment nodeport-test --image=nginx

# Create NodePort with a specific port
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: specific-nodeport
spec:
  type: NodePort
  selector:
    app: nodeport-test
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # Specific port
EOF

# Verify port
k get svc specific-nodeport
# Should show 80:30080/TCP

# Cleanup
k delete deployment nodeport-test
k delete svc specific-nodeport
```

Drill 7: ExternalName Service (Target: 2 minutes)

```bash
# Create ExternalName service
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: external-api
spec:
  type: ExternalName
  externalName: api.example.com
EOF

# Check the service (no ClusterIP!)
k get svc external-api
# Note: CLUSTER-IP shows as <none>

# Test DNS resolution
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup external-api
# Shows CNAME to api.example.com

# Cleanup
k delete svc external-api
```

Drill 8: Challenge - Complete Service Workflow


Without looking at solutions:

  1. Create deployment challenge-app with nginx, 3 replicas
  2. Expose as ClusterIP service on port 80
  3. Verify endpoints show 3 pod IPs
  4. Scale deployment to 5 replicas
  5. Verify endpoints now show 5 pod IPs
  6. Change service to NodePort type
  7. Get the NodePort number
  8. Cleanup everything
```bash
# YOUR TASK: Complete in under 5 minutes
```
Solution
```bash
# 1. Create deployment
k create deployment challenge-app --image=nginx --replicas=3

# 2. Expose as ClusterIP
k expose deployment challenge-app --port=80

# 3. Verify 3 endpoints
k get endpoints challenge-app

# 4. Scale to 5
k scale deployment challenge-app --replicas=5

# 5. Verify 5 endpoints
k get endpoints challenge-app

# 6. Change to NodePort (delete and recreate)
k delete svc challenge-app
k expose deployment challenge-app --port=80 --type=NodePort

# 7. Get NodePort
k get svc challenge-app -o jsonpath='{.spec.ports[0].nodePort}'

# 8. Cleanup
k delete deployment challenge-app
k delete svc challenge-app
```

Module 3.2: Endpoints & EndpointSlices - Deep-dive into how services track pods.