Module 3.1: Services Deep-Dive
Complexity: [MEDIUM] - Core networking concept
Time to Complete: 45-55 minutes
Prerequisites: Module 2.1 (Pods), Module 2.2 (Deployments)
What You’ll Be Able to Do
After this module, you will be able to:
- Create ClusterIP, NodePort, and LoadBalancer services and explain the traffic flow for each
- Debug service connectivity by checking endpoints, selectors, and kube-proxy rules
- Trace a request from client through Service to Pod using iptables/IPVS rules
- Explain how kube-proxy implements service load balancing in iptables and IPVS modes
Why This Module Matters
Pods are ephemeral—they come and go, and their IP addresses change. Services provide stable networking for your applications. Without services, you’d have to track every pod IP manually, which is impossible at scale.
The CKA exam heavily tests services. You’ll need to create services quickly, expose deployments, debug service connectivity, and understand when to use each service type.
The Restaurant Analogy
Imagine a restaurant (your application). Pods are individual chefs—they might change shifts, get sick, or be replaced. The restaurant’s phone number (Service) stays the same regardless of which chefs are working. Customers (clients) call the same number, and the call gets routed to an available chef. That’s exactly what Services do in Kubernetes.
What You’ll Learn
By the end of this module, you’ll be able to:
- Understand the four service types and when to use each
- Create services imperatively and declaratively
- Expose deployments and pods
- Debug service connectivity issues
- Use selectors to target the right pods
Did You Know?

- Services predate Pods: The concept of stable service IPs was designed before pods existed in Kubernetes. The founders knew ephemeral pods would need stable endpoints.
- Virtual IPs are magic: ClusterIP addresses don’t exist on any network interface. They’re “virtual” IPs that kube-proxy intercepts and routes using iptables or nftables rules. (Note: IPVS mode was deprecated in K8s 1.35; nftables is the recommended replacement.)
- NodePort range is configurable: The default 30000-32767 range can be changed with the `--service-node-port-range` flag on the API server, though most clusters stick with the defaults.
Part 1: Service Fundamentals
1.1 Why Services?
```
The Problem

  Client wants to reach "web app":

    Pod: web-abc123   IP: 10.244.1.5    ← Created
    Pod: web-def456   IP: 10.244.2.8    ← Running
    Pod: web-ghi789   IP: 10.244.1.12   ← Created
    Pod: web-abc123   IP: 10.244.1.5    ← Deleted!
    Pod: web-xyz999   IP: 10.244.3.2    ← New pod

  Which IP should the client use? They keep changing!
```
```
The Solution: Services

  Service: web-service
  ClusterIP: 10.96.45.123   (never changes!)
  Selector: app=web
      ├──► Pod: web-def456 (10.244.2.8)
      ├──► Pod: web-ghi789 (10.244.1.12)
      └──► Pod: web-xyz999 (10.244.3.2)

  Client always uses 10.96.45.123 - Kubernetes handles the rest
```

1.2 Service Components
| Component | Description |
|---|---|
| ClusterIP | Stable internal IP address for the service |
| Selector | Labels that identify which pods to route to |
| Port | The port the service listens on |
| TargetPort | The port on the pods to forward traffic to |
| Endpoints | Actual pod IPs backing the service |
1.3 How Services Work
```
Service Request Flow

1. Client sends a request to the Service IP (10.96.45.123:80)
       │
       ▼
2. kube-proxy (on each node) intercepts the traffic
       │
       ▼
3. kube-proxy applies its iptables/nftables rules
       │
       ▼
4. The request is forwarded to one of the pod IPs
   (load balanced across the endpoints)
       │
       ▼
5. The pod receives the request on targetPort
```

Pause and predict: You have a frontend deployment and a backend deployment. The frontend needs to call the backend, and external users need to reach the frontend. What service type would you choose for each, and why?
Part 2: Service Types
2.1 The Four Service Types
| Type | Scope | Use Case | Exam Frequency |
|---|---|---|---|
| ClusterIP | Internal only | Pod-to-pod communication | ⭐⭐⭐⭐⭐ |
| NodePort | External via node IP | Development, testing | ⭐⭐⭐⭐ |
| LoadBalancer | External via cloud LB | Production in cloud | ⭐⭐⭐ |
| ExternalName | DNS alias | External services | ⭐⭐ |
2.2 ClusterIP (Default)
```yaml
# Internal-only access - most common type
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP      # Default, can be omitted
  selector:
    app: web           # Match pods with label app=web
  ports:
  - port: 80           # Service listens on port 80
    targetPort: 8080   # Forward to pod port 8080
```

```
ClusterIP Service - only accessible from within the cluster

  Other Pod (client) ────────► ClusterIP 10.96.45.123 ──┬──► Pod (app=web)
                                                        └──► Pod (app=web)
  External client ────X────► (blocked)
```

2.3 NodePort
```yaml
# Exposes the service on each node's IP at a static port
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # ClusterIP port (internal)
    targetPort: 8080  # Pod port
    nodePort: 30080   # External port (30000-32767)
```

```
NodePort Service - external access via <NodeIP>:<NodePort>

  Node 1 (192.168.1.10):30080 ──┐
                                ├──► Pod (app=web)
  Node 2 (192.168.1.11):30080 ──┘

  External: 192.168.1.10:30080 OR 192.168.1.11:30080 (both work!)
```

2.4 LoadBalancer
```yaml
# Creates an external load balancer (cloud provider)
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

```
LoadBalancer Service - the cloud provider creates an external load balancer

  Internet
     │
     ▼
  Cloud LB (AWS/GCP/Azure)    External IP: 34.85.123.45
     │
     ▼
  NodePort (auto-created)
     ├──► Pod
     ├──► Pod
     └──► Pod
```

2.5 ExternalName
```yaml
# DNS alias to an external service (no proxying)
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.example.com  # Returns a CNAME record
  # No selector - points to an external DNS name
```

```
ExternalName Service - DNS alias: no ClusterIP, no proxying

  Pod ──► DNS lookup: external-db.default.svc
              │  returns CNAME
              ▼
          database.example.com
              │
              ▼
          External DB (outside K8s)
```

Part 3: Creating Services
3.1 Imperative Commands (Fast for Exam)
```shell
# Expose a deployment (most common exam task)
k expose deployment nginx --port=80 --target-port=8080 --name=nginx-svc

# Expose with NodePort
k expose deployment nginx --port=80 --type=NodePort --name=nginx-np

# Expose a pod
k expose pod nginx --port=80 --name=nginx-pod-svc

# Generate YAML without creating
k expose deployment nginx --port=80 --dry-run=client -o yaml > svc.yaml

# Create a service for existing pods by selector
k create service clusterip my-svc --tcp=80:8080
```

3.2 Expose Command Options
```shell
# Full syntax
k expose deployment <name> \
  --port=<service-port> \
  --target-port=<pod-port> \
  --type=<ClusterIP|NodePort|LoadBalancer> \
  --name=<service-name> \
  --protocol=<TCP|UDP>

# Examples
k expose deployment web --port=80 --target-port=8080
k expose deployment web --port=80 --type=NodePort
k expose deployment web --port=80 --type=LoadBalancer
```

3.3 Declarative YAML
```yaml
# Complete service example
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web
spec:
  type: ClusterIP
  selector:
    app: web          # MUST match pod labels
    tier: frontend
  ports:
  - name: http        # Named port (good practice)
    port: 80          # Service port
    targetPort: 8080  # Pod port (can be a name or a number)
    protocol: TCP     # TCP (default) or UDP
```

3.4 Multi-Port Services
```yaml
# Service with multiple ports
apiVersion: v1
kind: Service
metadata:
  name: multi-port-svc
spec:
  selector:
    app: web
  ports:
  - name: http      # Names are required when there are multiple ports
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  - name: metrics
    port: 9090
    targetPort: 9090
```

Part 4: Service Discovery
4.1 DNS-Based Discovery
Section titled “4.1 DNS-Based Discovery”Every service gets a DNS entry:
- `<service-name>` - within the same namespace
- `<service-name>.<namespace>` - cross-namespace
- `<service-name>.<namespace>.svc.cluster.local` - fully qualified
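Short names work because the pod's `/etc/resolv.conf` search domains expand them into candidate FQDNs in a fixed order. A small Python sketch of that expansion (an illustration only, assuming the default `cluster.local` cluster domain; `candidate_fqdns` is a hypothetical helper, not a Kubernetes API):

```python
# Illustration: how a pod's resolv.conf search domains expand a short
# service name into candidate FQDNs. The real work is done by the DNS
# resolver inside the pod, not by user code.

def candidate_fqdns(name: str, pod_namespace: str, cluster_domain: str = "cluster.local"):
    """Return lookup candidates in the order the pod's resolver tries them."""
    if name.endswith("."):                        # already fully qualified
        return [name.rstrip(".")]
    search = [
        f"{pod_namespace}.svc.{cluster_domain}",  # own namespace is tried first
        f"svc.{cluster_domain}",
        cluster_domain,
    ]
    return [f"{name}.{s}" for s in search] + [name]

# A short name resolves within the pod's own namespace first
print(candidate_fqdns("web-service", "default")[0])
# → web-service.default.svc.cluster.local

# "<service>.<namespace>" matches via the svc.cluster.local search domain
print(candidate_fqdns("web-service.production", "default")[1])
# → web-service.production.svc.cluster.local
```

This is why a bare `curl web-service` only works from the same namespace: the first candidate that actually exists wins.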
```shell
# From a pod in the same namespace
curl web-service

# From a pod in a different namespace
curl web-service.production

# Fully qualified (always works)
curl web-service.production.svc.cluster.local
```

4.2 Environment Variables
Kubernetes injects service info into pods:
```shell
# Environment variables for the service "web-service"
WEB_SERVICE_SERVICE_HOST=10.96.45.123
WEB_SERVICE_SERVICE_PORT=80
```
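The variable names follow a fixed convention: the service name is uppercased, dashes become underscores, and a `_SERVICE_HOST`/`_SERVICE_PORT` suffix is appended. A quick shell sketch of the naming rule (illustrative only; `svc_name` and `env_prefix` are not anything Kubernetes defines):

```shell
# Derive the env var prefix the way Kubernetes does: uppercase the
# service name and replace dashes with underscores.
svc_name="web-service"
env_prefix=$(echo "$svc_name" | tr 'a-z' 'A-Z' | tr '-' '_')
echo "${env_prefix}_SERVICE_HOST"
# → WEB_SERVICE_SERVICE_HOST
```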
Note: these variables are only set for services created BEFORE the pod started.

4.3 Finding Services
```shell
# List services
k get services
k get svc   # Short form

# Get service details
k describe svc web-service

# Get service endpoints
k get endpoints web-service

# Get service YAML
k get svc web-service -o yaml

# Find the service ClusterIP
k get svc web-service -o jsonpath='{.spec.clusterIP}'
```

Part 5: Selectors and Endpoints
5.1 How Selectors Work
```yaml
# The service selector MUST match the pod labels exactly
# Service:
spec:
  selector:
    app: web
    tier: frontend

# Pod (will be selected):
metadata:
  labels:
    app: web
    tier: frontend
    version: v2   # Extra labels are OK

# Pod (will NOT be selected - missing tier):
metadata:
  labels:
    app: web
    version: v2
```

5.2 Endpoints
Endpoints are automatically created when pods match the selector:
```shell
# View endpoints (the pod IPs backing the service)
k get endpoints web-service
# NAME          ENDPOINTS                         AGE
# web-service   10.244.1.5:8080,10.244.2.8:8080   5m

# Detailed endpoint info
k describe endpoints web-service
```

5.3 Service Without Selector
Create a service that points to manual endpoints:
```yaml
# Service without a selector
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80
    targetPort: 80
---
# Manual endpoints
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service  # Must match the service name
subsets:
- addresses:
  - ip: 192.168.1.100     # External IP
  - ip: 192.168.1.101
  ports:
  - port: 80
```

Use case: Pointing to external databases or services outside the cluster.
Stop and think: A developer tells you “my service isn’t working.” Before you touch the keyboard, what three things would you check first, and in what order? Think about the chain from Service to Endpoints to Pods.
Part 6: Debugging Services
6.1 Service Debugging Workflow
```
Service Not Working?
│
├── kubectl get svc                (check the service exists)
│   └── Check TYPE, CLUSTER-IP, EXTERNAL-IP, PORT
│
├── kubectl get endpoints <svc>    (check endpoints)
│   ├── No endpoints?    → Selector doesn't match pods; check pod labels
│   └── Endpoints exist? → Pods aren't responding; check pod health
│
├── kubectl describe svc <svc>     (check the selector)
│   └── Verify the selector matches the pod labels
│
└── Test from inside the cluster:
    kubectl run test --rm -it --image=busybox -- wget -qO- <svc>
```

6.2 Common Service Issues
| Symptom | Cause | Solution |
|---|---|---|
| No endpoints | Selector doesn’t match pods | Fix selector or pod labels |
| Connection refused | Pod not listening on targetPort | Check pod port configuration |
| Timeout | Pod not running or crashlooping | Debug pod issues first |
| NodePort not accessible | Firewall blocking port | Check node firewall rules |
| Wrong service type | Using ClusterIP for external access | Change to NodePort/LoadBalancer |
6.3 Debugging Commands
```shell
# Check services and endpoints
k get svc,endpoints

# Verify the selector matches pods
k get pods --selector=app=web

# Test connectivity from within the cluster
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://web-service

# Test with curl
k run test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://web-service

# Check DNS resolution
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web-service

# Check the port on a pod directly
k exec <pod> -- netstat -tlnp
```

6.4 Advanced Debugging: Tracing kube-proxy Rules
While not strictly required for everyday administration, understanding how kube-proxy routes traffic is invaluable for advanced debugging. When a Service is created, kube-proxy configures netfilter rules (using iptables, ipvs, or nftables) on every node to intercept traffic to the virtual ClusterIP.
To trace a request from a client through a Service to a Pod using `iptables-save`:
```shell
# 1. Get the Service ClusterIP
kubectl get svc web-service
# Example IP: 10.96.45.123

# 2. SSH into a Kubernetes node
ssh user@node-01

# 3. Search the iptables rules for the Service IP
sudo iptables-save | grep 10.96.45.123
# A rule redirects traffic to a KUBE-SVC-* chain:
# -A KUBE-SERVICES -d 10.96.45.123/32 -p tcp -m tcp --dport 80 -j KUBE-SVC-XXXXXXXXXXXXXXXX

# 4. Inspect the KUBE-SVC chain to find the load-balancing logic
sudo iptables-save | grep KUBE-SVC-XXXXXXXXXXXXXXXX
# Rules distribute traffic to KUBE-SEP-* chains (one per pod endpoint) using probabilities

# 5. Inspect a KUBE-SEP (Service Endpoint) chain to find the actual Pod IP
sudo iptables-save | grep KUBE-SEP-YYYYYYYYYYYYYYYY
# The DNAT rule translates the destination to the Pod IP:
# -A KUBE-SEP-YYYYYYYYYYYYYYYY -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:8080
```

This demonstrates exactly how the “magic” of virtual IPs works under the hood. For clusters using nftables (the recommended replacement for IPVS in K8s 1.35+), use `nft list ruleset | grep 10.96.45.123` to trace similar Network Address Translation (NAT) structures.
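The "using probabilities" step can be made concrete. For N endpoints, kube-proxy's iptables mode emits `statistic`-module rules where rule i matches with probability 1/(N-i), which works out to a uniform 1/N split per endpoint. A small Python simulation of that rule chain (an illustration of the math, not kube-proxy code; `pick_endpoint` is a hypothetical helper):

```python
import random

def pick_endpoint(endpoints):
    """Mimic kube-proxy's iptables 'statistic' rules: for N endpoints,
    rule i fires with probability 1/(N-i); the last rule always matches."""
    n = len(endpoints)
    for i in range(n - 1):
        if random.random() < 1.0 / (n - i):   # 3 endpoints: 1/3, then 1/2
            return endpoints[i]
    return endpoints[-1]                      # unconditional final rule

random.seed(42)
pods = ["10.244.1.5:8080", "10.244.2.8:8080", "10.244.3.2:8080"]
counts = {p: 0 for p in pods}
for _ in range(30_000):
    counts[pick_endpoint(pods)] += 1
print(counts)  # each endpoint receives roughly a third of the traffic
```

Working through it: P(first) = 1/3, P(second) = (2/3)(1/2) = 1/3, P(third) = (2/3)(1/2) = 1/3, matching the probability values you see in a KUBE-SVC chain.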
War Story: The Selector Mismatch
A developer spent hours debugging why their service had no endpoints. The deployment used `app: web-app`, but the service selector was `app: webapp` (no hyphen). One character of difference = zero connectivity. Always copy-paste selectors!
Part 7: Service Session Affinity
7.1 Session Affinity Options
```yaml
# Sticky sessions - route the same client to the same pod
apiVersion: v1
kind: Service
metadata:
  name: sticky-service
spec:
  selector:
    app: web
  sessionAffinity: ClientIP   # None (default) or ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # 3 hours (default)
  ports:
  - port: 80
```

7.2 When to Use Session Affinity
| Scenario | Use Affinity? |
|---|---|
| Stateless API | No (default) |
| Shopping cart in pod memory | Yes (but better: use Redis) |
| WebSocket connections | Yes |
| Authentication sessions in memory | Yes (but better: external store) |
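The `ClientIP` behavior above can be sketched as a pin table with a timeout. This is an illustration of the semantics only; kube-proxy actually implements affinity inside its netfilter rules, and `route`/`affinity_table` are hypothetical names:

```python
# Illustration of sessionAffinity: ClientIP semantics: pin each client IP
# to one endpoint until timeoutSeconds of inactivity elapses.
affinity_table = {}  # client_ip -> (endpoint, last_seen_timestamp)

def route(client_ip, endpoints, now, timeout=10800):
    """Return the pinned endpoint while the pin is fresh, else (re)pin."""
    entry = affinity_table.get(client_ip)
    if entry and now - entry[1] < timeout and entry[0] in endpoints:
        endpoint = entry[0]      # sticky: reuse the previous endpoint
    else:
        endpoint = endpoints[0]  # (re)pin; real kube-proxy picks randomly
    affinity_table[client_ip] = (endpoint, now)
    return endpoint

pods = ["10.244.1.5", "10.244.2.8"]
assert route("1.2.3.4", pods, now=0) == "10.244.1.5"           # pinned
assert route("1.2.3.4", pods[::-1], now=100) == "10.244.1.5"   # still pinned
assert route("1.2.3.4", ["10.244.2.8"], now=20000) == "10.244.2.8"  # pin expired
```

Note the `entry[0] in endpoints` check: when a pinned pod disappears, the client is simply re-routed to a surviving endpoint.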
What would happen if: You create a Service with `sessionAffinity: ClientIP` and then scale your deployment from 3 replicas to 1 replica. What happens to clients that were pinned to the deleted pods?
Traffic Distribution (K8s 1.35+)
Kubernetes 1.35 graduated PreferSameNode traffic distribution to GA, giving you fine-grained control over where service traffic is routed:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: latency-sensitive
spec:
  selector:
    app: cache
  ports:
  - port: 6379
  trafficDistribution: PreferSameNode  # Route to a local endpoint first
```

| Value | Behavior |
|---|---|
| `PreferSameNode` | Strictly prefer endpoints on the same node, falling back to remote endpoints (GA in 1.35) |
| `PreferClose` | Prefer topologically close endpoints (same zone when using topology-aware routing) |
This is particularly useful for latency-sensitive workloads like caches, sidecars, and node-local services.
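The effect of `PreferSameNode` can be sketched as endpoint filtering (an illustration of the documented semantics; in reality kube-proxy consumes hints on EndpointSlices, and `eligible_endpoints` is a hypothetical helper):

```python
# Illustration: which endpoints a client's node may use under
# trafficDistribution. Endpoints are (pod_ip, node_name) pairs.

def eligible_endpoints(endpoints, client_node, policy=None):
    if policy == "PreferSameNode":
        local = [e for e in endpoints if e[1] == client_node]
        return local or endpoints    # fall back to all if none are local
    return endpoints                 # default: no preference

eps = [("10.244.1.5", "node-1"), ("10.244.2.8", "node-2")]
# A client on node-1 only uses the node-local endpoint
assert eligible_endpoints(eps, "node-1", "PreferSameNode") == [("10.244.1.5", "node-1")]
# A client on node-3 has no local endpoint, so it falls back to all of them
assert eligible_endpoints(eps, "node-3", "PreferSameNode") == eps
```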
Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Selector mismatch | Service has no endpoints | Ensure selector matches pod labels exactly |
| Port vs TargetPort confusion | Connection refused | Port = service, TargetPort = pod |
| Missing service type | Can’t access externally | Specify NodePort or LoadBalancer |
| Using ClusterIP externally | Connection timeout | ClusterIP is internal only |
| Forgetting namespace | Service not found | Use FQDN for cross-namespace |
- A developer has a Service with `port: 80` and `targetPort: 8080`, but their app container listens on port 80. Users report “connection refused” when hitting the Service. What went wrong, and how would you fix it?

  Answer

  The `targetPort` (8080) does not match the port the container is actually listening on (80). When kube-proxy forwards traffic to the pod, it sends it to port 8080, but nothing is listening there. The fix is to either change `targetPort` to 80 in the Service spec, or change the container to listen on 8080. The key distinction: `port` is what clients use to reach the Service; `targetPort` is where the pod actually receives the traffic.

- You deploy a new microservice and create a Service for it, but `kubectl get endpoints` shows `<none>`. The pods are running and show `1/1 READY`. Walk through your debugging process.

  Answer

  Since the pods are running and ready, the most likely cause is a selector mismatch. First, check the Service selector with `k get svc <svc> -o yaml | grep -A5 selector`. Then compare it with the pod labels using `k get pods --show-labels`. Even a single-character difference (e.g., `app: web-app` vs `app: webapp`) will produce zero endpoints. Also check that the Service and the pods are in the same namespace: Services only select pods within their own namespace.

- A developer created a ClusterIP Service for their frontend app, but external users can’t reach it. They ask you to fix it. What’s wrong, what are the options, and what trade-offs should you consider?

  Answer

  ClusterIP is internal-only and cannot be reached from outside the cluster. The options are: (1) change to NodePort: free, but it uses high ports (30000-32767) and exposes the service on every node; (2) change to LoadBalancer: a clean external IP, but it costs money per LB in cloud environments; (3) put an Ingress or Gateway in front: a single entry point for many services with path/host routing, but it requires an Ingress controller. For production, Ingress/Gateway is usually the right choice because it consolidates external access through one load balancer.

- During a CKA exam, you need to expose a deployment called `payment-api` as a NodePort service on port 80, targeting container port 3000, with a specific NodePort of 30100. Write the command and explain what happens if you omit the `--target-port` flag.

  Answer

  This requires YAML, since `kubectl expose` cannot set a specific nodePort. Use `k expose deployment payment-api --port=80 --target-port=3000 --type=NodePort --dry-run=client -o yaml > svc.yaml`, then edit the YAML to add `nodePort: 30100` and apply it. If you omit `--target-port`, it defaults to the same value as `--port` (80), so traffic would be forwarded to port 80 on the pod instead of 3000, resulting in “connection refused” if the app listens on 3000.

- Your team runs services in namespaces `frontend`, `backend`, and `database`. A pod in `frontend` needs to call service `api` in `backend`. It works with `curl api.backend` but fails with just `curl api`. Explain why, and when you’d use the full FQDN instead.

  Answer

  The short name `api` only works within the same namespace, because the search domains in `/etc/resolv.conf` expand it against the pod’s own namespace first (`api.frontend.svc.cluster.local`), which does not exist. `api.backend` works because the search domains expand it to `api.backend.svc.cluster.local`. You would use the full FQDN (`api.backend.svc.cluster.local`) in application configuration files for clarity and to avoid ambiguity, especially in production, where misconfigured search domains could silently route to the wrong service.
Hands-On Exercise
Task: Create and debug services for a multi-tier application.
Steps:
- Create a backend deployment:

  ```shell
  k create deployment backend --image=nginx --replicas=2
  k set env deployment/backend APP=backend
  ```

- Label the pods properly:

  ```shell
  k label deployment backend tier=backend
  ```

- Expose backend as a ClusterIP service:

  ```shell
  k expose deployment backend --port=80 --name=backend-svc
  ```

- Verify the service:

  ```shell
  k get svc backend-svc
  k get endpoints backend-svc
  ```

- Create a frontend deployment:

  ```shell
  k create deployment frontend --image=nginx --replicas=2
  ```

- Expose frontend as a NodePort service:

  ```shell
  k expose deployment frontend --port=80 --type=NodePort --name=frontend-svc
  ```

- Test internal connectivity:

  ```shell
  # From a test pod, reach the backend service
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    wget -qO- http://backend-svc
  ```

- Test cross-namespace access:

  ```shell
  # Create another namespace and test
  k create namespace other
  k run test -n other --rm -it --image=busybox:1.36 --restart=Never -- \
    wget -qO- http://backend-svc.default
  ```

- Debug a broken service:

  ```shell
  # Create a service with the wrong selector
  k create service clusterip broken-svc --tcp=80:80
  # Check endpoints (should be empty)
  k get endpoints broken-svc
  # Fix by creating a proper service
  k delete svc broken-svc
  k expose deployment backend --port=80 --name=broken-svc --selector=app=backend
  k get endpoints broken-svc
  ```

- Cleanup:

  ```shell
  k delete deployment frontend backend
  k delete svc backend-svc frontend-svc broken-svc
  k delete namespace other
  ```

Success Criteria:
- Can create ClusterIP and NodePort services
- Understand port vs targetPort
- Can debug services with no endpoints
- Can access services across namespaces
- Understand when to use each service type
Practice Drills
Section titled “Practice Drills”Drill 1: Service Creation Speed (Target: 2 minutes)
Create services for a deployment as fast as possible:
```shell
# Setup
k create deployment drill-app --image=nginx --replicas=2

# Create a ClusterIP service
k expose deployment drill-app --port=80 --name=drill-clusterip

# Create a NodePort service
k expose deployment drill-app --port=80 --type=NodePort --name=drill-nodeport

# Verify both
k get svc drill-clusterip drill-nodeport

# Generate YAML
k expose deployment drill-app --port=80 --dry-run=client -o yaml > svc.yaml

# Cleanup
k delete deployment drill-app
k delete svc drill-clusterip drill-nodeport
rm svc.yaml
```

Drill 2: Multi-Port Service (Target: 3 minutes)
```shell
# Create the deployment
k create deployment multi-port --image=nginx

# Create a multi-port service from YAML
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: multi-port-svc
spec:
  selector:
    app: multi-port
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF

# Verify
k describe svc multi-port-svc

# Cleanup
k delete deployment multi-port
k delete svc multi-port-svc
```

Drill 3: Service Discovery (Target: 3 minutes)
```shell
# Create the service
k create deployment web --image=nginx
k expose deployment web --port=80

# Test DNS resolution
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web

# Test the full FQDN
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web.default.svc.cluster.local

# Test connectivity
k run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://web

# Cleanup
k delete deployment web
k delete svc web
```

Drill 4: Endpoint Debugging (Target: 4 minutes)
```shell
# Create a deployment with specific labels
k create deployment endpoint-test --image=nginx
k label deployment endpoint-test tier=web --overwrite

# Create a service with the WRONG selector (intentionally broken)
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: broken-endpoints
spec:
  selector:
    app: wrong-label   # This won't match!
  ports:
  - port: 80
EOF

# Observe: no endpoints
k get endpoints broken-endpoints
# ENDPOINTS: <none>

# Debug: check what the selector should be
k get pods --show-labels

# Fix: delete and recreate with the correct selector
k delete svc broken-endpoints
k expose deployment endpoint-test --port=80 --name=fixed-endpoints

# Verify: endpoints exist now
k get endpoints fixed-endpoints

# Cleanup
k delete deployment endpoint-test
k delete svc fixed-endpoints
```

Drill 5: Cross-Namespace Access (Target: 3 minutes)
```shell
# Create a service in the default namespace
k create deployment app --image=nginx
k expose deployment app --port=80

# Create another namespace
k create namespace testing

# Access from the other namespace - short form
k run test -n testing --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://app.default

# Access with the FQDN
k run test -n testing --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://app.default.svc.cluster.local

# Cleanup
k delete deployment app
k delete svc app
k delete namespace testing
```

Drill 6: NodePort Specific Port (Target: 3 minutes)
```shell
# Create the deployment
k create deployment nodeport-test --image=nginx

# Create a NodePort service with a specific port
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: specific-nodeport
spec:
  type: NodePort
  selector:
    app: nodeport-test
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # Specific port
EOF

# Verify the port
k get svc specific-nodeport
# Should show 80:30080/TCP

# Cleanup
k delete deployment nodeport-test
k delete svc specific-nodeport
```

Drill 7: ExternalName Service (Target: 2 minutes)
```shell
# Create an ExternalName service
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: external-api
spec:
  type: ExternalName
  externalName: api.example.com
EOF

# Check the service (no ClusterIP!)
k get svc external-api
# Note: CLUSTER-IP shows as <none>

# Test DNS resolution
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup external-api
# Shows a CNAME to api.example.com

# Cleanup
k delete svc external-api
```

Drill 8: Challenge - Complete Service Workflow
Without looking at solutions:
- Create deployment `challenge-app` with nginx, 3 replicas
- Expose as a ClusterIP service on port 80
- Verify endpoints show 3 pod IPs
- Scale deployment to 5 replicas
- Verify endpoints now show 5 pod IPs
- Change service to NodePort type
- Get the NodePort number
- Cleanup everything
```shell
# YOUR TASK: Complete in under 5 minutes
```

Solution
```shell
# 1. Create the deployment
k create deployment challenge-app --image=nginx --replicas=3

# 2. Expose as ClusterIP
k expose deployment challenge-app --port=80

# 3. Verify 3 endpoints
k get endpoints challenge-app

# 4. Scale to 5
k scale deployment challenge-app --replicas=5

# 5. Verify 5 endpoints
k get endpoints challenge-app

# 6. Change to NodePort (delete and recreate)
k delete svc challenge-app
k expose deployment challenge-app --port=80 --type=NodePort

# 7. Get the NodePort
k get svc challenge-app -o jsonpath='{.spec.ports[0].nodePort}'

# 8. Cleanup
k delete deployment challenge-app
k delete svc challenge-app
```

Next Module
Module 3.2: Endpoints & EndpointSlices - a deep-dive into how services track pods.