Module 5.1: Services
Complexity: [MEDIUM] - Core networking concept, multiple types to understand
Time to Complete: 45-55 minutes
Prerequisites: Module 1.1 (Pods), Module 2.1 (Deployments), understanding of basic networking
Learning Outcomes
After completing this module, you will be able to:
- Create ClusterIP, NodePort, and LoadBalancer Services to expose applications
- Debug Service connectivity issues using endpoint inspection, DNS resolution, and port verification
- Explain how Services use label selectors to route traffic to the correct pods
- Compare Service types and choose the appropriate one for internal vs external access patterns
Why This Module Matters
Services provide stable networking for pods. Since pods are ephemeral and get new IPs when recreated, you need Services to provide consistent access to your applications. Services are fundamental to how applications communicate in Kubernetes.
The CKAD exam tests:
- Creating Services (ClusterIP, NodePort, LoadBalancer)
- Understanding Service discovery
- Debugging Service connectivity
- Working with endpoints
The Phone Directory Analogy
Services are like a company phone directory. Employees (pods) come and go, change desks (IPs), but the department extension (Service) stays the same. When you call “Sales” (Service name), the system routes to whoever is currently working there. The directory (DNS) translates names to numbers, and the switchboard (kube-proxy) routes the call.
Service Types
ClusterIP (Default)
Internal-only access within the cluster:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP          # Default, can be omitted
  selector:
    app: my-app
  ports:
    - port: 80             # Service port
      targetPort: 8080     # Container port
```

```bash
# Create imperatively
k expose deployment my-app --port=80 --target-port=8080
```
```bash
# Access from within cluster
curl http://my-service:80
curl http://my-service.default.svc.cluster.local:80
```

NodePort
Exposes on each node’s IP at a static port:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80           # Service port (ClusterIP)
      targetPort: 8080   # Container port
      nodePort: 30080    # Node port (30000-32767)
```

```bash
# Create imperatively
k expose deployment my-app --type=NodePort --port=80 --target-port=8080
```
```bash
# Access from outside cluster
curl http://<node-ip>:30080
```

LoadBalancer
Provisions an external load balancer (cloud environments):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

```bash
# Create imperatively
k expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
```
```bash
# Get external IP
k get svc my-loadbalancer
# EXTERNAL-IP column shows the LB IP
```

Pause and predict: You have a Deployment with 3 replicas labeled `app: web`. You create a Service with selector `app: webapp`. How many endpoints will the Service have? Why?
ExternalName
Maps to an external DNS name (no proxying):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.example.com
```

Service Discovery
DNS Names
Kubernetes creates DNS records for Services:
```
<service-name>.<namespace>.svc.cluster.local
```

| DNS Name | Resolves To |
|---|---|
| `my-service` | Service in the same namespace |
| `my-service.default` | Service in the `default` namespace |
| `my-service.default.svc` | Service in the `default` namespace, with the `svc` suffix |
| `my-service.default.svc.cluster.local` | Full FQDN |
Environment Variables
Pods get environment variables for Services that existed when the pod started:
```bash
# Inside a pod
env | grep MY_SERVICE
# MY_SERVICE_SERVICE_HOST=10.96.0.1
# MY_SERVICE_SERVICE_PORT=80
```

Visualization
```
┌─────────────────────────────────────────────────────────────┐
│                       Service Types                         │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ClusterIP (Internal Only)                                  │
│    cluster.local:80 ──► Pod:8080                            │
│                     ──► Pod:8080                            │
│                     ──► Pod:8080                            │
│                                                             │
│  NodePort (ClusterIP + Node Access)                         │
│    <NodeIP>:30080 ──► ClusterIP:80 ──► Pods                 │
│                                                             │
│  LoadBalancer (NodePort + External LB)                      │
│    <ExternalIP>:80 ──► NodePort ──► ClusterIP ──► Pods      │
│                                                             │
│  Service Port Flow:                                         │
│    External ──► nodePort ──► port ──► targetPort            │
│     :80          :30080       :80      :8080                │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

Selectors and Endpoints
How Services Find Pods
Services use label selectors to find pods:
```yaml
# Service
spec:
  selector:
    app: my-app
    tier: frontend
```

```yaml
# Pod (must match ALL labels)
metadata:
  labels:
    app: my-app
    tier: frontend
```

Endpoints
Endpoints are automatically created/updated:
```bash
# View endpoints
k get endpoints my-service
# NAME         ENDPOINTS                         AGE
# my-service   10.244.0.5:8080,10.244.0.6:8080   5m
```
```bash
# Describe shows pod IPs
k describe endpoints my-service
```

Stop and think: What is the difference between `port`, `targetPort`, and `nodePort` in a Service spec? If you only specify `port: 80` and omit `targetPort`, what value does `targetPort` default to?
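One related detail worth knowing: `targetPort` can also be a string referencing a *named* container port, so the Service keeps routing correctly even if the container’s port number changes. A minimal sketch, assuming illustrative names (`web-svc`, `http-web`, and the `nginx` image are not from this module):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: http-web     # References the container port *name* below
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx             # Placeholder image for illustration
      ports:
        - name: http-web       # The Service resolves targetPort by this name
          containerPort: 8080
```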
No Matching Pods?
If the selector doesn’t match any pods:
```bash
k get endpoints my-service
# NAME         ENDPOINTS   AGE
# my-service   <none>      5m
```

Headless Services
For direct pod discovery without load balancing:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None   # Makes it headless
  selector:
    app: my-app
  ports:
    - port: 80
```

DNS returns all pod IPs instead of the Service IP:
```bash
# Returns multiple A records (one per pod)
nslookup headless-svc.default.svc.cluster.local
```

Use cases: StatefulSets, databases, peer discovery.
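To make the StatefulSet use case concrete, here is a sketch pairing a StatefulSet with a headless Service like the one above (the name `db` and the `nginx` placeholder image are illustrative). `spec.serviceName` must name the headless Service; each pod then gets a stable per-pod DNS record such as `db-0.headless-svc.default.svc.cluster.local`.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: headless-svc    # The headless Service governing this StatefulSet
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app            # Matches the headless Service's selector
    spec:
      containers:
        - name: app
          image: nginx         # Placeholder image for illustration
          ports:
            - containerPort: 80
```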
Multi-Port Services
```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-port
spec:
  selector:
    app: my-app
  ports:
    - name: http       # Name required for multi-port Services
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```

Session Affinity
Route the same client to the same pod:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: sticky-service
spec:
  selector:
    app: my-app
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # 3 hours
  ports:
    - port: 80
```

Quick Reference
```bash
# Create Service
k expose deployment NAME --port=80 --target-port=8080
k expose deployment NAME --type=NodePort --port=80
k expose deployment NAME --type=LoadBalancer --port=80
```
```bash
# View Services
k get svc
k describe svc NAME
```
```bash
# View Endpoints
k get endpoints NAME
k get ep NAME
```
```bash
# Debug DNS
k run tmp --image=busybox --rm -it --restart=Never -- nslookup my-service
```
```bash
# Test connectivity
k run tmp --image=busybox --rm -it --restart=Never -- wget -qO- my-service:80
```

Did You Know?
- kube-proxy doesn’t actually proxy traffic. Despite its name, it configures iptables/IPVS rules. Traffic flows directly from source to destination pod.
- Services exist cluster-wide even though they’re namespaced. The DNS name includes the namespace, but the underlying ClusterIP works across namespaces.
- NodePort uses ALL nodes. Even nodes without the target pods will forward traffic to the correct pod.
- The port range 30000-32767 is configurable via the kube-apiserver’s `--service-node-port-range` flag.
Common Mistakes
| Mistake | Why It Hurts | Solution |
|---|---|---|
| Selector doesn’t match pod labels | Service has no endpoints | `k get ep` to verify, fix labels |
| Wrong `targetPort` | Connection refused | Match the container’s listening port |
| Using pod IP instead of Service | Breaks when pod restarts | Always use Service name/IP |
| Forgetting namespace in DNS | Can’t reach service | Use `svc.namespace` or the full FQDN |
| NodePort without firewall rule | Can’t access from outside | Open the node port in the cloud firewall |
- A developer creates a Service with `port: 80` and `targetPort: 8080`. Clients connect to the Service on port 80 but get “connection refused.” The pods are Running and the application listens on port 80 (not 8080). What’s wrong and how do you fix it?

  Answer: `targetPort` is where the Service forwards traffic; it must match the port the application actually listens on inside the container. The Service is forwarding to port 8080 but the app listens on port 80, so the connection is refused at the pod level. Fix by changing `targetPort: 8080` to `targetPort: 80` to match the application’s listening port. Remember: `port` is what clients use to reach the Service, and `targetPort` is what the pod is actually listening on. They can be the same or different values.

- After deploying a new application, `kubectl get endpoints myservice` shows `<none>` even though 3 pods are Running and Ready. The Service was created with `kubectl expose deployment myapp --port=80`. What is the most likely cause?

  Answer: The Service selector doesn’t match the pod labels. `kubectl expose` creates a Service with a selector matching the deployment’s pod template labels: if the deployment name is `myapp`, the pods have `app: myapp` labels, and the Service selector is `app: myapp`. But if the pods were created separately or the labels were changed, the selector won’t match. Debug by comparing `kubectl describe svc myservice | grep Selector` with `kubectl get pods --show-labels`. Fix by patching the Service selector to match the actual pod labels, or correcting the pod labels to match the Service selector.

- A microservice in the `orders` namespace needs to call a service named `payments` in the `billing` namespace. The developer tries `curl http://payments:80` from inside a pod and gets a DNS resolution failure. What URL should they use?

  Answer: Short DNS names (like `payments`) only resolve within the same namespace. To reach a Service in a different namespace, use `payments.billing` or the full FQDN `payments.billing.svc.cluster.local`. The DNS hierarchy in Kubernetes is `<service-name>.<namespace>.svc.cluster.local`. When you omit the namespace, the pod’s own namespace is used for resolution. This is a very common debugging scenario: cross-namespace communication always requires the namespace in the DNS name.

- A team exposes their application with a NodePort Service. External users can reach the app on `node1:30080` but not on `node2:30080`, even though both nodes are healthy. What should you check?

  Answer: NodePort Services listen on ALL nodes in the cluster, regardless of where the pods run. If `node2:30080` doesn’t respond, the issue is likely a firewall or cloud security-group rule blocking port 30080 on node2. Check: (1) `kubectl get svc` to confirm the NodePort is correctly assigned, (2) cloud provider security groups or firewall rules for all nodes, (3) `kubectl get endpoints` to verify the Service has healthy endpoints. kube-proxy configures iptables/IPVS rules on every node to forward NodePort traffic to the correct pod, even if the pod runs on a different node. The network path from client to node to pod is the key thing to trace.
Hands-On Exercise
Task: Create and test different Service types.
Setup:
```bash
# Create a deployment
k create deployment web --image=nginx --replicas=3
```
```bash
# Wait for pods
k wait --for=condition=Ready pod -l app=web --timeout=60s
```

Part 1: ClusterIP Service
```bash
# Create ClusterIP service
k expose deployment web --port=80 --target-port=80
```
```bash
# Verify endpoints
k get endpoints web
```
```bash
# Test from within cluster
k run test --image=busybox --rm -it --restart=Never -- wget -qO- web:80
```
```bash
# Check DNS
k run test --image=busybox --rm -it --restart=Never -- nslookup web.default.svc.cluster.local
```

Part 2: NodePort Service
```bash
# Delete ClusterIP service
k delete svc web
```
```bash
# Create NodePort service
k expose deployment web --type=NodePort --port=80 --target-port=80
```
```bash
# Get assigned NodePort
k get svc web -o jsonpath='{.spec.ports[0].nodePort}'
echo
```
```bash
# Test (if you have node access)
# curl http://<node-ip>:<nodeport>
```

Part 3: Debug No Endpoints
```bash
# Create service with wrong selector
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: broken-svc
spec:
  selector:
    app: wrong-label
  ports:
    - port: 80
EOF
```
```bash
# Check endpoints (should be empty)
k get endpoints broken-svc
```
```bash
# Fix by patching selector
k patch svc broken-svc -p '{"spec":{"selector":{"app":"web"}}}'
```
```bash
# Verify endpoints now exist
k get endpoints broken-svc
```

Cleanup:
```bash
k delete deployment web
k delete svc web broken-svc
```

Practice Drills
Section titled “Practice Drills”Drill 1: Create ClusterIP Service (Target: 1 minute)
```bash
k create deployment drill1 --image=nginx
k expose deployment drill1 --port=80
k get svc drill1
k get ep drill1
k delete deploy/drill1 svc/drill1
```
```bash
k create deployment drill2 --image=nginx
k expose deployment drill2 --type=NodePort --port=80 --target-port=80
```
```bash
# Get NodePort
k get svc drill2 -o jsonpath='{.spec.ports[0].nodePort}'
echo
```
```bash
k delete deploy/drill2 svc/drill2
```

Drill 3: Test DNS Resolution (Target: 2 minutes)
```bash
k create deployment drill3 --image=nginx
k expose deployment drill3 --port=80
```
```bash
# Test DNS
k run dns-test --image=busybox --rm -it --restart=Never -- nslookup drill3
```
```bash
k delete deploy/drill3 svc/drill3
```

Drill 4: Service with Named Port (Target: 2 minutes)
```bash
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: drill4
spec:
  selector:
    app: drill4
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: metrics
      port: 9090
      targetPort: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drill4
spec:
  replicas: 2
  selector:
    matchLabels:
      app: drill4
  template:
    metadata:
      labels:
        app: drill4
    spec:
      containers:
        - name: nginx
          image: nginx
EOF
```
```bash
k get svc drill4
k get ep drill4
k delete deploy/drill4 svc/drill4
```

Drill 5: Debug Service Connectivity (Target: 3 minutes)
```bash
# Create deployment and broken service
k create deployment drill5 --image=nginx
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: drill5
spec:
  selector:
    app: wrong
  ports:
    - port: 80
EOF
```
```bash
# Debug
k get ep drill5                         # No endpoints
k get pods --show-labels                # Check pod labels
k describe svc drill5 | grep Selector   # Check service selector
```
```bash
# Fix
k patch svc drill5 -p '{"spec":{"selector":{"app":"drill5"}}}'
k get ep drill5   # Should now have endpoints
```
```bash
k delete deploy/drill5 svc/drill5
```

Drill 6: Cross-Namespace Service Access (Target: 3 minutes)
```bash
# Create namespace and service
k create ns drill6
k create deployment drill6-app --image=nginx -n drill6
k expose deployment drill6-app --port=80 -n drill6
```
```bash
# Access from default namespace
k run test --image=busybox --rm -it --restart=Never -- wget -qO- drill6-app.drill6:80
```
```bash
k delete ns drill6
```

Next Module
Module 5.2: Ingress - HTTP routing and TLS termination.