
Module 1.5: Services - Stable Networking

Hands-On Lab Available: K8s Cluster (beginner, 25 min). Launch Lab ↗ (opens in Killercoda in a new tab)

Complexity: [MEDIUM] - Essential networking concept

Time to Complete: 35-40 minutes

Prerequisites: Module 1.4 (Deployments)


After this module, you will be able to:

  • Create Services to expose pods and explain why pods need Services (pod IPs are ephemeral)
  • Choose between ClusterIP, NodePort, and LoadBalancer and explain when to use each
  • Test service connectivity from inside the cluster using curl and DNS names
  • Debug a service that can’t reach its pods by checking labels, selectors, and endpoints

It was Black Friday, and the e-commerce platform was struggling. The engineering team noticed their payment processing pods were crashing and restarting due to memory leaks. While Kubernetes successfully recreated the pods to maintain capacity, the frontend application was hardcoded to talk to the old pod IP addresses. Every time a payment pod restarted, transactions failed until an engineer manually updated the frontend configuration with the new IP. They were losing thousands of dollars a minute because their internal networking couldn’t adapt to ephemeral infrastructure.

Pods are ephemeral—they come and go, each with a different IP address. Services provide stable networking: a fixed IP and DNS name that routes to your Pods, no matter how many there are or how often they change.


┌─────────────────────────────────────────────────────┐
│ WITHOUT SERVICES                                    │
├─────────────────────────────────────────────────────┤
│                                                     │
│ Pod IPs change constantly:                          │
│                                                     │
│   Time 0: [Pod: 10.1.0.5]                           │
│   Time 1: Pod crashes, recreated                    │
│   Time 2: [Pod: 10.1.0.9]  ← Different IP!          │
│                                                     │
│ Problem: How do other apps find your Pod?           │
│                                                     │
├─────────────────────────────────────────────────────┤
│ WITH SERVICES                                       │
├─────────────────────────────────────────────────────┤
│                                                     │
│ Service: my-app.default.svc.cluster.local           │
│ ClusterIP: 10.96.0.100 (stable!)                    │
│            │                                        │
│     ┌──────┴──────┐                                 │
│     ▼             ▼                                 │
│ [Pod: 10.1.0.5]  [Pod: 10.1.0.9]                    │
│                                                     │
│ Service routes to healthy pods, IPs don't matter    │
│                                                     │
└─────────────────────────────────────────────────────┘

Stop and think: If a Deployment scales up to 10 Pods, how many IP addresses does the associated Service have? (Answer: Just one. The Service maintains a single, stable IP address while distributing traffic among all 10 backing Pods.)


Terminal window
# Expose a deployment
kubectl expose deployment nginx --port=80

# With a specific type
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the service
kubectl get services
kubectl get svc   # Short form

service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx          # Match pod labels
  ports:
    - port: 80          # Service port
      targetPort: 80    # Container port
  type: ClusterIP       # Default type

Terminal window
kubectl apply -f service.yaml

Choosing the right Service type is critical for security and architecture.

| Type | Accessibility | Best For | Trade-off |
| --- | --- | --- | --- |
| ClusterIP | Internal only | Backend databases, internal APIs | Cannot be reached from outside the cluster. |
| NodePort | External (high port) | Quick debugging, bare-metal clusters | Exposes high ports (30000+), hard for external clients to use. |
| LoadBalancer | External (standard port) | Public-facing web apps in the cloud | Costs money per Service, relies on an external cloud provider. |

Internal-only access within the cluster:

apiVersion: v1
kind: Service
metadata:
  name: internal-api
spec:
  type: ClusterIP       # Default, can omit
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080

Terminal window
# Access from within the cluster only
curl http://internal-api:80

Exposes on every node’s IP at a static port:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # Optional: 30000-32767

Terminal window
# Access from outside the cluster
curl http://<node-ip>:30080

Creates external load balancer (cloud environments):

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80

Terminal window
# Get the external IP (cloud only)
kubectl get svc web-lb
# The EXTERNAL-IP column shows the load balancer IP

┌───────────────────────────────────────────────────────────┐
│ SERVICE TYPES                                             │
├───────────────────────────────────────────────────────────┤
│                                                           │
│ ClusterIP (Internal Only)                                 │
│  ┌─────────────────────────────────────────────────────┐  │
│  │ ClusterIP:80 ──► Pod:8080                           │  │
│  │              ──► Pod:8080                           │  │
│  │ (Accessible only within cluster)                    │  │
│  └─────────────────────────────────────────────────────┘  │
│                                                           │
│ NodePort (External via Node)                              │
│  ┌─────────────────────────────────────────────────────┐  │
│  │ <NodeIP>:30080 ──► ClusterIP:80 ──► Pods            │  │
│  │ (Accessible from outside)                           │  │
│  └─────────────────────────────────────────────────────┘  │
│                                                           │
│ LoadBalancer (Cloud External)                             │
│  ┌─────────────────────────────────────────────────────┐  │
│  │ <ExternalIP>:80 ──► NodePort ──► ClusterIP ──► Pods │  │
│  │ (Cloud provider manages LB)                         │  │
│  └─────────────────────────────────────────────────────┘  │
│                                                           │
└───────────────────────────────────────────────────────────┘

Kubernetes creates DNS entries for Services:

<service-name>.<namespace>.svc.cluster.local
Terminal window
# From any pod, you can reach:
curl nginx # Same namespace
curl nginx.default # Explicit namespace
curl nginx.default.svc # More explicit
curl nginx.default.svc.cluster.local # Full FQDN
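The FQDN is just string concatenation of name, namespace, and the cluster suffix. A quick sketch in plain shell (the service and namespace names here are hypothetical):

```shell
#!/bin/sh
# Build a Service FQDN from its name and namespace (hypothetical values)
service="payment-api"
namespace="finance"
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"   # → payment-api.finance.svc.cluster.local
```

From a pod in the same namespace, the bare name resolves; from any other namespace, you need at least `name.namespace`.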
Terminal window
# Create deployment and service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80
# Test DNS from another pod
kubectl run test --image=busybox --rm -it -- wget -qO- nginx
# Returns nginx HTML!
# Test with full DNS name
kubectl run test --image=busybox --rm -it -- nslookup nginx.default.svc.cluster.local

Services use label selectors:

# Service
spec:
  selector:
    app: nginx
    tier: frontend

# Pod (must match ALL labels)
metadata:
  labels:
    app: nginx
    tier: frontend
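As a concrete pairing, here is a sketch of a Service and a Deployment whose pod template carries both labels the selector demands (all names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc            # hypothetical name
spec:
  selector:
    app: nginx
    tier: frontend              # pods must carry BOTH labels
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      tier: frontend
  template:
    metadata:
      labels:
        app: nginx              # matches the Service selector
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

Note that the Deployment's own `matchLabels` selector and the Service's selector are independent: the Service only cares about the labels on the pods themselves.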

Pause and predict: What happens to your Service if you manually edit a running Pod and remove the tier: frontend label? (Answer: The Service immediately drops that Pod from its endpoints list because it no longer perfectly matches the selector, and no further traffic will be routed to it.)

Terminal window
# Check what pods a service targets
kubectl get endpoints nginx
# Shows IP:Port of matched pods

spec:
  ports:
    - port: 80          # Service port (what clients use)
      targetPort: 8080  # Container port (where app listens)
      protocol: TCP     # TCP (default) or UDP

┌────────────────────────────────────────┐
│ Client ──► Service:80 ──► Pod:8080     │
│                │             │         │
│             "port"      "targetPort"   │
└────────────────────────────────────────┘
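targetPort can also reference a *named* container port instead of a number, which lets pods change port numbers without touching the Service. A sketch with hypothetical names:

```yaml
# Pod template fragment: give the container port a name
spec:
  containers:
    - name: app
      image: nginx              # hypothetical image
      ports:
        - name: http            # the name the Service will reference
          containerPort: 8080
---
# Service fragment: targetPort refers to the port NAME
spec:
  ports:
    - port: 80
      targetPort: http          # must match the container port's name exactly
```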

Tales from the Trenches: The Phantom Outage


A major streaming company experienced a bizarre outage where exactly 10% of user requests to their video catalog failed with connection timeouts. The pods were all showing as healthy, and the Service was active. After hours of debugging, a senior engineer ran kubectl get endpoints catalog-service.

They discovered 10 endpoints, but one of the IP addresses belonged to a pod that had been manually deleted directly via the container runtime (Docker), bypassing Kubernetes entirely. The Service’s underlying iptables rules were still routing traffic to a dead IP! The fix? Restarting the kube-proxy component on the affected node to flush the stale routing rules. The lesson: Always let Kubernetes manage your pod lifecycle, and always check your endpoints when traffic vanishes!


  • Services use iptables or IPVS. kube-proxy sets up rules that route Service IPs to Pod IPs. No actual proxy process handles each connection.
  • ClusterIP is virtual. No network interface has this IP. It only exists in iptables rules.
  • NodePort opens the port on ALL nodes. Even a node without a matching pod will forward the traffic to a node that has one.
  • Services load balance randomly by default. Each connection might hit a different pod.
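The per-connection random choice can be sketched in plain shell. This is an illustration of the behavior only, not kube-proxy's actual mechanism (kube-proxy programs kernel-level iptables/IPVS rules); the pod IPs are made up:

```shell
#!/bin/bash
# Simulate random per-connection backend selection across three pod IPs
backends=("10.1.0.5" "10.1.0.9" "10.1.0.12")   # hypothetical pod IPs
for conn in 1 2 3 4 5; do
  pick=$(( RANDOM % ${#backends[@]} ))          # independent pick per connection
  echo "connection $conn -> ${backends[$pick]}"
done
```

Because each connection picks independently, short bursts of traffic can land unevenly across pods; only over many connections does the distribution even out.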

| Mistake | Why It Hurts | Solution |
| --- | --- | --- |
| Selector doesn’t match pod labels | Service has no endpoints and traffic drops into a black hole. | Check kubectl get endpoints <service-name> to verify pods are matched. |
| Wrong targetPort | Connection refused errors because the Service sends traffic to a port where nothing is listening. | Ensure targetPort matches the container’s actual listening port. |
| Using pod IP instead of service name | Breaks your application the moment a pod restarts and gets a new IP. | Always configure apps to use the Service DNS name. |
| Forgetting to set protocol: UDP | DNS or custom UDP services fail because Services default to TCP routing. | Explicitly define protocol: UDP in the port configuration. |
| Exposing every microservice as a LoadBalancer | Skyrocketing cloud bills, as each LoadBalancer provisions a costly external cloud resource. | Use ClusterIP for internal services and an Ingress for HTTP routing. |
| Misconfiguring named ports | Services fail to route if the targetPort string doesn’t perfectly match the container’s port name. | Double-check spelling and case between the Service targetPort and the Pod’s ports.name. |
| Using NodePort for production public traffic | Hard to manage, requires clients to use non-standard ports (30000+), and lacks advanced routing. | Use LoadBalancer or Ingress for production external access. |
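For the UDP pitfall above, a minimal sketch of a Service that explicitly routes UDP (the name and label are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: custom-dns              # hypothetical name
spec:
  selector:
    app: dns                    # hypothetical label
  ports:
    - port: 53
      targetPort: 53
      protocol: UDP             # Services default to TCP; UDP must be explicit
```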

  1. Scenario: A junior developer hardcodes the IP address of a backend database pod into the frontend configuration. The next day, the frontend cannot reach the database, even though the database pod is running perfectly. Why did this happen, and what is the Kubernetes-native solution?

    Answer Pods are ephemeral, meaning they are frequently destroyed and recreated by controllers like Deployments. When the database pod was recreated (due to a node update or crash), it received a new IP address, breaking the hardcoded frontend configuration. The solution is to create a Kubernetes Service for the database, which provides a stable, unchanging IP address and DNS name that the frontend can reliably use, regardless of pod churn.
  2. Scenario: You are deploying a Redis cache that should strictly only be accessed by your backend API pods running in the same cluster. Security mandates that this cache must not be reachable from the public internet. Which Service type should you choose and why?

    Answer You should choose `ClusterIP`, which is the default Service type in Kubernetes. A ClusterIP service assigns an internal IP address that is only routable from within the cluster itself. This perfectly satisfies the security requirement by preventing any external ingress traffic from reaching the Redis cache, while allowing the backend API pods to communicate with it seamlessly.
  3. Scenario: You’ve deployed a new web application and created a Service for it. However, when you try to access the Service, you get a “connection refused” error. You run kubectl get pods --show-labels and see your pods have app=frontend,env=prod. Your Service has a selector of app=frontend,tier=web. Why is the traffic failing?

    Answer Services use label selectors to identify which Pods should receive traffic. For a Service to route traffic to a Pod, the Pod must possess *all* the labels specified in the Service's selector. In this scenario, the Service is looking for Pods with `tier=web`, but the Pods do not have this label. As a result, the Service has zero endpoints and drops the traffic. You must update either the Pod labels or the Service selector to match perfectly.
  4. Scenario: A developer is troubleshooting an issue from within a busybox testing pod in the default namespace. They need to test connectivity to a payment API Service that resides in the finance namespace. What exact DNS name should they use with their curl command?

    Answer The developer should use `payment-api.finance` or the fully qualified domain name (FQDN) `payment-api.finance.svc.cluster.local`. Because the testing pod and the target Service are in different namespaces, simply curling `payment-api` will fail, as Kubernetes DNS resolves bare service names to the pod's *current* namespace by default. Appending the target namespace ensures the DNS resolver finds the correct Service.
  5. Scenario: Your team is migrating a legacy application to Kubernetes on AWS. The application needs to be accessible to external customers over the internet on standard port 80. You initially tried NodePort, but the security team rejected exposing ports in the 30000+ range. Which Service type is the correct architectural choice here?

    Answer You should use the `LoadBalancer` Service type. When you create a LoadBalancer Service in a supported cloud environment (like AWS, GCP, or Azure), Kubernetes automatically provisions a native cloud load balancer. This external load balancer routes traffic from standard ports (like 80 or 443) on a public IP address directly to your cluster, bypassing the need for clients to use high NodePorts and satisfying the security team's requirements.
  6. Scenario: You have created a Service named auth-svc and a Deployment of auth pods. You want to verify that Kubernetes has successfully linked the Service to the Pods before you test the application from another microservice. What kubectl command should you run to prove the Service has discovered the Pod IPs?

    Answer You should run `kubectl get endpoints auth-svc` (or `kubectl describe svc auth-svc`). The Endpoints object is automatically created and updated by Kubernetes to maintain a list of the actual IP addresses of the Pods that match the Service's label selector. If the endpoints list is empty, it immediately tells you there is a label mismatch or the pods are crashing, saving you time debugging the application code.
  7. Scenario: Your Node.js application listens on port 3000 inside its container. You want other pods in the cluster to reach it by calling http://node-backend:80. How do you configure the port and targetPort in the Service definition to make this happen?

    Answer You must set the Service's `port: 80` and `targetPort: 3000`. The `port` field defines the port that the Service itself exposes to clients (the virtual port that other pods will call). The `targetPort` defines the actual port where the container application is listening. The Service acts as an internal proxy, seamlessly translating traffic arriving on port 80 and forwarding it to the pod on port 3000.
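The last scenario can be sketched as a manifest; the name and label below are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-backend            # clients call http://node-backend:80
spec:
  selector:
    app: node-backend           # assumed pod label
  ports:
    - port: 80                  # Service port exposed to other pods
      targetPort: 3000          # where the Node.js app actually listens
```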

Task: Create a deployment and expose it via Service.

Terminal window
# 1. Create deployment
kubectl create deployment web --image=nginx --replicas=3
# 2. Expose as ClusterIP
kubectl expose deployment web --port=80
# 3. Check service
kubectl get svc web
kubectl get endpoints web
# 4. Test from within cluster
kubectl run test --image=busybox --rm -it -- wget -qO- web
# 5. Create NodePort service
kubectl expose deployment web --port=80 --type=NodePort --name=web-external
# 6. Get NodePort
kubectl get svc web-external
# Note the port in 30000-32767 range
# 7. Cleanup
kubectl delete deployment web
kubectl delete svc web web-external

Success criteria:

  • The internal web Service is created and has a ClusterIP assigned.
  • kubectl get endpoints web shows three distinct pod IP addresses.
  • The wget command from the temporary pod successfully returns the Nginx welcome HTML.
  • The web-external Service is created with a TYPE of NodePort and a port in the 30000-32767 range.

Services provide stable networking:

Types:

  • ClusterIP - Internal only (default)
  • NodePort - External via node port
  • LoadBalancer - External via cloud LB

Key concepts:

  • Selectors match pod labels
  • DNS names for discovery
  • Port mapping (port → targetPort)
  • Endpoints show matched pods

Commands:

  • kubectl expose deployment NAME --port=PORT
  • kubectl get svc
  • kubectl get endpoints

Module 1.6: ConfigMaps and Secrets - Managing configuration.