Module 3.3: DNS & CoreDNS
Complexity: [MEDIUM] - Critical infrastructure component
Time to Complete: 40-50 minutes
Prerequisites: Module 3.1 (Services), Module 3.2 (Endpoints)
What You’ll Be Able to Do
After this module, you will be able to:
- Resolve service names to IPs using Kubernetes DNS conventions (service.namespace.svc.cluster.local)
- Debug DNS failures by checking CoreDNS pods, configmap, and testing resolution from pods
- Configure custom DNS entries and upstream DNS forwarding in CoreDNS
- Explain how DNS-based service discovery enables microservice communication
Why This Module Matters
DNS is how pods find services. Every time a pod makes a request to my-service, DNS resolves that name to an IP address. If DNS breaks, your entire cluster’s service discovery breaks. Understanding CoreDNS is essential for troubleshooting connectivity issues.
The CKA exam tests DNS debugging, CoreDNS configuration, and understanding how Kubernetes names resolve. You’ll need to troubleshoot DNS issues and understand the resolution hierarchy.
The Phone Book Analogy
DNS is your cluster’s phone book. Instead of remembering that the “web-service” lives at IP 10.96.45.123, you just dial “web-service” and DNS looks up the number for you. CoreDNS is the phone operator who maintains this phone book and answers lookups.
What You’ll Learn
By the end of this module, you’ll be able to:
- Understand how Kubernetes DNS works
- Troubleshoot DNS resolution issues
- Configure CoreDNS
- Use different DNS name formats
- Debug pods with DNS problems
Did You Know?
- CoreDNS replaced kube-dns: Before Kubernetes 1.11, kube-dns handled DNS. CoreDNS is faster, more flexible, and uses plugins for extensibility.
- DNS is the #1 troubleshooting target: Most “network issues” are actually DNS issues. When in doubt, check DNS first!
- Pods get DNS configured automatically: The kubelet injects /etc/resolv.conf into every pod, pointing to the cluster DNS service.
Part 1: DNS Architecture
1.1 How Kubernetes DNS Works
```
Pod runs: curl web-svc
  │
  ▼
/etc/resolv.conf (injected by kubelet)
  nameserver 10.96.0.10          ← CoreDNS Service IP
  search default.svc.cluster.local svc.cluster.local cluster.local
  │
  ▼
CoreDNS Service (10.96.0.10)
  ├─ CoreDNS Pod ─┐
  └─ CoreDNS Pod ─┘   (2 replicas by default)
        │
        ▼
  Query:    web-svc.default.svc.cluster.local
  Response: 10.96.45.123 (Service ClusterIP)
```
1.2 CoreDNS Components
| Component | Location | Purpose |
|---|---|---|
| CoreDNS Deployment | kube-system namespace | Runs CoreDNS pods |
| CoreDNS Service | kube-system namespace | Stable IP for DNS queries (usually 10.96.0.10) |
| Corefile ConfigMap | kube-system namespace | CoreDNS configuration |
| Pod /etc/resolv.conf | Every pod | Points to CoreDNS service |
1.3 Pod DNS Configuration
Every pod gets this automatically:
```shell
# Inside any pod
cat /etc/resolv.conf

# Output:
nameserver 10.96.0.10   # CoreDNS service IP
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

| Field | Purpose |
|---|---|
| nameserver | IP of CoreDNS service |
| search | Domains to append when resolving short names |
| ndots:5 | If name has <5 dots, try search domains first |
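The interaction of `search` and `ndots` decides which names the resolver actually tries, and in what order. A minimal sketch of that candidate-list logic in plain shell (the search list and ndots value mirror the default-namespace resolv.conf above; the `candidates` function name is made up for illustration):

```shell
#!/bin/sh
# Sketch of glibc-style search handling: a name with fewer dots than
# ndots tries the search domains first, then the name as given.
candidates() {
  name=$1
  ndots=5
  search="default.svc.cluster.local svc.cluster.local cluster.local"
  dots=$(printf '%s' "$name" | awk -F. '{print NF-1}')   # count the dots
  if [ "$dots" -lt "$ndots" ]; then
    for d in $search; do echo "$name.$d"; done
    echo "$name"          # the absolute name is tried last
  else
    echo "$name"          # enough dots: tried as absolute first
    for d in $search; do echo "$name.$d"; done
  fi
}

candidates web-svc
# First candidate: web-svc.default.svc.cluster.local
```

This is why a bare `web-svc` resolves to the service in the pod's own namespace first, and why external names with few dots incur extra lookups before succeeding.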
Part 2: DNS Name Formats
2.1 Service DNS Names
```
Full format (FQDN):
  <service>.<namespace>.svc.<cluster-domain>

Example: web-svc.production.svc.cluster.local
          │        │         │       │
          │        │         │       └─ cluster domain (cluster.local by default)
          │        │         └─ fixed suffix
          │        └─ namespace
          └─ service name
```
2.2 Shorthand Names (Search Domains)
```shell
# From pod in "default" namespace, reaching "web-svc" in "default":
curl web-svc                             # ✓ Works (same namespace)
curl web-svc.default                     # ✓ Works
curl web-svc.default.svc                 # ✓ Works
curl web-svc.default.svc.cluster.local   # ✓ Works (FQDN)

# From pod in "default" namespace, reaching "api" in "production":
curl api                                 # ✗ Fails (wrong namespace)
curl api.production                      # ✓ Works (cross-namespace)
curl api.production.svc.cluster.local    # ✓ Works (FQDN)
```

Pause and predict: A pod in namespace staging runs curl api-service. The cluster has an api-service in both staging and production namespaces. Which one does the pod reach, and why?
2.3 How Search Domains Work
```
Pod in namespace "default" resolves "web-svc":

search default.svc.cluster.local svc.cluster.local cluster.local

Step 1: Try web-svc.default.svc.cluster.local
        └── Found! Returns IP

If not found:
Step 2: Try web-svc.svc.cluster.local
Step 3: Try web-svc.cluster.local
Step 4: Try web-svc (external DNS)
```
2.4 Pod DNS Names
Pods also get DNS names:
```
Pod IP: 10.244.1.5
DNS:    10-244-1-5.default.pod.cluster.local
        └ IP with dashes . namespace . pod . cluster domain

For StatefulSet pods with a headless service:
DNS:    web-0.web-svc.default.svc.cluster.local
        └ pod name . headless service . namespace . svc . cluster domain
```
Part 3: CoreDNS Configuration
3.1 Viewing CoreDNS Components
```shell
# Check CoreDNS pods
k get pods -n kube-system -l k8s-app=kube-dns

# Check CoreDNS deployment
k get deployment coredns -n kube-system

# Check CoreDNS service
k get svc kube-dns -n kube-system
# Note: the Service is named "kube-dns" for compatibility

# View CoreDNS configuration
k get configmap coredns -n kube-system -o yaml
```
3.2 Understanding the Corefile
```yaml
# CoreDNS ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors               # Log errors
        health {             # Health check endpoint
            lameduck 5s
        }
        ready                # Readiness endpoint
        kubernetes cluster.local in-addr.arpa ip6.arpa {   # K8s plugin
            pods insecure                                  # Pod DNS resolution
            fallthrough in-addr.arpa ip6.arpa
            ttl 30                                         # Cache TTL
        }
        prometheus :9153     # Metrics
        forward . /etc/resolv.conf {   # External DNS forwarding
            max_concurrent 1000
        }
        cache 30             # Response caching
        loop                 # Detect loops
        reload               # Auto-reload config
        loadbalance          # Round-robin DNS
    }
```
3.3 Key Corefile Plugins
| Plugin | Purpose |
|---|---|
| kubernetes | Resolves Kubernetes service/pod names |
| forward | Forwards external queries to upstream DNS |
| cache | Caches responses to reduce load |
| errors | Logs DNS errors |
| health | Provides health check endpoint |
| prometheus | Exposes metrics |
| loop | Detects and breaks DNS loops |
3.4 Customizing CoreDNS
```yaml
# Add custom DNS entries
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        # ... existing config ...

        # Add custom hosts (fallthrough lets other queries continue)
        hosts {
            10.0.0.1 custom.example.com
            fallthrough
        }
    }

    # Forward a specific domain to a custom DNS server
    # (a separate server block: forward may appear only once per block)
    example.com:53 {
        forward . 10.0.0.53
    }
```

```shell
# After editing, restart CoreDNS
k rollout restart deployment coredns -n kube-system
```

Stop and think: A pod reports “connection timed out” when calling another service by name. Is this necessarily a DNS problem? What steps would you take to determine whether DNS or the network is at fault?
Part 4: DNS Debugging
4.1 DNS Debugging Workflow
```
DNS Issue?
 │
 ├── Step 1: Test from inside a pod
 │     k run test --rm -it --image=busybox:1.36 -- nslookup <service>
 │     ├── Works? → DNS is fine, issue is elsewhere
 │     └── Fails? → Continue debugging
 │
 ├── Step 2: Check CoreDNS is running
 │     k get pods -n kube-system -l k8s-app=kube-dns
 │     └── Not running? → Fix CoreDNS deployment
 │
 ├── Step 3: Check CoreDNS logs
 │     k logs -n kube-system -l k8s-app=kube-dns
 │     └── Errors? → Check Corefile config
 │
 ├── Step 4: Check pod resolv.conf
 │     k exec <pod> -- cat /etc/resolv.conf
 │     └── Wrong nameserver? → Check kubelet config
 │
 └── Step 5: Test external DNS
       k run test --rm -it --image=busybox:1.36 -- nslookup google.com
       └── Fails? → Check forward config in Corefile
```
4.2 Common DNS Commands
```shell
# Test DNS from inside the cluster
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes

# Test a specific service
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web-svc.default.svc.cluster.local

# Test against a specific DNS server
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web-svc 10.96.0.10

# Check resolv.conf
k exec <pod> -- cat /etc/resolv.conf

# Check CoreDNS logs
k logs -n kube-system -l k8s-app=kube-dns --tail=50

# Verify CoreDNS is responding
k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```
4.3 DNS Debug Pod
Use a debug pod with more tools:
```shell
# Create a debug pod
k run dns-debug --image=nicolaka/netshoot --restart=Never -- sleep 3600

# Use it for debugging
k exec -it dns-debug -- dig web-svc.default.svc.cluster.local
k exec -it dns-debug -- host web-svc
k exec -it dns-debug -- nslookup web-svc

# Cleanup
k delete pod dns-debug
```
4.4 Common DNS Issues
| Symptom | Cause | Solution |
|---|---|---|
| NXDOMAIN | Service doesn’t exist | Check service name/namespace |
| Server failure | CoreDNS down | Check CoreDNS pods |
| Timeout | Network issue to CoreDNS | Check pod network, CNI |
| Wrong IP returned | Stale cache | Restart CoreDNS, check cache TTL |
| External domains fail | Forward config wrong | Check Corefile forward directive |
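The table above maps symptoms to likely causes. As a memory aid, the triage logic can be sketched as a tiny shell helper (the error strings and the `classify` name are illustrative, not from any real tool):

```shell
#!/bin/sh
# Illustrative triage: map common resolver error text to the likely
# cause from the table above. The patterns are examples, not exhaustive.
classify() {
  case "$1" in
    *NXDOMAIN*)     echo "name not found: check service name and namespace" ;;
    *SERVFAIL*)     echo "server failure: check CoreDNS pods" ;;
    *"timed out"*)  echo "timeout: check pod network / CNI path to CoreDNS" ;;
    *)              echo "unclassified: read the CoreDNS logs" ;;
  esac
}

classify "** server can't find web-svc: NXDOMAIN"
# → name not found: check service name and namespace
```

The point is the order of suspicion: a clean NXDOMAIN means DNS answered (your name is wrong), while a timeout means the query never reached a working resolver.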
What would happen if: You set dnsPolicy: Default on a pod running in your cluster. The pod tries to resolve my-service.default.svc.cluster.local. Does it succeed? Why or why not?
Part 5: DNS Policies
5.1 Pod DNS Policies
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-policy-demo
spec:
  dnsPolicy: ClusterFirst   # Default
  containers:
  - name: app
    image: nginx
```

| Policy | Behavior |
|---|---|
| ClusterFirst (default) | Use cluster DNS, fall back to node DNS |
| Default | Use node’s DNS settings (inherit from host) |
| ClusterFirstWithHostNet | Use cluster DNS even with hostNetwork: true |
| None | No DNS config, must specify dnsConfig |
5.2 Custom DNS Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns
spec:
  dnsPolicy: "None"        # Required for custom config
  dnsConfig:
    nameservers:
    - 1.1.1.1              # Custom DNS server
    - 8.8.8.8
    searches:
    - custom.local         # Custom search domain
    - svc.cluster.local
    options:
    - name: ndots
      value: "2"           # Custom ndots
  containers:
  - name: app
    image: nginx
```
5.3 Using hostNetwork with DNS
```yaml
# Pod using the host network but still using cluster DNS
apiVersion: v1
kind: Pod
metadata:
  name: host-network-pod
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # Important!
  containers:
  - name: app
    image: nginx
```
Part 6: SRV Records
6.1 What Are SRV Records?
SRV records include port information along with IP:
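An SRV answer’s data section carries four values: priority, weight, port, and target. A quick sketch pulling them out of a dig-style answer line (the record text is a sample matching this section’s query, and the field positions assume standard dig presentation format):

```shell
#!/bin/sh
# Sketch: split a dig-style SRV answer line into its fields.
# The record text here is a sample, not live output.
line="_http._tcp.web-svc.default.svc.cluster.local. 30 IN SRV 0 100 80 web-svc.default.svc.cluster.local."

# Fields: name ttl class type priority weight port target
set -- $line
echo "priority=$5 weight=$6 port=$7 target=$8"
# → priority=0 weight=100 port=80 target=web-svc.default.svc.cluster.local.
```

Because the port rides along in the record, an SRV-aware client can discover both where and on which port a service listens from a single lookup.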
```shell
# Query the SRV record for a service
dig SRV web-svc.default.svc.cluster.local

# Returns:
# _http._tcp.web-svc.default.svc.cluster.local. 30 IN SRV 0 100 80 web-svc.default.svc.cluster.local.
```
6.2 Named Ports and SRV Records
```yaml
# Service with a named port
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - name: http        # Named port
    port: 80
    targetPort: 8080
```

```shell
# SRV record format: _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local
# Query:
dig SRV _http._tcp.web-svc.default.svc.cluster.local
```
Common Mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Using wrong namespace | NXDOMAIN error | Use FQDN or check namespace |
| Forgetting .svc | Resolution fails | Use service.namespace or FQDN |
| CoreDNS not running | All DNS fails | Check kube-system pods |
| Wrong dnsPolicy | Pod can’t resolve | Use ClusterFirst for cluster services |
| Editing wrong ConfigMap | Config not applied | Edit coredns ConfigMap in kube-system |
- After a cluster upgrade, all pods start failing with “could not resolve host” errors. You check and CoreDNS pods are running. What would you investigate next, and what commands would you use?

  Answer

  Running does not mean healthy. First, verify CoreDNS is actually responding: `k run test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default`. If that fails, check CoreDNS logs for errors: `k logs -n kube-system -l k8s-app=kube-dns --tail=50`. Then verify the CoreDNS Service has endpoints: `k get endpoints kube-dns -n kube-system`. Also check if a pod’s `/etc/resolv.conf` still points to the correct nameserver IP. The upgrade might have changed the CoreDNS ClusterIP or corrupted the Corefile ConfigMap.

- A pod in namespace `team-a` calls `curl db` and accidentally reaches a database in its own namespace instead of the one in namespace `shared`. The developer expected to reach the shared database. Explain what happened and how to prevent this.

  Answer

  The search domain in `/etc/resolv.conf` appends the pod’s own namespace first, so `db` resolves to `db.team-a.svc.cluster.local`. Since a service named `db` exists in `team-a`, it matches before ever trying other namespaces. To reach the shared database, the developer must use `db.shared` or the full FQDN `db.shared.svc.cluster.local`. To prevent this, establish a naming convention where team-local services have prefixed names (e.g., `team-a-db`) and shared services use explicit cross-namespace references in application config.

- You need to add a custom DNS entry so that `legacy-api.internal` resolves to `10.0.5.100` for all pods in the cluster. Where do you make this change and what is the risk?

  Answer

  Edit the `coredns` ConfigMap in the `kube-system` namespace. Add a `hosts` block inside the Corefile containing the entry `10.0.5.100 legacy-api.internal` followed by `fallthrough`. Then restart CoreDNS with `k rollout restart deployment coredns -n kube-system`. The risk is that editing the CoreDNS ConfigMap affects all DNS resolution cluster-wide: a syntax error in the Corefile will break ALL DNS, taking down service discovery for every pod. Always validate the config and have a rollback plan. Also note that `fallthrough` is essential; without it, the hosts plugin stops processing and other DNS queries fail.

- A developer complains that API calls to `api.external-partner.com` from their pod take 2 seconds, but only 50ms from their laptop. Both are on the same network. What is happening and how do you fix it?

  Answer

  The `ndots:5` default in Kubernetes resolv.conf means `api.external-partner.com` (only 2 dots) is treated as a relative name. Before the real name is tried, the resolver attempts `api.external-partner.com.default.svc.cluster.local`, then `.svc.cluster.local`, then `.cluster.local`, each returning NXDOMAIN and costing extra round-trips. This can add seconds of wasted DNS lookups. Fix options: set `dnsConfig.options.ndots: 2` in the pod spec, use a trailing dot in the URL (`api.external-partner.com.`), or configure the app to use the FQDN with the trailing dot.

- You have a pod with `hostNetwork: true` that cannot resolve cluster service names. It can resolve external domains like `google.com` fine. What is the cause and fix?

  Answer

  When `hostNetwork: true` is set, the pod uses the node’s network namespace, including its `/etc/resolv.conf`. The node’s resolv.conf points to the node’s DNS server (not CoreDNS), which knows nothing about cluster service names like `my-svc.default.svc.cluster.local`. External domains work because the node’s DNS can resolve them. The fix is to set `dnsPolicy: ClusterFirstWithHostNet`, which tells the kubelet to inject the CoreDNS address into the pod’s resolv.conf even though it uses the host network.
Hands-On Exercise
Task: Debug and understand DNS in Kubernetes.
Steps:
- Check CoreDNS is running:

  ```shell
  k get pods -n kube-system -l k8s-app=kube-dns
  k get svc -n kube-system kube-dns
  ```

- View CoreDNS configuration:

  ```shell
  k get configmap coredns -n kube-system -o yaml
  ```

- Create a test service:

  ```shell
  k create deployment web --image=nginx
  k expose deployment web --port=80
  ```

- Test DNS resolution:

  ```shell
  # Short name
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup web

  # With namespace
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup web.default

  # FQDN
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup web.default.svc.cluster.local
  ```

- Check pod resolv.conf:

  ```shell
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    cat /etc/resolv.conf
  ```

- Test cross-namespace DNS:

  ```shell
  # Create a service in another namespace
  k create namespace other
  k create deployment db -n other --image=nginx
  k expose deployment db -n other --port=80

  # Resolve from the default namespace
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup db.other
  ```

- Test external DNS:

  ```shell
  k run test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup google.com
  ```

- Check CoreDNS logs:

  ```shell
  k logs -n kube-system -l k8s-app=kube-dns --tail=20
  ```

- Cleanup:

  ```shell
  k delete deployment web
  k delete svc web
  k delete namespace other
  ```

Success Criteria:
- Can verify CoreDNS is running
- Understand DNS name formats
- Can resolve services by short name and FQDN
- Can resolve cross-namespace services
- Can troubleshoot DNS issues
Practice Drills
Drill 1: DNS Basics (Target: 3 minutes)
```shell
# Create a service
k create deployment dns-test --image=nginx
k expose deployment dns-test --port=80

# Test all name formats
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  sh -c 'nslookup dns-test && nslookup dns-test.default && nslookup dns-test.default.svc.cluster.local'

# Cleanup
k delete deployment dns-test
k delete svc dns-test
```
Drill 2: Check CoreDNS Health (Target: 2 minutes)
```shell
# Check pods
k get pods -n kube-system -l k8s-app=kube-dns -o wide

# Check service
k get svc kube-dns -n kube-system

# Check deployment
k get deployment coredns -n kube-system

# View logs
k logs -n kube-system -l k8s-app=kube-dns --tail=10
```
Drill 3: Cross-Namespace Resolution (Target: 3 minutes)
```shell
# Create services in two namespaces
k create namespace ns1
k create namespace ns2
k create deployment app1 -n ns1 --image=nginx
k create deployment app2 -n ns2 --image=nginx
k expose deployment app1 -n ns1 --port=80
k expose deployment app2 -n ns2 --port=80

# From ns1, reach ns2 (and vice versa)
k run test -n ns1 --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup app2.ns2

k run test -n ns2 --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup app1.ns1

# Cleanup
k delete namespace ns1 ns2
```
Drill 4: Inspect Pod DNS Config (Target: 2 minutes)
```shell
# Create a pod
k run dns-check --image=busybox:1.36 --command -- sleep 3600

# Check its DNS config
k exec dns-check -- cat /etc/resolv.conf

# Verify the nameserver matches the kube-dns service
k get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'

# Cleanup
k delete pod dns-check
```
Drill 5: CoreDNS ConfigMap (Target: 3 minutes)
```shell
# View the Corefile
k get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}'

# Describe the configmap
k describe configmap coredns -n kube-system

# Check which plugins are enabled
k get configmap coredns -n kube-system -o yaml | grep -E "kubernetes|forward|cache"
```
Drill 6: Headless Service DNS (Target: 4 minutes)
```shell
# Create deployment
k create deployment headless-test --image=nginx --replicas=3

# Create headless service
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None
  selector:
    app: headless-test
  ports:
  - port: 80
EOF

# A regular service returns a single IP;
# a headless service returns all pod IPs
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup headless-svc
# Should return multiple IPs

# Cleanup
k delete deployment headless-test
k delete svc headless-svc
```
Drill 7: Custom DNS Policy (Target: 4 minutes)
```shell
# Create a pod with custom DNS
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - 8.8.8.8
    searches:
    - custom.local
    options:
    - name: ndots
      value: "2"
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF

# Check the custom resolv.conf
k exec custom-dns-pod -- cat /etc/resolv.conf
# Should show 8.8.8.8 and custom.local

# Note: won't resolve cluster services!
k exec custom-dns-pod -- nslookup kubernetes
# Will fail

# Cleanup
k delete pod custom-dns-pod
```
Drill 8: Debug DNS Failure (Target: 4 minutes)
```shell
# Create a service
k create deployment web --image=nginx
k expose deployment web --port=80

# Simulate the debugging workflow
# Step 1: Test from a pod
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web
# Should work

# Step 2: Test the FQDN
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup web.default.svc.cluster.local
# Should work

# Step 3: Check CoreDNS
k get pods -n kube-system -l k8s-app=kube-dns

# Step 4: Check logs
k logs -n kube-system -l k8s-app=kube-dns --tail=5

# Cleanup
k delete deployment web
k delete svc web
```
Drill 9: Challenge - Complete DNS Workflow
Section titled “Drill 9: Challenge - Complete DNS Workflow”Without looking at solutions:
- Verify CoreDNS is running
- Create deployment challenge with nginx
- Expose it as a service
- Test DNS resolution with short name, namespace, and FQDN
- Create the same service in a new namespace test
- Resolve across namespaces
- View the CoreDNS logs
- Cleanup everything
```shell
# YOUR TASK: Complete in under 5 minutes
```
Solution
```shell
# 1. Verify CoreDNS
k get pods -n kube-system -l k8s-app=kube-dns

# 2. Create deployment
k create deployment challenge --image=nginx

# 3. Expose
k expose deployment challenge --port=80

# 4. Test DNS formats
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  sh -c 'nslookup challenge; nslookup challenge.default; nslookup challenge.default.svc.cluster.local'

# 5. Create in a new namespace
k create namespace test
k create deployment challenge -n test --image=nginx
k expose deployment challenge -n test --port=80

# 6. Cross-namespace resolution
k run test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup challenge.test

# 7. View logs
k logs -n kube-system -l k8s-app=kube-dns --tail=10

# 8. Cleanup
k delete deployment challenge
k delete svc challenge
k delete namespace test
```
Next Module
Module 3.4: Ingress - HTTP routing and external access to services.