Module 3.2: Container Logging
Complexity: [QUICK] - Essential daily skill, simple commands
Time to Complete: 25-30 minutes
Prerequisites: Module 1.1 (Pods), basic understanding of stdout/stderr
Learning Outcomes
After completing this module, you will be able to:
- Debug application issues by querying container logs with `kubectl logs` and its filtering options
- Implement logging patterns that write structured output to stdout/stderr for Kubernetes collection
- Explain the Kubernetes logging architecture and why containers should log to stdout, not to files
- Configure log access for multi-container pods by specifying the correct container name
Why This Module Matters
Logs are your window into what’s happening inside containers. When something goes wrong, logs are usually the first place you look. Kubernetes doesn’t store logs permanently; it provides access to stdout/stderr from running containers.
The CKAD exam tests your ability to:
- View logs from containers
- Access logs from previous container instances
- Handle multi-container pods
- Filter and search log output
The Flight Recorder Analogy
Container logs are like an airplane’s black box. They record everything the application says (stdout/stderr). When something goes wrong, you retrieve the recording to understand what happened. But unlike a black box, Kubernetes only keeps recent logs—if the “plane” is destroyed and rebuilt, the old recording is gone.
Basic Log Commands
The `kubectl logs` command is your primary tool for retrieving logs from running containers. Whether you need a quick snapshot of recent events or want to stream logs live as they happen, mastering these foundational commands will save you significant time during troubleshooting.
View Logs
Section titled “View Logs”# Basic logsk logs pod-name
# Follow logs (stream)k logs -f pod-name
# Last N linesk logs --tail=100 pod-name
# Logs since timestampk logs --since=1h pod-namek logs --since=30m pod-namek logs --since=10s pod-name
# Logs since specific timek logs --since-time=2024-01-15T10:00:00Z pod-name
# Show timestampsk logs --timestamps pod-nameMulti-Container Pods
```shell
# Specify the container (required for multi-container pods)
k logs pod-name -c container-name

# All containers
k logs pod-name --all-containers=true

# List containers in a pod
k get pod pod-name -o jsonpath='{.spec.containers[*].name}'
```

Pause and predict: You run `kubectl logs my-pod` and get no output, but the application is definitely running and processing requests. What is the most likely cause?
Previous Container Instance
```shell
# Logs from the previous crashed/restarted container
k logs pod-name --previous
k logs pod-name -p

# Previous instance of a specific container
k logs pod-name -c container-name --previous
```

Log Sources
Understanding where Kubernetes looks for logs is critical. If your application isn’t configured correctly, `kubectl logs` will return nothing, leaving you blind during an outage. Here is how Kubernetes captures log data.
What Gets Logged
Kubernetes captures:
- stdout: Standard output from container processes
- stderr: Standard error from container processes
Applications MUST log to stdout/stderr for kubectl logs to work.
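To see why only the streams matter, here is a tiny local sketch (plain shell, no cluster needed; the file and variable names are made up for illustration). Capturing a process's stdout and stderr, the way the container runtime does, picks up everything written to those streams but nothing written to a file:

```shell
# Capture stdout+stderr of a command group into a variable.
tmpfile=$(mktemp)

captured=$( {
  echo "stdout line"               # visible: goes to stdout
  echo "stderr line" >&2           # visible: stderr is merged in via 2>&1
  echo "file line" > "$tmpfile"    # invisible: bypasses both streams
} 2>&1 )

echo "$captured"                   # prints only the two stream lines
rm -f "$tmpfile"
```

This mirrors why `kubectl logs` shows nothing for an app that logs only to files inside the container.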
What Doesn’t Get Logged
- Files written inside the container (e.g., `/var/log/app.log`)
- System logs from the node
- Logs from init containers when you omit the container flag (retrieve them with `-c init-container-name`)
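If an application insists on writing to a file, a common escape hatch is a streaming sidecar. The manifest below is a sketch under assumed names (pod `file-logger`, sidecar `log-tailer` are invented for this example): both containers share an emptyDir volume, and the sidecar tails the file to its own stdout so `kubectl logs` can reach it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: file-logger            # hypothetical example pod
spec:
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch space for the log file
  containers:
  - name: app
    image: busybox
    # Simulates an app that only writes to a file, not stdout
    command: ['sh', '-c', 'while true; do echo "app event" >> /var/log/app.log; sleep 5; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-tailer
    image: busybox
    # Streams the shared file to this container's stdout
    command: ['sh', '-c', 'tail -F /var/log/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
```

You would then read the file's contents with `k logs file-logger -c log-tailer`.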
Deployment and Label-Based Logs
When you move from running single Pods to scaled applications using Deployments, debugging becomes more complex. If an application has five replicas, checking each Pod’s logs individually is inefficient and error-prone. Kubernetes solves this by allowing you to query logs across multiple Pods simultaneously using label selectors, giving you a unified view of your application’s behavior.
Logs from Deployment Pods
```shell
# Logs from all pods with a label
k logs -l app=myapp

# Follow logs from all matching pods
k logs -l app=myapp -f

# Raise the concurrent log-stream limit (defaults to 5 pods)
k logs -l app=myapp --max-log-requests=5

# With tail
k logs -l app=myapp --tail=50
```

Stop and think: A pod has two containers: `app` and `sidecar`. You run `kubectl logs my-pod` and get an error. Why? What do you need to add to the command?
Combining Filters
Section titled “Combining Filters”# Label + container + tailk logs -l app=myapp -c nginx --tail=100
# Label + sincek logs -l app=myapp --since=30mLog Patterns
Beyond basic retrieval, you will often need to combine `kubectl logs` with standard Linux tools or specific flags to isolate the information you need. These patterns help you filter noise and capture logs for offline analysis.
Streaming Logs for Debugging
Section titled “Streaming Logs for Debugging”# Stream with timestampsk logs -f --timestamps pod-name
# Stream only errors (grep)k logs -f pod-name | grep -i error
# Stream from multiple podsk logs -f -l app=myapp --all-containersPause and predict: A pod has been restarting due to CrashLoopBackOff. You need to see what the application printed before it crashed. What flag do you add to
kubectl logs?
Exporting Logs
Section titled “Exporting Logs”# Save to filek logs pod-name > pod-logs.txt
# Save with timestampsk logs --timestamps pod-name > pod-logs-$(date +%s).txt
# All containersk logs pod-name --all-containers > all-logs.txtStructured Logging for Aggregation
While human-readable logs are great for manual debugging with `kubectl logs`, production environments typically use log aggregators (like Fluentd, Fluent Bit, or Loki) to collect and search logs across the entire cluster.
To make logs easily queryable by these tools, applications should implement structured logging by writing JSON formatted output directly to standard output (stdout). This allows log aggregators to parse specific fields—like severity levels, exact timestamps, and request IDs—automatically, without requiring complex regex parsing rules.
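As a minimal sketch of the idea (the `log_json` helper name is invented for illustration), an application written in shell could emit one JSON object per line to stdout like this:

```shell
# Print one JSON object per line to stdout, ready for an aggregator to parse.
log_json() {
  level=$1
  message=$2
  printf '{"timestamp":"%s","level":"%s","message":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$level" "$message"
}

log_json info  "service started"
log_json error "Connection failed"
```

Because each line is self-describing, an aggregator can index the `level` and `timestamp` fields without custom regex rules.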
Unstructured log (hard to parse):

```
2024-03-10 14:22:01 ERROR Connection failed to db-svc:5432 user=admin
```

Structured log (easy for Fluentd/Loki to index):

```json
{"timestamp":"2024-03-10T14:22:01Z","level":"error","message":"Connection failed","service":"db-svc","port":5432,"user":"admin"}
```

Multi-Container Log Scenarios
Modern Kubernetes deployments frequently use the sidecar pattern or initialization containers. When a Pod contains more than one container, Kubernetes needs to know exactly which log stream you want to read.
Sidecar Pattern
Section titled “Sidecar Pattern”apiVersion: v1kind: Podmetadata: name: sidecar-demospec: containers: - name: app image: nginx - name: sidecar image: busybox command: ['sh', '-c', 'while true; do echo sidecar running; sleep 10; done']# View main app logsk logs sidecar-demo -c app
# View sidecar logsk logs sidecar-demo -c sidecar
# All containersk logs sidecar-demo --all-containersInit Container Logs
Section titled “Init Container Logs”apiVersion: v1kind: Podmetadata: name: init-demospec: initContainers: - name: init-setup image: busybox command: ['sh', '-c', 'echo Init complete'] containers: - name: app image: nginx# View init container logsk logs init-demo -c init-setupLog Visualization
To tie it all together, here is a high-level visualization of how log data flows from your application’s standard output, through the container runtime, and ultimately to your terminal via `kubectl logs`.
```
┌─────────────────────────────────────────────────────┐
│                  Container Logging                  │
├─────────────────────────────────────────────────────┤
│                                                     │
│  Application                                        │
│      │                                              │
│      ▼                                              │
│  stdout/stderr ──────────▶ Container Runtime        │
│                                   │                 │
│                                   ▼                 │
│                         /var/log/containers/        │
│                         /var/log/pods/              │
│                                   │                 │
│                                   ▼                 │
│                             kubectl logs            │
│                                                     │
│  ┌───────────────────────────────────────────────┐  │
│  │ Pod: my-pod                                   │  │
│  │  ┌──────────────┐      ┌──────────────┐       │  │
│  │  │ Container A  │      │ Container B  │       │  │
│  │  │ (stdout)     │      │ (stdout)     │       │  │
│  │  │ (stderr)     │      │ (stderr)     │       │  │
│  │  └──────────────┘      └──────────────┘       │  │
│  │         │                     │               │  │
│  │         ▼                     ▼               │  │
│  │    k logs -c a           k logs -c b          │  │
│  └───────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────┘
```

Quick Reference
```shell
# Essential commands
k logs POD                   # Basic logs
k logs POD -f                # Follow/stream
k logs POD --tail=100        # Last 100 lines
k logs POD --since=1h        # Last hour
k logs POD -c CONTAINER      # Specific container
k logs POD --previous        # Previous instance
k logs POD --all-containers  # All containers
k logs -l app=myapp          # By label
k logs POD --timestamps      # With timestamps
```

Did You Know?
- Logs are stored on the node at `/var/log/containers/` and `/var/log/pods/`. When a pod is deleted, these logs are eventually cleaned up.
- There’s no built-in log aggregation in Kubernetes. For production, teams use tools like Fluentd, Fluent Bit, Loki, or Elasticsearch to collect and store logs centrally.
- Log rotation is handled by the container runtime. By default, Docker/containerd rotates logs to prevent disk overflow, but this means old logs disappear.
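If you manage the node configuration, the kubelet exposes two fields for tuning this rotation. The fragment below is a sketch; the values shown are illustrative, not recommendations:

```yaml
# Fragment of a KubeletConfiguration (the kubelet's config file)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"   # rotate a container's log file once it reaches 10 MiB
containerLogMaxFiles: 5       # keep at most 5 log files per container
```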
Common Mistakes
| Mistake | Why It Hurts | Solution |
|---|---|---|
| Forgetting `-c` for multi-container pods | Error: must specify container | List containers first, then specify one |
| Looking for logs from deleted pods | Logs are gone | Use `--previous` before the pod restarts |
| App logging to files, not stdout | `kubectl logs` shows nothing | Configure the app to log to stdout |
| Not using `--tail` for large logs | Terminal floods with data | Always limit initial output |
| Ignoring init container logs | Miss setup errors | Check init containers with `-c` |
- A pod named `payment-service` has been crashing and restarting. You need to find out what error caused the last crash, but `kubectl logs payment-service` only shows the current (freshly started) instance’s logs. How do you retrieve the crash logs?

  Answer: Use `kubectl logs payment-service --previous` (or `-p`). This retrieves logs from the previous container instance before the restart. Kubernetes keeps one previous set of logs per container. If the pod has restarted multiple times, you only get the immediately prior instance's logs; earlier crash logs are lost. This is why log aggregation tools (Fluentd, Loki) are essential in production.

- A developer reports that `kubectl logs my-pod` returns an error: “a container name must be specified.” The pod is Running and has no restarts. What is the issue and how do they fix it?

  Answer: The pod has multiple containers (likely a sidecar pattern), and `kubectl logs` requires you to specify which container when there's more than one. Fix by adding `-c container-name` to the command. To find available container names, run `kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'`. Alternatively, use `--all-containers=true` to see logs from every container in the pod.

- You’re debugging a production issue and need to see only the last 30 minutes of logs from all pods in a deployment with label `app=checkout`. The deployment has 8 replicas. What command do you use, and what pitfall should you watch out for?

  Answer: Use `kubectl logs -l app=checkout --since=30m --tail=100`. The pitfall is that by default `kubectl logs` with a label selector only follows up to 5 pods. If you have 8 replicas, you'll miss 3 pods' worth of logs. Add `--max-log-requests=10` to increase the limit. Also consider adding `--timestamps` to correlate log entries across pods when debugging timing-sensitive issues.

- An application writes its logs to `/var/log/app.log` inside the container instead of stdout. When you run `kubectl logs`, you see nothing. The application is confirmed to be running and writing logs. What is wrong and what are two ways to fix it?

  Answer: `kubectl logs` only captures stdout and stderr output. Logs written to files inside the container are invisible to Kubernetes. Two fixes: (1) Reconfigure the application to log to stdout/stderr instead of files; this is the recommended Kubernetes pattern. (2) If you can't change the app, add a sidecar container that tails the log file and streams it to its own stdout (e.g., `command: ['sh', '-c', 'tail -F /var/log/app.log']` with a shared volume). Then use `kubectl logs pod -c sidecar` to access the logs.
Hands-On Exercise
Task: Practice log retrieval from various pod configurations.
Setup:
```shell
# Create a pod that generates logs
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: log-demo
  labels:
    app: log-demo
spec:
  containers:
  - name: logger
    image: busybox
    command: ['sh', '-c', 'i=0; while true; do echo "$(date) - Log entry $i"; i=$((i+1)); sleep 2; done']
EOF
```

Part 1: Basic Logs
```shell
# View logs
k logs log-demo

# Follow logs (Ctrl+C to stop)
k logs log-demo -f

# Last 5 lines
k logs log-demo --tail=5

# With timestamps
k logs log-demo --timestamps --tail=5
```

Part 2: Multi-Container
```shell
# Create a multi-container pod
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: multi-log
spec:
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'while true; do echo Sidecar log; sleep 5; done']
EOF

# List containers
k get pod multi-log -o jsonpath='{.spec.containers[*].name}'

# View each container
k logs multi-log -c app
k logs multi-log -c sidecar

# All containers
k logs multi-log --all-containers
```

Part 3: Previous Instance
```shell
# Create a pod that crashes
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: crasher
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo "Starting..."; echo "About to crash!"; exit 1']
EOF

# Wait for a restart, then check previous logs
k get pod crasher -w
k logs crasher --previous
```

Cleanup:

```shell
k delete pod log-demo multi-log crasher
```

Practice Drills
Drill 1: Basic Logs (Target: 1 minute)
Section titled “Drill 1: Basic Logs (Target: 1 minute)”# Create podk run drill1 --image=nginx
# View logsk logs drill1
# Cleanupk delete pod drill1Drill 2: Follow Logs (Target: 2 minutes)
Section titled “Drill 2: Follow Logs (Target: 2 minutes)”# Create logging podk run drill2 --image=busybox -- sh -c 'while true; do echo tick; sleep 1; done'
# Follow (Ctrl+C after a few ticks)k logs drill2 -f
# Cleanupk delete pod drill2Drill 3: Multi-Container (Target: 3 minutes)
Section titled “Drill 3: Multi-Container (Target: 3 minutes)”cat << 'EOF' | k apply -f -apiVersion: v1kind: Podmetadata: name: drill3spec: containers: - name: web image: nginx - name: monitor image: busybox command: ['sh', '-c', 'while true; do echo monitoring; sleep 5; done']EOF
# Get logs from eachk logs drill3 -c webk logs drill3 -c monitor
# Cleanupk delete pod drill3Drill 4: Label Selection (Target: 2 minutes)
Section titled “Drill 4: Label Selection (Target: 2 minutes)”# Create multiple podsk run drill4a --image=nginx -l app=drill4k run drill4b --image=nginx -l app=drill4
# Logs from all with labelk logs -l app=drill4
# Cleanupk delete pod -l app=drill4Drill 5: Previous Instance (Target: 3 minutes)
Section titled “Drill 5: Previous Instance (Target: 3 minutes)”# Create crashing podcat << 'EOF' | k apply -f -apiVersion: v1kind: Podmetadata: name: drill5spec: containers: - name: app image: busybox command: ['sh', '-c', 'echo "Run at $(date)"; sleep 5; exit 1']EOF
# Watch it crashk get pod drill5 -w
# After restart, get previous logsk logs drill5 --previous
# Cleanupk delete pod drill5Drill 6: Complete Logging Scenario (Target: 4 minutes)
Scenario: Debug a failing application using logs.
```shell
# Create a "broken" deployment
cat << 'EOF' | k apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drill6
spec:
  replicas: 2
  selector:
    matchLabels:
      app: drill6
  template:
    metadata:
      labels:
        app: drill6
    spec:
      containers:
      - name: app
        image: busybox
        command: ['sh', '-c', 'echo "Starting app"; echo "ERROR: Database connection failed"; exit 1']
EOF

# Find the pods
k get pods -l app=drill6

# Check logs across the pods
k logs -l app=drill6 --tail=10

# Get previous instance logs from one pod
POD=$(k get pods -l app=drill6 -o jsonpath='{.items[0].metadata.name}')
k logs $POD --previous

# Cleanup
k delete deploy drill6
```

Next Module
Module 3.3: Debugging in Kubernetes - Troubleshoot pods, containers, and cluster issues.