
Module 3.2: Container Logging

Hands-On Lab Available: K8s Cluster, intermediate, 30 min (launches in Killercoda)

Complexity: [QUICK] - Essential daily skill, simple commands

Time to Complete: 25-30 minutes

Prerequisites: Module 1.1 (Pods), basic understanding of stdout/stderr


After completing this module, you will be able to:

  • Debug application issues by querying container logs with kubectl logs and its filtering options
  • Implement logging patterns that write structured output to stdout/stderr for Kubernetes collection
  • Explain the Kubernetes logging architecture and why containers should log to stdout, not files
  • Configure log access for multi-container pods by specifying the correct container name

Logs are your window into what’s happening inside containers. When something goes wrong, logs are usually the first place you look. Kubernetes doesn’t store logs permanently—it provides access to stdout/stderr from running containers.

The CKAD exam tests your ability to:

  • View logs from containers
  • Access logs from previous container instances
  • Handle multi-container pods
  • Filter and search log output

The Flight Recorder Analogy

Container logs are like an airplane’s black box. They record everything the application says (stdout/stderr). When something goes wrong, you retrieve the recording to understand what happened. But unlike a black box, Kubernetes only keeps recent logs—if the “plane” is destroyed and rebuilt, the old recording is gone.


The kubectl logs command is your primary tool for retrieving logs from running containers. Whether you need a quick snapshot of recent events or want to stream logs live as they happen, mastering these foundational commands will save you significant time during troubleshooting.

Terminal window
# Basic logs
k logs pod-name
# Follow logs (stream)
k logs -f pod-name
# Last N lines
k logs --tail=100 pod-name
# Logs since timestamp
k logs --since=1h pod-name
k logs --since=30m pod-name
k logs --since=10s pod-name
# Logs since specific time
k logs --since-time=2024-01-15T10:00:00Z pod-name
# Show timestamps
k logs --timestamps pod-name
Terminal window
# Specify container (required for multi-container)
k logs pod-name -c container-name
# All containers
k logs pod-name --all-containers=true
# List containers in pod
k get pod pod-name -o jsonpath='{.spec.containers[*].name}'

Pause and predict: You run kubectl logs my-pod and get no output, but the application is definitely running and processing requests. What is the most likely cause?

Terminal window
# Logs from previous crashed/restarted container
k logs pod-name --previous
k logs pod-name -p
# Previous instance of specific container
k logs pod-name -c container-name --previous

Understanding where Kubernetes looks for logs is critical. If your application isn’t configured correctly, kubectl logs will return nothing, leaving you blind during an outage. Here is how Kubernetes captures log data.

Kubernetes captures:

  • stdout: Standard output from container processes
  • stderr: Standard error from container processes

Applications MUST log to stdout/stderr for kubectl logs to work.

What kubectl logs does NOT show by default:

  • Files written inside the container (e.g., /var/log/app.log); these are never visible to kubectl logs
  • System logs from the node
  • Logs from init containers (retrieve them with -c init-container-name)

When you move from running single Pods to scaled applications using Deployments, debugging becomes more complex. If an application has five replicas, checking each Pod’s logs individually is inefficient and error-prone. Kubernetes solves this by allowing you to query logs across multiple Pods simultaneously using label selectors, giving you a unified view of your application’s behavior.

Terminal window
# Logs from all pods with a label
k logs -l app=myapp
# Follow logs from all matching pods
k logs -l app=myapp -f
# Raise the cap on concurrent log requests (default is 5)
k logs -l app=myapp --max-log-requests=10
# With tail
k logs -l app=myapp --tail=50

Stop and think: A pod has two containers: app and sidecar. You run kubectl logs my-pod and get an error. Why? What do you need to add to the command?

Terminal window
# Label + container + tail
k logs -l app=myapp -c nginx --tail=100
# Label + since
k logs -l app=myapp --since=30m

Beyond basic retrieval, you will often need to combine kubectl logs with standard Linux tools or specific flags to isolate the information you need. These patterns help you filter noise and capture logs for offline analysis.

Terminal window
# Stream with timestamps
k logs -f --timestamps pod-name
# Stream only errors (grep)
k logs -f pod-name | grep -i error
# Stream from multiple pods
k logs -f -l app=myapp --all-containers

Pause and predict: A pod has been restarting due to CrashLoopBackOff. You need to see what the application printed before it crashed. What flag do you add to kubectl logs?

Terminal window
# Save to file
k logs pod-name > pod-logs.txt
# Save with timestamps
k logs --timestamps pod-name > pod-logs-$(date +%s).txt
# All containers
k logs pod-name --all-containers > all-logs.txt

While human-readable logs are great for manual debugging with kubectl logs, production environments typically use log aggregators (like Fluentd, Fluent Bit, or Loki) to collect and search logs across the entire cluster.

To make logs easily queryable by these tools, applications should implement structured logging by writing JSON formatted output directly to standard output (stdout). This allows log aggregators to parse specific fields—like severity levels, exact timestamps, and request IDs—automatically, without requiring complex regex parsing rules.

Unstructured Log (Hard to parse):

2024-03-10 14:22:01 ERROR Connection failed to db-svc:5432 user=admin

Structured Log (Easy for Fluentd/Loki to index):

{"timestamp":"2024-03-10T14:22:01Z","level":"error","message":"Connection failed","service":"db-svc","port":5432,"user":"admin"}
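Because each structured entry is valid JSON, ordinary CLI tools can filter by field. A minimal sketch, assuming `jq` is installed on your workstation; the `level`, `timestamp`, and `message` field names match the example entry above, and the pod name in the usage comment is illustrative:

```shell
# Keep only error-level entries from a stream of JSON log lines,
# printing the timestamp and message fields of each match.
errors_only() {
  jq -r 'select(.level == "error") | "\(.timestamp) \(.message)"'
}

# Typical usage against a pod (name is illustrative):
# k logs my-pod --tail=500 | errors_only
```

The same pipeline fails quickly on unstructured text, which is exactly why aggregators prefer JSON: field access replaces fragile regex parsing.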

Modern Kubernetes deployments frequently use the sidecar pattern or initialization containers. When a Pod contains more than one container, Kubernetes needs to know exactly which log stream you want to read.

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'while true; do echo sidecar running; sleep 10; done']
Terminal window
# View main app logs
k logs sidecar-demo -c app
# View sidecar logs
k logs sidecar-demo -c sidecar
# All containers
k logs sidecar-demo --all-containers
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: init-setup
    image: busybox
    command: ['sh', '-c', 'echo Init complete']
  containers:
  - name: app
    image: nginx
Terminal window
# View init container logs
k logs init-demo -c init-setup

To tie it all together, here is a high-level visualization of how log data flows from your application’s standard output, through the container runtime, and ultimately to your terminal via kubectl logs.

┌─────────────────────────────────────────────────────────────┐
│ Container Logging │
├─────────────────────────────────────────────────────────────┤
│ │
│ Application │
│ │ │
│ ▼ │
│ stdout/stderr ─────────────▶ Container Runtime │
│ │ │
│ ▼ │
│ /var/log/containers/ │
│ /var/log/pods/ │
│ │ │
│ ▼ │
│ kubectl logs │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Pod: my-pod │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Container A │ │ Container B │ │ │
│ │ │ (stdout) │ │ (stdout) │ │ │
│ │ │ (stderr) │ │ (stderr) │ │ │
│ │ └──────────────┘ └──────────────┘ │ │
│ │ │ │ │ │
│ │ ▼ ▼ │ │
│ │ k logs -c a k logs -c b │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘

Terminal window
# Essential commands
k logs POD # Basic logs
k logs POD -f # Follow/stream
k logs POD --tail=100 # Last 100 lines
k logs POD --since=1h # Last hour
k logs POD -c CONTAINER # Specific container
k logs POD --previous # Previous instance
k logs POD --all-containers # All containers
k logs -l app=myapp # By label
k logs POD --timestamps # With timestamps

  • Logs are stored on the node at /var/log/containers/ and /var/log/pods/. When a pod is deleted, these logs are eventually cleaned up.

  • There’s no built-in log aggregation in Kubernetes. For production, teams use tools like Fluentd, Fluent Bit, Loki, or Elasticsearch to collect and store logs centrally.

  • Log rotation is handled by the container runtime. By default, Docker/containerd rotates logs to prevent disk overflow, but this means old logs disappear.
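If you administer the cluster, the rotation thresholds can be tuned at the kubelet level. A sketch of the relevant KubeletConfiguration fields (the values shown are illustrative, not recommendations):

```yaml
# KubeletConfiguration fragment controlling container log rotation
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate a container's log file once it reaches this size
containerLogMaxFiles: 5     # keep at most this many rotated files per container
```

Note that kubectl logs only reads the current (unrotated) file plus what the runtime exposes, so aggressive rotation shortens the window kubectl logs can see.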


| Mistake | Why It Hurts | Solution |
| --- | --- | --- |
| Forgetting `-c` for multi-container pods | Error: "must specify container" | List containers first, then specify one |
| Looking for logs from deleted pods | Logs are gone | Capture logs before deleting; `--previous` only covers restarts |
| App logging to files, not stdout | `kubectl logs` shows nothing | Configure the app to log to stdout |
| Not using `--tail` for large logs | Terminal floods with data | Always limit initial output |
| Ignoring init container logs | Setup errors go unnoticed | Check init containers with `-c` |
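The first and last mistakes in the table share a fix: enumerate the containers before fetching logs. A small helper sketch using the `k` alias from this module; the function name is our own convention, not a kubectl feature, and note it covers only `.spec.containers`, not init containers:

```shell
# Print recent logs from every regular container in a pod, sidestepping
# the "must specify container" error on multi-container pods.
dump_all_container_logs() {
  pod="$1"
  for c in $(k get pod "$pod" -o jsonpath='{.spec.containers[*].name}'); do
    echo "=== $pod/$c ==="
    k logs "$pod" -c "$c" --tail=20
  done
}

# Usage (pod name illustrative):
# dump_all_container_logs my-pod
```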

  1. A pod named payment-service has been crashing and restarting. You need to find out what error caused the last crash, but kubectl logs payment-service only shows the current (freshly started) instance’s logs. How do you retrieve the crash logs?

    Answer Use `kubectl logs payment-service --previous` (or `-p`). This retrieves logs from the previous container instance before the restart. Kubernetes keeps one previous set of logs per container. If the pod has restarted multiple times, you only get the immediately prior instance's logs — earlier crash logs are lost. This is why log aggregation tools (Fluentd, Loki) are essential in production.
  2. A developer reports that kubectl logs my-pod returns an error: “a container name must be specified.” The pod is Running and has no restarts. What is the issue and how do they fix it?

    Answer The pod has multiple containers (likely a sidecar pattern), and `kubectl logs` requires you to specify which container when there's more than one. Fix by adding `-c container-name` to the command. To find available container names, run `kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'`. Alternatively, use `--all-containers=true` to see logs from every container in the pod.
  3. You’re debugging a production issue and need to see only the last 30 minutes of logs from all pods in a deployment with label app=checkout. The deployment has 8 replicas. What command do you use, and what pitfall should you watch out for?

    Answer Use `kubectl logs -l app=checkout --since=30m --tail=100`. The pitfall is that by default `kubectl logs` with a label selector only follows up to 5 pods. If you have 8 replicas, you'll miss 3 pods worth of logs. Add `--max-log-requests=10` to increase the limit. Also consider adding `--timestamps` to correlate log entries across pods when debugging timing-sensitive issues.
  4. An application writes its logs to /var/log/app.log inside the container instead of stdout. When you run kubectl logs, you see nothing. The application is confirmed to be running and writing logs. What is wrong and what are two ways to fix it?

    Answer `kubectl logs` only captures stdout and stderr output. Logs written to files inside the container are invisible to Kubernetes. Two fixes: (1) Reconfigure the application to log to stdout/stderr instead of files — this is the recommended Kubernetes pattern. (2) If you can't change the app, add a sidecar container that tails the log file and streams it to its own stdout (e.g., `command: ['sh', '-c', 'tail -F /var/log/app.log']` with a shared volume). Then use `kubectl logs pod -c sidecar` to access the logs.
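The file-tailing sidecar described in the last answer can be sketched as a Pod spec. All names, the image, and the log path here are illustrative; the key ingredients are the shared emptyDir volume and the sidecar's `tail -F` streaming the file to its own stdout:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: file-logger              # illustrative name
spec:
  volumes:
  - name: logs
    emptyDir: {}                 # shared scratch space for the log file
  containers:
  - name: app
    image: my-app:latest         # hypothetical image that writes /var/log/app.log
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-tailer
    image: busybox
    command: ['sh', '-c', 'tail -F /var/log/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
```

With this in place, `kubectl logs file-logger -c log-tailer` surfaces the file's contents.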

Task: Practice log retrieval from various pod configurations.

Setup:

Terminal window
# Create a pod that generates logs
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: log-demo
  labels:
    app: log-demo
spec:
  containers:
  - name: logger
    image: busybox
    command: ['sh', '-c', 'i=0; while true; do echo "$(date) - Log entry $i"; i=$((i+1)); sleep 2; done']
EOF

Part 1: Basic Logs

Terminal window
# View logs
k logs log-demo
# Follow logs (Ctrl+C to stop)
k logs log-demo -f
# Last 5 lines
k logs log-demo --tail=5
# With timestamps
k logs log-demo --timestamps --tail=5

Part 2: Multi-Container

Terminal window
# Create multi-container pod
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: multi-log
spec:
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'while true; do echo Sidecar log; sleep 5; done']
EOF
# List containers
k get pod multi-log -o jsonpath='{.spec.containers[*].name}'
# View each container
k logs multi-log -c app
k logs multi-log -c sidecar
# All containers
k logs multi-log --all-containers

Part 3: Previous Instance

Terminal window
# Create pod that crashes
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: crasher
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo "Starting..."; echo "About to crash!"; exit 1']
EOF
# Wait for restart, then check previous logs
k get pod crasher -w
k logs crasher --previous

Cleanup:

Terminal window
k delete pod log-demo multi-log crasher

Drill 1: Basic Logs

Terminal window
# Create pod
k run drill1 --image=nginx
# View logs
k logs drill1
# Cleanup
k delete pod drill1
Drill 2: Follow Logs

Terminal window
# Create logging pod
k run drill2 --image=busybox -- sh -c 'while true; do echo tick; sleep 1; done'
# Follow (Ctrl+C after a few ticks)
k logs drill2 -f
# Cleanup
k delete pod drill2

Drill 3: Multi-Container (Target: 3 minutes)

Terminal window
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: drill3
spec:
  containers:
  - name: web
    image: nginx
  - name: monitor
    image: busybox
    command: ['sh', '-c', 'while true; do echo monitoring; sleep 5; done']
EOF
# Get logs from each
k logs drill3 -c web
k logs drill3 -c monitor
# Cleanup
k delete pod drill3

Drill 4: Label Selection (Target: 2 minutes)

Terminal window
# Create multiple pods
k run drill4a --image=nginx -l app=drill4
k run drill4b --image=nginx -l app=drill4
# Logs from all with label
k logs -l app=drill4
# Cleanup
k delete pod -l app=drill4

Drill 5: Previous Instance (Target: 3 minutes)

Terminal window
# Create crashing pod
cat << 'EOF' | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: drill5
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo "Run at $(date)"; sleep 5; exit 1']
EOF
# Watch it crash
k get pod drill5 -w
# After restart, get previous logs
k logs drill5 --previous
# Cleanup
k delete pod drill5

Drill 6: Complete Logging Scenario (Target: 4 minutes)


Scenario: Debug a failing application using logs.

Terminal window
# Create "broken" deployment
cat << 'EOF' | k apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drill6
spec:
  replicas: 2
  selector:
    matchLabels:
      app: drill6
  template:
    metadata:
      labels:
        app: drill6
    spec:
      containers:
      - name: app
        image: busybox
        command: ['sh', '-c', 'echo "Starting app"; echo "ERROR: Database connection failed"; exit 1']
EOF
# Find pods
k get pods -l app=drill6
# Check logs from one pod
k logs -l app=drill6 --tail=10
# Get previous instance logs
POD=$(k get pods -l app=drill6 -o jsonpath='{.items[0].metadata.name}')
k logs $POD --previous
# Cleanup
k delete deploy drill6

Module 3.3: Debugging in Kubernetes - Troubleshoot pods, containers, and cluster issues.