
Module 1.1: What is Kubernetes?

Complexity: [QUICK] - Foundational concepts

Time to Complete: 20-25 minutes

Prerequisites: None


After completing this module, you will be able to:

  1. Explain what Kubernetes is, the problems it solves, and why it became the industry standard
  2. Compare Kubernetes with traditional deployment, VM-based, and other orchestration approaches
  3. Identify the key capabilities Kubernetes provides: self-healing, scaling, service discovery, and rollouts
  4. Evaluate whether a given workload scenario would benefit from Kubernetes orchestration

“What is Kubernetes?” might seem like a simple question, but understanding it deeply is crucial. KCNA tests whether you truly understand Kubernetes’ purpose, not just its features.

The Pokémon GO Trial by Fire: When Pokémon GO launched in 2016, user traffic surged to roughly 50 times the expected maximum. Traditional infrastructure would have buckled under that load. Instead, because the game ran on Kubernetes (specifically Google Kubernetes Engine), the system automatically provisioned thousands of new containers across a massive cluster of machines, without human intervention. That is the power of Kubernetes.

This module establishes the foundation everything else builds on.


In the past, servers were treated like pets: you named them (e.g., web-server-01), applied updates to them by hand, and if one got sick and died, it was an emergency that required hours of manual fixing.

Kubernetes treats infrastructure like cattle: application instances are numbered and identical, and if one dies, it is instantly and automatically replaced without anyone shedding a tear.

┌─────────────────────────────────────────────────────────────┐
│ BEFORE KUBERNETES │
├─────────────────────────────────────────────────────────────┤
│ │
│ Traditional Deployment: │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Server 1 Server 2 Server 3 │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ App A │ │ App B │ │ App C │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ OS │ │ OS │ │ OS │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │Hardware │ │Hardware │ │Hardware │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ Problems: │
│ • One app per server = waste │
│ • Server dies = app dies │
│ • Manual scaling │
│ • Deployment is risky │
│ • No standard way to manage │
│ │
└─────────────────────────────────────────────────────────────┘

Kubernetes (K8s) is an open-source container orchestration platform that automates deploying, scaling, and managing containerized applications.

The Orchestra Conductor Analogy: Think of Kubernetes as an orchestra conductor, and containers as the musicians. A musician (container) knows exactly how to play their instrument (run your application code). But the conductor (Kubernetes) tells them when to play, how loud to play (scaling), brings in replacements if someone is sick (self-healing), and ensures the whole symphony works together flawlessly.

| Term | Meaning |
|---|---|
| Open-source | Free, community-driven, no vendor lock-in |
| Container orchestration | Managing many containers as a system |
| Platform | Foundation to build on, not just a tool |
| Automates | Reduces manual work and human error |

  • Kubernetes = Greek for “helmsman” or “pilot”
  • K8s = K + 8 letters + s (shorthand)
  • Originated at Google (based on Borg)
  • Donated to CNCF in 2015

┌─────────────────────────────────────────────────────────────┐
│ KUBERNETES CAPABILITIES │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. DEPLOYMENT │
│ "Run my app" │
│ ├── Deploy containers │
│ ├── Rolling updates │
│ └── Rollback if needed │
│ │
│ 2. SCALING │
│ "Handle more traffic" │
│ ├── Scale up (more replicas) │
│ ├── Scale down (fewer replicas) │
│ └── Autoscale based on metrics │
│ │
│ 3. HEALING │
│ "Keep it running" │
│ ├── Restart failed containers │
│ ├── Replace unhealthy nodes │
│ └── Reschedule if node dies │
│ │
│ 4. SERVICE DISCOVERY │
│ "Let apps find each other" │
│ ├── DNS-based discovery │
│ ├── Load balancing │
│ └── Internal networking │
│ │
│ 5. CONFIGURATION │
│ "Manage settings" │
│ ├── ConfigMaps for config │
│ ├── Secrets for sensitive data │
│ └── Environment injection │
│ │
└─────────────────────────────────────────────────────────────┘
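As a concrete illustration of capability #2, autoscaling is itself declared as an object. The sketch below assumes a Deployment named nginx already exists; the bounds and target are illustrative, not recommendations:

```yaml
# Hypothetical autoscaling policy: keep average CPU near 70%,
# with never fewer than 2 or more than 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that even scaling follows the declarative pattern: you state the bounds and the target, and Kubernetes decides when to add or remove replicas.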

While powerful, Kubernetes isn’t the solution to every problem. You should generally avoid it when:

  • You have a simple, stateless web app with low traffic: A Platform as a Service (PaaS) like Heroku, Vercel, or AWS App Runner is much simpler and often cheaper.
  • Your application is a monolithic legacy system: If your app cannot be easily broken down into containers or relies heavily on a single server’s local state, Kubernetes will only add friction.
  • Your team lacks DevOps expertise: Operating a Kubernetes cluster requires significant technical knowledge. For small teams without dedicated operations staff, managed services are safer.
  • You just need to run simple scheduled scripts: Use basic cron jobs or serverless functions (like AWS Lambda) instead of standing up an entire orchestration cluster.

| Aspect | Virtual Machines | Kubernetes + Containers |
|---|---|---|
| Startup time | Minutes | Seconds |
| Resource usage | Heavy (full OS) | Light (shared kernel) |
| Density | ~10s per host | ~100s per host |
| Portability | Limited | High |
| Scaling | Slow | Fast |

Docker runs containers. Kubernetes orchestrates them:

┌─────────────────────────────────────────────────────────────┐
│ DOCKER vs KUBERNETES │
├─────────────────────────────────────────────────────────────┤
│ │
│ DOCKER: │
│ "Run this container on this machine" │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ docker run nginx │ │
│ │ → Runs ONE container on ONE machine │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ KUBERNETES: │
│ "Run 3 copies, keep them healthy, balance traffic" │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Deployment: nginx, replicas: 3 │ │
│ │ → Runs across cluster │ │
│ │ → Self-heals if one dies │ │
│ │ → Load balances traffic │ │
│ │ → Scales up/down automatically │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ Docker = Container runtime │
│ Kubernetes = Container orchestrator │
│ │
└─────────────────────────────────────────────────────────────┘
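The "load balances traffic" line in the diagram is handled by a Service object. A minimal sketch, assuming the nginx replicas carry the label app: nginx:

```yaml
# Hypothetical Service: gives the nginx replicas one stable
# internal DNS name and load-balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Other Pods in the cluster can then reach any healthy replica simply by connecting to the name nginx — this is the DNS-based service discovery from capability #4.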

Pause and predict: If Docker can already run containers, why would an organization need Kubernetes on top of it? What problems remain unsolved after you can run a single container on a single machine?

You tell Kubernetes what you want, not how to do it:

```yaml
# You declare: "I want 3 nginx replicas"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3  # Desired state
  ...
```

Kubernetes continuously works to make reality match your declaration.
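For reference, the truncated snippet earlier can be fleshed out into a complete manifest; the labels and image tag here are illustrative choices, not part of any required convention:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3          # Desired state
  selector:
    matchLabels:
      app: nginx       # which Pods this Deployment owns
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27   # hypothetical pinned version
          ports:
            - containerPort: 80
```

Nothing in this file says how to create the Pods, on which machines, or in what order — only what should exist.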

┌─────────────────────────────────────────────────────────────┐
│ DESIRED STATE RECONCILIATION │
├─────────────────────────────────────────────────────────────┤
│ │
│ You declare: "I want 3 replicas" │
│ │ │
│ ▼ │
│ Kubernetes: Checks current state │
│ │ │
│ ┌─────────────┴─────────────┐ │
│ │ │ │
│ ▼ ▼ │
│ If 2 running: If 4 running: │
│ "Create 1 more" "Terminate 1" │
│ │
│ This loop runs CONTINUOUSLY │
│ Kubernetes never stops trying to match desired state │
│ │
└─────────────────────────────────────────────────────────────┘

Stop and think: If Kubernetes uses a declarative model and you declare “I want 3 replicas,” what happens when one replica crashes at 3 AM? Who or what restarts it, and how does the system know it should?

  • Containers aren’t modified in place
  • Updates create new containers
  • Old containers are replaced, not changed
  • This ensures consistency and reproducibility
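This replace-not-modify behavior is configurable through a Deployment's update strategy. A hedged sketch of the relevant fragment (the field values are illustrative):

```yaml
# Fragment of a Deployment spec: roll out new containers one at
# a time, never dropping below the full desired replica count.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod during the update
      maxUnavailable: 0  # never go below the desired count
```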

Deploying Kubernetes (Day 1) is just the beginning. The KCNA exam expects you to understand that maintaining a cluster involves ongoing “Day 2 Operations”:

  • Monitoring and Observability: Using tools like Prometheus and Grafana to track resource usage and container health.
  • Security: Implementing Role-Based Access Control (RBAC) and network policies to secure cluster traffic.
  • Maintenance and Upgrades: Performing rolling upgrades of the Kubernetes cluster itself to patch vulnerabilities.
  • Cost Management: Ensuring workloads are requesting the right amount of CPU/Memory so you aren’t wasting cloud spend on idle resources.
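The cost-management point translates directly into resource requests and limits on each container; the numbers below are illustrative, not recommendations:

```yaml
# Fragment of a container spec: request what the workload
# actually needs so the scheduler can bin-pack efficiently.
resources:
  requests:
    cpu: 100m        # 0.1 of a CPU core
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

Requests drive scheduling decisions (and cloud spend); limits cap what a runaway container can consume.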

Kubernetes can run:

| Environment | Examples |
|---|---|
| Public Cloud | EKS (AWS), GKE (Google), AKS (Azure) |
| Private Cloud | OpenStack, VMware |
| On-Premises | Bare metal, data centers |
| Edge | Retail stores, factories |
| Development | minikube, kind, Docker Desktop |

This flexibility is a key advantage—no vendor lock-in.


  • Kubernetes came from Google’s Borg - Google ran Borg internally for 10+ years before creating Kubernetes as an open-source version.

  • K8s handles massive scale - Large clusters can run 5,000+ nodes and 150,000+ pods.

  • Most CNCF projects integrate with K8s - Kubernetes is the foundation of the cloud native ecosystem.

  • The release cycle is predictable - New Kubernetes versions come out every ~4 months with a 14-month support window.


| Mistake | Why It Hurts | Correct Understanding |
|---|---|---|
| "K8s is just Docker" | Missing orchestration value | K8s orchestrates containers; Docker runs them |
| "K8s replaces VMs" | They serve different purposes | K8s runs ON VMs or bare metal |
| "K8s is only for big companies" | Missing development value | K8s works at any scale |
| "K8s is a container runtime" | Confusing layers | K8s uses runtimes like containerd |
| "K8s makes apps instantly scalable" | Architecture matters | Apps must be designed to scale horizontally (stateless) |
| "K8s is a managed database" | K8s is for compute, not state | Running stateful databases in K8s is complex; often better to use managed DBs |
| "Deploying K8s means no more ops" | Shift in complexity | K8s shifts ops from hardware to software configuration, introducing new Day 2 challenges |

While we won’t build a real cluster until the next module, let’s practice thinking declaratively—the core mindset of a Kubernetes administrator.

Scenario: You are the “Kubernetes Control Plane” for a new web store. You need to simulate how Kubernetes handles state reconciliation.

  • Step 1: Declare your state. On a piece of paper or in a text file, write your desired state: “3 Web Frontend instances, 1 Payment Processor instance, and 2 Cache instances.”
  • Step 2: Initial Deployment. Draw these 6 instances as boxes. You have successfully “deployed” your application and reality matches your desired state.
  • Step 3: Handle a Failure. Cross out one Web Frontend instance (simulate a crash!).
  • Step 4: Reconcile. Look at your desired state (3 Web Frontends). Look at your current state (2 Web Frontends). What must you do? Draw a new Web Frontend to replace the dead one. You are acting as the continuous reconciliation loop!
  • Step 5: Scale Up. Black Friday arrives. Change your desired state to “10 Web Frontend instances.” Now draw the additional 7 instances required to match reality to your declaration.
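If you want to see what the exercise's desired state would look like in real Kubernetes terms, here is a sketch for one of the three components (the names and image are invented for the scenario):

```yaml
# One Deployment per component; scaling up for Black Friday
# means changing replicas: 3 to replicas: 10 — nothing else.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3   # Step 5: change this to 10
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: example/web-frontend:1.0  # hypothetical image
```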

Success Criteria:

  • You understand that the system constantly compares “what you want” vs “what exists.”
  • You recognize that you don’t write an imperative script saying “add 7 instances”; you simply update the total desired number.
  • You can explain why this declarative approach is superior to manual server management.

  1. Your team is evaluating container orchestration tools. A colleague suggests Docker Swarm is simpler to set up. What advantages does Kubernetes offer that might justify the added complexity?

    Answer: Kubernetes offers a much larger ecosystem (CNCF projects, Helm charts, operators), a declarative configuration model with continuous reconciliation, robust autoscaling (HPA, VPA, Cluster Autoscaler), a rich scheduling system (affinity, taints, tolerations), and extensive community support. While Docker Swarm is simpler, Kubernetes provides self-healing, rolling updates with rollback, service discovery via DNS, and is the industry standard supported by all major cloud providers. The complexity pays off at scale and when you need production-grade features.
  2. A startup is running their web application on a single VM. Traffic is growing, and they are considering Kubernetes. What specific problems would Kubernetes solve that their current single-VM setup cannot handle?

    Answer: A single VM creates a single point of failure -- if it goes down, the entire application is unavailable. Kubernetes solves this by running multiple replicas across multiple nodes, providing self-healing (automatic restart and rescheduling), horizontal scaling to handle traffic spikes, rolling updates for zero-downtime deployments, and service discovery so components can find each other. It also enables efficient resource utilization by bin-packing containers onto nodes.
  3. A new hire asks: “Isn’t Kubernetes just a container runtime like Docker?” How would you explain the difference between the two?

    Answer: Docker (specifically containerd) is a container runtime -- it knows how to create, start, and stop a single container on a single machine. Kubernetes is a container orchestrator -- it manages containers across a cluster of machines. Kubernetes decides where containers run, restarts them if they crash, scales them based on demand, provides networking between them, and ensures the system matches your declared desired state. Kubernetes actually uses a container runtime (like containerd) under the hood to do the actual work of running containers.
  4. You declare a Deployment with 3 replicas, but currently 4 pods are running due to a previous manual scaling operation. What will Kubernetes do, and why?

    Answer: Kubernetes will terminate one pod to bring the count down to 3. This is the declarative model at work: you declared the desired state (3 replicas), and Kubernetes continuously reconciles the current state to match. The reconciliation loop checks: "Are there 3 pods? No, there are 4. Action: terminate 1." This loop runs continuously, which is why Kubernetes is self-healing -- it constantly tries to make reality match your declaration.
  5. Your company wants to avoid vendor lock-in. How does Kubernetes help with this goal, and what is the historical reason it was designed this way?

    Answer: Kubernetes runs on any infrastructure -- public cloud (EKS, GKE, AKS), private cloud, on-premises bare metal, or even developer laptops. Your application manifests are portable across environments. This is by design: Google donated Kubernetes to the vendor-neutral CNCF in 2015 specifically to prevent any single company from controlling it. The same Deployment YAML works on AWS, GCP, Azure, or your own data center, meaning you can move workloads between providers without rewriting your application configuration.
  6. Your lead developer suggests moving your simple, low-traffic blog (currently on WordPress) to a multi-node Kubernetes cluster to “future-proof” it. What is the most compelling argument against this plan?

    Answer: The operational overhead of Kubernetes far outweighs the benefits for a simple, low-traffic blog. Kubernetes introduces significant complexity (managing certificates, ingress, storage classes, and cluster upgrades). For a simple workload that doesn't need rapid auto-scaling, self-healing, or microservice service discovery, a PaaS (Platform as a Service) or simple VM is much more cost-effective and easier to maintain. This represents a case of choosing the wrong tool for the job.
  7. During a deployment, a developer updates the configuration to use a new container image for the shopping cart service. Instead of modifying the running containers, the system creates entirely new containers and destroys the old ones. A junior engineer asks if this is a bug. How do you explain this behavior?

    Answer: This is not a bug; it is the principle of Immutable Infrastructure in action. In Kubernetes, containers are never modified in place (e.g., you don't SSH into a container and `git pull` new code). Instead, updates are performed by replacing old containers with new ones built from the new image. This ensures consistency, prevents configuration drift, and allows for reliable rollbacks if the new version fails.
  8. Your team successfully deployed a Kubernetes cluster and your apps are running. Management assumes the DevOps work is “done.” As a Kubernetes administrator, what “Day 2 Operations” must you plan for that management might be overlooking?

    Answer: While Day 1 is about getting the cluster running, Day 2 Operations represent the ongoing lifecycle management of the platform. This includes setting up robust monitoring and alerting (so you know if pods crash), managing security (network policies, RBAC, image scanning), planning for cluster upgrades (as Kubernetes releases new versions every few months), and optimizing costs (ensuring workloads are correctly sized so you aren't paying for idle cloud resources). Kubernetes requires active and ongoing maintenance.

Kubernetes is:

  • An open-source container orchestration platform
  • Automates deployment, scaling, and management
  • Uses declarative configuration (desired state)
  • Runs anywhere (cloud, on-prem, edge)

Kubernetes does:

  • Deploys and updates containers
  • Scales applications up and down
  • Self-heals when things fail
  • Provides service discovery
  • Manages configuration

Key insight: Kubernetes is NOT a container runtime—it orchestrates containers that runtimes like containerd actually run.


Module 1.2: Container Fundamentals - Understanding containers before diving into Kubernetes architecture.