Module 1.7: Namespaces and Labels
Complexity: [MEDIUM]
Time to Complete: 35-40 minutes
Prerequisites: Module 1.4: Deployments, Module 1.5: Services
Learning Outcomes
After completing this module, you will be able to:
- Segment a monolithic Kubernetes cluster into isolated logical environments using Namespaces and context switching.
- Design label taxonomies that enable targeted resource selection for Services, Deployments, and operational tooling.
- Differentiate between equality-based and set-based selectors to query and manipulate specific subsets of resources.
- Defend cluster stability by implementing namespace-level ResourceQuotas and LimitRanges to prevent resource starvation.
- Contrast the use cases of Labels versus Annotations for attaching metadata to Kubernetes objects.
Why This Module Matters
It is 2:00 AM on a Friday, and the platform engineering team is paged for a catastrophic cluster failure. The primary database backing the production e-commerce site has been evicted, and critical internal APIs are failing. The root cause? A junior developer deployed a load-testing tool in the default namespace without resource limits, which consumed 95% of the cluster’s CPU and memory. Because there was no logical isolation or resource quota enforcing boundaries between the development testing tools and the production workloads sharing the same cluster, a simple mistake brought down the entire business.
As your Kubernetes adoption grows, the physical cluster becomes a shared commodity. Running separate physical clusters for every team, environment (dev, staging, prod), and project is financially ruinous and operationally complex. Instead, you need a way to slice a single large cluster into dozens of “virtual clusters,” where teams can operate autonomously without stepping on each other’s toes or hoarding resources. You also need a standardized way to organize, search, and link thousands of disparate objects running inside those slices.
Namespaces are the walls that divide your cluster into manageable rooms. Labels are the sticky notes you attach to everything inside those rooms so you can find and connect them. Mastering these two concepts is the difference between a chaotic, fragile cluster where one command accidentally deletes production, and a highly structured, multi-tenant platform where blast radii are strictly contained.
Section 1: Namespaces – The Virtual Clusters
At its core, a Namespace is a logical partition within a Kubernetes cluster. When you first install Kubernetes, you are dropped into a flat landscape where every Pod, Service, and ConfigMap lives side-by-side. As the number of resources grows, naming collisions become inevitable (you can only have one Service named redis in a given namespace), and managing permissions becomes a nightmare.
Namespaces solve this by providing scope for names. You can have a redis Service in the frontend namespace and a completely different redis Service in the backend namespace.
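As a sketch of this name scoping (the namespace and selector names here are illustrative), two Services can share the name `redis` as long as they live in different namespaces:

```yaml
# Two distinct Services may share the name "redis"
# because resource names are only unique per namespace.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: frontend    # scoped to the frontend namespace
spec:
  selector:
    app: redis-frontend
  ports:
  - port: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: backend     # same name, different scope - no collision
spec:
  selector:
    app: redis-backend
  ports:
  - port: 6379
```

Applying both manifests succeeds, whereas creating a second `redis` Service inside either namespace would be rejected with an "already exists" error.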
What Namespaces Isolate (and What They Do Not)
It is crucial to understand exactly what a Namespace provides out of the box, because many engineers assume it provides strict security isolation. It does not.
```mermaid
graph TD
  subgraph "Kubernetes Cluster"
    subgraph "Cluster-Scoped Resources (No Namespace)"
      N1[Node 1]
      N2[Node 2]
      PV[Persistent Volumes]
      CR[ClusterRoles]
    end
    subgraph "Namespace: prod-frontend"
      P1[Pod: web-prod]
      S1[Service: web-svc]
      R1[Role: frontend-admin]
    end
    subgraph "Namespace: dev-backend"
      P2[Pod: api-dev]
      S2[Service: api-svc]
      R2[Role: backend-dev]
    end
    P1 -. "Network traffic is ALLOWED by default!" .-> P2
  end
```

Namespaces DO isolate:
- Naming: Resource names must be unique within a namespace, but not across namespaces.
- DNS Records: Kubernetes assigns DNS names to Services in the format `<service-name>.<namespace>.svc.cluster.local`. This means a Pod in namespace `prod-frontend` can talk to a Service in namespace `dev-backend` by querying its fully qualified domain name (FQDN).
- Role-Based Access Control (RBAC): You can grant a user full admin rights in the `dev` namespace while giving them zero access to the `prod` namespace using RoleBindings.
- Resource Quotas: You can restrict the total amount of CPU, Memory, or Storage that can be consumed by all resources combined within a single namespace.
Namespaces DO NOT isolate:
- Network Traffic: By default, the Kubernetes network is entirely flat. A Pod in the `dev` namespace can ping, port-scan, and connect to a Pod in the `prod` namespace. Namespaces do not provide network segmentation. (To isolate traffic, you must implement `NetworkPolicy` objects.)
- Node Placement: Unless configured otherwise with NodeSelectors or Taints, Pods from different namespaces will be scheduled onto the exact same underlying worker nodes, sharing the same host kernel and container runtime.
- Cluster-Scoped Resources: Certain foundational resources are not bound to any namespace. Examples include Nodes, PersistentVolumes (the storage itself, not the claim), StorageClasses, and ClusterRoles.
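To close the network gap described above, namespaces are typically paired with NetworkPolicy objects. A minimal sketch (the namespace name is illustrative) that allows ingress into `prod-frontend` only from Pods in that same namespace might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace-ingress
  namespace: prod-frontend     # the policy only governs Pods in this namespace
spec:
  podSelector: {}              # empty selector = applies to all Pods here
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}          # allow traffic only from Pods in prod-frontend itself
```

Note that NetworkPolicy objects are only enforced if your CNI plugin supports them (Calico and Cilium do; some basic network plugins silently ignore them).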
The Default Namespaces
When you spin up a fresh cluster, Kubernetes pre-populates it with four standard namespaces:
- `default`: The catch-all namespace. If you run `kubectl apply` without specifying a namespace, it goes here. In a mature production cluster, the `default` namespace should ideally be empty, as all workloads should be deployed to explicit, purpose-built namespaces.
- `kube-system`: The engine room. This is where the Kubernetes control plane components run. If you inspect this namespace, you will see critical Pods like CoreDNS (for service discovery), kube-proxy (for network routing), and your CNI plugin (like Calico or Cilium). You should rarely modify resources here unless you are operating the cluster infrastructure itself.
- `kube-public`: An automatically created namespace intended for resources that should be readable by all users (including unauthenticated ones). It was historically used to bootstrap clusters, but is rarely used in modern setups.
- `kube-node-lease`: Contains Lease objects associated with each node. These leases act as heartbeats. Instead of the control plane constantly pinging massive Node status objects, each node simply updates a tiny Lease object in this namespace every few seconds. If a lease expires, the node is considered dead.
Interacting with Namespaces
Managing namespaces via the command line is straightforward, but constantly appending `-n <namespace>` to every `kubectl` command is tedious and prone to error.
To create a namespace imperatively:
```shell
kubectl create namespace team-frontend
```

To list all namespaces:

```shell
kubectl get namespaces
```

To run a command against a specific namespace:

```shell
kubectl get pods -n team-frontend
```

Context Switching:
Instead of specifying the namespace on every command, you can change your default namespace context within your kubeconfig file.
```shell
# View your current context configuration to see your active namespace
kubectl config view --minify | grep namespace:

# Set the default namespace for your current context to 'team-frontend'
kubectl config set-context --current --namespace=team-frontend

# Now this command automatically looks in 'team-frontend'
kubectl get pods
```

(War Story: A senior engineer once meant to delete a broken deployment in the staging namespace. They forgot the `-n staging` flag. Their context was set to production. They deleted the primary ingress controller for the entire company. Always verify your context before running destructive commands, and use tools like kubectx and kubens to make context switching visible and safe.)
Active Learning Prompt 1
Scenario: You are trying to deploy a new monitoring agent that needs to discover all Nodes in the cluster. You define the DaemonSet in the monitoring namespace. However, when you try to list the nodes using a Role bound to that namespace, it fails with a permissions error.
Question: Based on how namespaces partition resources, why is your agent failing to list the Nodes, and how would you conceptually fix it?
Click for the Answer
Nodes are cluster-scoped resources, meaning they do not belong to any namespace. A Role and RoleBinding only grant permissions within a specific namespace. To grant permissions to list Nodes, you must use a ClusterRole and a ClusterRoleBinding, which operate outside the boundaries of namespaces and apply cluster-wide.
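A minimal sketch of that fix (the role, ServiceAccount, and binding names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader              # cluster-scoped: no namespace field
rules:
- apiGroups: [""]                # the core API group contains Nodes
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-node-reader
subjects:
- kind: ServiceAccount
  name: monitoring-agent         # the DaemonSet's ServiceAccount (illustrative)
  namespace: monitoring          # subjects may be namespaced; the grant is cluster-wide
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
```

The subject still lives in the `monitoring` namespace, but because the binding is a ClusterRoleBinding, the permission it grants is not confined to any namespace.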
Section 2: Labels and Selectors – The Glue of Kubernetes
If Namespaces are the walls, Labels are the organization system. Labels are simply key-value pairs attached to the metadata of Kubernetes objects. While names and UIDs identify objects uniquely, labels identify objects logically.
Kubernetes is a highly decoupled system. Deployments do not “own” Pods through a hardcoded list of IDs in a database. Services do not route traffic to Pods based on static IP addresses. Instead, they find each other dynamically using Label Selectors. This design allows the cluster to be incredibly dynamic—Pods can die and be recreated with new IPs and new names, and as long as they possess the correct labels, the system instantly self-heals the connections.
Label Naming Conventions
Section titled “Label Naming Conventions”A label key consists of an optional prefix and a name, separated by a slash (/).
- Prefix: Must be a valid DNS subdomain (e.g., `company.com/`). The `kubernetes.io/` and `k8s.io/` prefixes are reserved for core Kubernetes components. Using a prefix is best practice for custom labels to avoid collisions with other tools.
- Name: Must be 63 characters or less, beginning and ending with an alphanumeric character.
- Value: Must be 63 characters or less (can be empty).
Real-World Labeling Strategy: Do not randomly label resources. Adopt a standardized taxonomy across your organization. The Kubernetes documentation recommends a set of standard labels, but you can define your own. A mature labeling strategy enables granular cost allocation, security auditing, and operational triage.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-processor-v2
  namespace: team-backend
  labels:
    app.kubernetes.io/name: payment-processor
    app.kubernetes.io/version: "2.1.4"
    app.kubernetes.io/part-of: e-commerce-suite
    tier: backend
    environment: production
    cost-center: "finance-ops"
spec:
  containers:
  - name: app
    image: payment-app:2.1.4
```

With these labels in place, you can query your cluster exactly like a database.
```shell
# Find all backend pods
kubectl get pods -l tier=backend

# Find all production payment processors
kubectl get pods -l app.kubernetes.io/name=payment-processor,environment=production
```

Equality-Based vs. Set-Based Selectors
When querying or linking resources, Kubernetes supports two types of selectors. It is critical to know which resources support which type.
1. Equality-Based Selectors
These allow filtering by exact matches. Operators are =, ==, and !=. Multiple requirements are separated by commas and act as a logical AND.
```shell
# Select pods where environment is exactly 'production'
kubectl get pods -l environment=production

# Select pods where tier is NOT 'frontend'
# (quoted so that '!' is not expanded by interactive shells)
kubectl get pods -l 'tier!=frontend'
```

In YAML, Services currently only support equality-based selectors:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: payment-svc
spec:
  selector:
    app.kubernetes.io/name: payment-processor
    environment: production
```

Stop and think: You need to find all pods in the `production` or `staging` environment that are NOT part of the `cache` tier. Write the `kubectl` selector expression to achieve this before reading the next section.
2. Set-Based Selectors
These allow filtering according to a set of values, enabling much more expressive queries. Operators are in, notin, and exists (checking if the key exists, regardless of value).
```shell
# Select pods where environment is either 'production' OR 'staging'
kubectl get pods -l 'environment in (production, staging)'

# Select pods that have the 'release' label, regardless of its value
kubectl get pods -l release
```

In YAML, newer controllers like Deployments and Jobs use set-based selectors via `matchExpressions`, while still supporting equality via `matchLabels`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-deploy
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: payment-processor
    matchExpressions:
    - {key: environment, operator: In, values: [production, staging]}
    - {key: release, operator: Exists}
```

The Power of Decoupling: Canary Deployments
Because Services route traffic based purely on labels, you can manipulate traffic flow without touching the Service itself. This is the foundation of advanced deployment strategies like blue/green and canary releases.
Imagine you have a Service selecting app=frontend. You have a Deployment with 10 Pods labeled app=frontend, version=v1. All traffic flows to v1.
You want to test v2 with a small amount of live traffic. You simply deploy a single Pod (or a small Deployment) labeled app=frontend, version=v2.
Because both versions share the app=frontend label, the Service will automatically begin routing ~10% of traffic (1 out of 11 Pods) to the new v2 Pod. If it fails, you delete the v2 Pod. If it succeeds, you scale up v2 and scale down v1. The Service never knew what happened; it just routed to whatever matched its selector.
```mermaid
graph TD
  User-->SVC[Service\nselector: app=frontend]
  subgraph "Deployment v1 (10 Replicas)"
    P1[Pod\napp=frontend\nversion=v1]
    P2[Pod\napp=frontend\nversion=v1]
    P3[...]
  end
  subgraph "Deployment v2 (1 Replica - Canary)"
    P4[Pod\napp=frontend\nversion=v2]
  end
  SVC-->P1
  SVC-->P2
  SVC-->P3
  SVC-. "10% Traffic" .->P4
```

Section 3: Annotations – Metadata for Machines
Labels are used by Kubernetes to select and group objects. Annotations, on the other hand, are used to attach arbitrary, non-identifying metadata to objects.
If you try to put a 500-character JSON string into a Label, Kubernetes will reject it. Labels have strict length limits and are indexed by the API server for incredibly fast querying. Annotations are not indexed, cannot be used to select objects, and have a massive size budget (256KB of total annotation data per object).
Pause and predict: Look at these 5 metadata items: 1) team name for filtering, 2) Git SHA of the commit, 3) Prometheus scrape config (`true`), 4) cost center for billing grouping, 5) SSL certificate directive for an ingress controller. Which ones should be Labels and which should be Annotations? Why?
When to use Labels:
- Grouping (e.g., `tier: frontend`)
- Selecting (e.g., linking a Service to Pods, or a Deployment to ReplicaSets)
- Filtering output in `kubectl`
When to use Annotations:
- Storing build/release information (e.g., the Git commit SHA that triggered the deployment)
- Providing configuration directives to external controllers (e.g., telling an Ingress Controller which SSL certificate to use, or telling a cloud provider to provision an internal load balancer)
- Adding contact information for the team responsible for the resource
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reporting-job
  labels:
    app: reporting
  annotations:
    builder/author: "jane.doe@company.com"
    build/commit-sha: "a1b2c3d4e5f6g7h8i9j0"
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
```

In the example above, Prometheus (a monitoring tool running in the cluster) will scan the API server, look for the `prometheus.io/scrape` annotation on Pods, and know exactly how to pull metrics from this specific pod, without cluttering the indexed Labels.
Active Learning Prompt 2
Scenario: You are deploying a new internal wiki. You want the IT department to easily find it using `kubectl get pods -l dept=it`. You also need to pass a 400-character JSON configuration string to a third-party backup tool that watches the cluster for new deployments.
Task: Write the metadata section of the Pod YAML that satisfies both requirements.
Click for the Answer
```yaml
metadata:
  name: internal-wiki
  labels:
    dept: it
  annotations:
    backup.company.com/config: '{"schedule": "0 2 * * *", "retention_days": 30, "storage_class": "cold-archive", "exclude_paths": ["/tmp", "/var/cache"]}'
```

Explanation: The `dept=it` requirement is used for selection/filtering, so it must be a Label. The 400-character JSON string is non-identifying configuration for an external tool and exceeds label limits, so it must be an Annotation.
Section 4: ResourceQuotas and LimitRanges
If you give a team a Namespace, they now have a sandbox. But without limits, a child can build a sandcastle so big it spills out of the sandbox and crushes the rest of the playground.
In Kubernetes, resource management happens in two phases:
- Requests: What a Pod says it needs to run. The `kube-scheduler` uses requests to find a Node with enough available capacity to fit the Pod.
- Limits: The hard ceiling on what a Pod can actually consume at runtime. If a Pod uses more memory than its limit, the `kubelet` terminates it (OOMKilled). If it tries to use more CPU, it is throttled.
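In a Pod spec, requests and limits are declared per container. A minimal sketch (the Pod name and the specific values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.24-alpine
    resources:
      requests:
        cpu: "250m"      # scheduler reserves a quarter of a core on some Node
        memory: "128Mi"  # scheduler reserves 128 MiB
      limits:
        cpu: "500m"      # CPU usage beyond half a core is throttled
        memory: "256Mi"  # exceeding 256 MiB gets the container OOMKilled
```

The scheduler only looks at `requests` when placing the Pod; `limits` are enforced later, at runtime, by the kubelet and the container runtime.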
By default, a Pod in any namespace can consume as much CPU and memory as the underlying node possesses. To prevent the “noisy neighbor” problem—where one runaway application starves every other application on the cluster—you must enforce boundaries at the Namespace level.
ResourceQuotas: The Namespace Budget
A ResourceQuota limits the total aggregate consumption of resources across an entire namespace. It acts as a strict budget.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: frontend-quota
  namespace: team-frontend
spec:
  hard:
    requests.cpu: "4"            # Total CPU requests cannot exceed 4 cores
    requests.memory: "8Gi"       # Total memory requests cannot exceed 8 Gigabytes
    limits.cpu: "8"              # Total CPU limits cannot exceed 8 cores
    limits.memory: "16Gi"        # Total memory limits cannot exceed 16 Gigabytes
    pods: "20"                   # Cannot create more than 20 pods total
    services.loadbalancers: "2"  # Cannot request more than 2 cloud LoadBalancers
```

If the `team-frontend` namespace currently uses 3.5 CPU cores, and a developer tries to deploy a new Pod requesting 1 CPU core, the API server will reject the Pod creation with a `403 Forbidden` error, stating the quota has been exceeded.
LimitRanges: The Default Guardrails
ResourceQuotas have a severe catch: Once you apply a quota for CPU or Memory to a namespace, every single Pod created in that namespace MUST specify CPU and Memory requests/limits. If a Pod omits them, it will be instantly rejected by the admission controller.
Developers often forget to add these blocks to their YAML. To prevent frustration, you use a LimitRange. A LimitRange automatically injects default CPU/Memory requests and limits into any Pod that forgets to declare them. It also sets minimum and maximum boundaries for a single Pod to prevent someone from deploying a Pod that takes up the entire namespace quota.
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: frontend-limits
  namespace: team-frontend
spec:
  limits:
  - default:
      cpu: "500m"       # If limit is omitted, inject this
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"       # If request is omitted, inject this
      memory: "256Mi"
    max:
      cpu: "2"          # No single container can request/limit more than 2 CPU
      memory: "4Gi"
    min:
      cpu: "10m"        # No single container can request/limit less than 10m CPU
      memory: "64Mi"
    type: Container
```

The Interplay: The LimitRange ensures every Pod has a defined size (either explicitly or by default injection). The ResourceQuota ensures the sum of all those sizes does not exceed the namespace’s total budget. You should never implement a ResourceQuota without a LimitRange.
Stop and think: A namespace has a ResourceQuota of 4 CPU cores and a LimitRange default of 500m CPU. A developer deploys 7 Pods without specifying resources. What happens when they try to deploy an 8th? Walk through the math.
Did You Know?
- Labels are strictly strings: Even if you intend `version: 1.0` as a string, a YAML parser will interpret the unquoted value as a number (a float, in this case), and the API server will reject non-string label values. This is why it is best practice to always quote label values: `version: "1.0"`.
- A Namespace cannot be deleted quickly: When you run `kubectl delete namespace <name>`, the namespace enters a `Terminating` state. It will hang in this state until every single object inside it is successfully garbage collected. If a resource has a broken finalizer that hangs, the namespace deletion will block indefinitely until the finalizer is manually patched.
- The 63-character limit is historical: The 63-character limit for label keys and values originates from the DNS standard RFC 1123, which restricts DNS labels to 63 characters. Kubernetes adopted this to ensure labels could safely map to DNS structures and internal naming conventions.
- Annotations can hold 256KB: While label values are severely restricted, the annotations on a single object can collectively hold up to 256 kilobytes of data. This makes them large enough to store entire configuration files, small shell scripts, or large JSON policy documents.
Common Mistakes
| Mistake | Why It Happens | How To Fix It |
|---|---|---|
| Assuming Namespaces isolate network traffic | Namespaces provide logical isolation, but the underlying overlay network is flat by default. Any pod can ping any pod. | Implement Kubernetes NetworkPolicy objects to explicitly deny cross-namespace ingress/egress traffic. |
| Using Equality-Based Selectors in Deployments | The matchLabels block in a Deployment looks like equality, but Deployments actually require the more robust set-based selector engine under the hood to manage ReplicaSets. | Use matchLabels (which converts to In) or explicitly use matchExpressions in apps/v1 Deployments. |
| Storing large JSON configs in Labels | Misunderstanding the purpose of labels vs annotations. Labels have strict length limits and are indexed, causing API server bloat if misused. | Move any non-identifying metadata, configuration blobs, or human-readable notes into annotations. |
| Creating ResourceQuotas without LimitRanges | The cluster admin locks down the namespace with a quota, causing all existing CI/CD pipelines to fail because the Pods lack explicit resource requests. | Always deploy a LimitRange with default values before or alongside enforcing a ResourceQuota. |
| Creating objects in `default` namespace | Laziness or lack of context awareness. Leads to a cluttered, unmanageable cluster where blast radii overlap heavily. | Always define `namespace: <name>` in your YAML metadata, or use tools like kubens to strictly set your context. |
| Mutating immutable label selectors | Trying to change the matchLabels selector of an existing Deployment. The API server will reject it because the Deployment would lose track of its existing ReplicaSets. | You must delete the Deployment (and its associated resources) and recreate it if the core selector taxonomy changes. |
Question 1: You have a namespace called analytics. You deploy a Service named data-sink in this namespace. What is the fully qualified DNS name (FQDN) that a Pod in the default namespace should use to connect to it?
Answer: data-sink.analytics.svc.cluster.local. By default, Kubernetes DNS searches for services within the same namespace as the querying Pod. Because the querying Pod is in the default namespace and the target Service is in the analytics namespace, a short name like data-sink will fail to resolve. To cross the logical namespace boundary, the Pod must provide the fully qualified domain name (FQDN), which explicitly includes the target namespace, the svc subdomain, and the cluster's base domain.
Question 2: A severe latency issue is affecting your user-facing applications. You need to immediately restart all Pods that belong to either the frontend or the cache tier to clear their memory, but you must leave the database and backend tiers alone. What kubectl selector command would you run to target exactly these two tiers?
Answer: You would use the set-based selector command: kubectl delete pods -l 'tier in (frontend, cache)'. Equality-based selectors (like tier=frontend) only support exact matches and logical AND operations, meaning they cannot express an "OR" condition. Set-based selectors introduce operators like in, notin, and exists, allowing you to evaluate multiple potential values for a single label key in a single query.
Question 3: A developer complains that their Pod is stuck in a Pending state, and the event log says forbidden: exceeded quota: pod-quota, requested: pods=1, used: 10, limited: 10. What is the root cause?
Answer: The namespace has a ResourceQuota named pod-quota that enforces a hard limit of exactly 10 Pods for the entire namespace. The developer is attempting to deploy an 11th Pod, but the admission controller intercepts the request and calculates that it would violate the quota. Consequently, the API server rejects the creation of the Pod outright, leaving the deployment controller unable to fulfill the requested state until an existing Pod is deleted or the quota is increased.
Question 4: You need to store the email address of the team responsible for a Deployment so that a custom alerting script can notify them if the Deployment fails. Should you use a Label or an Annotation?
Answer: You should use an Annotation. Labels are designed strictly for identifying and grouping objects within Kubernetes, and they are indexed by the API server to facilitate fast querying (like matching a Service to Pods). An email address is non-identifying metadata intended for external tools or human operators, and you will never natively ask Kubernetes to "select all pods by this email address." Using an Annotation prevents cluttering the API server's index while providing ample space (up to 256KB) for the data.
Question 5: You apply a new LimitRange with a default CPU limit of 200m to a namespace that already has 10 running Pods. These existing Pods were deployed without any resource limits. A developer panics, arguing that their currently running CPU-heavy batch jobs will immediately be throttled and fail. Are they correct?
Answer: No, the developer is incorrect. Kubernetes evaluates LimitRange (and ResourceQuota) policies strictly via Admission Controllers at the exact moment an object is created or updated. Existing Pods that are already running are completely ignored by these new rules and will not be retroactively modified, throttled, or evicted. The new 200m CPU limit will only apply to new Pods deployed after the LimitRange was created, or if the existing Pods are restarted.
Question 6: A junior developer needs to be able to delete and restart Pods to troubleshoot their application. However, security policies dictate they must not have this permission anywhere except within the sandbox namespace. Can namespaces provide this level of security isolation?
Answer: Yes, namespaces excel at this specific type of logical security isolation. Namespaces natively integrate with Kubernetes Role-Based Access Control (RBAC). By creating a Role that permits Pod deletion, and a RoleBinding that assigns this Role to the developer specifically within the sandbox namespace, you restrict their permissions entirely to that boundary. They will have no access to view or delete Pods in default, production, or any other namespace.
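A minimal sketch of such a setup (the user, role, and binding names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-troubleshooter
  namespace: sandbox             # the Role exists only inside this namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: junior-dev-troubleshooter
  namespace: sandbox             # the binding (and thus the grant) stops at this boundary
subjects:
- kind: User
  name: junior-dev               # illustrative user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-troubleshooter
  apiGroup: rbac.authorization.k8s.io
```

Because both the Role and the RoleBinding are namespaced objects, the developer's `delete pods` permission simply does not exist outside `sandbox`.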
Hands-On Exercise: The Multi-Tenant Sandbox
In this exercise, you will create an isolated namespace, establish resource guardrails, and use labels to orchestrate a simulated canary deployment.
Task 1: Create the Isolation Zone
Create a new namespace for a simulated development team and switch your active context to it.
Solution
```shell
# Create the namespace
kubectl create namespace alpha-team

# Switch your context to the new namespace
kubectl config set-context --current --namespace=alpha-team

# Verify you are in the new namespace
kubectl config view --minify | grep namespace:
```

Task 2: Establish the Guardrails
Apply a LimitRange to the alpha-team namespace to ensure no developer can deploy a Pod without bounds, and no single Pod can consume more than 500m CPU.
- Create a file named `guardrails.yaml`.
- Define a `LimitRange` setting the default CPU request to `100m`, default limit to `200m`, and max limit to `500m`.
- Apply it.
Solution
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-guardrails
  namespace: alpha-team
spec:
  limits:
  - default:
      cpu: "200m"
    defaultRequest:
      cpu: "100m"
    max:
      cpu: "500m"
    type: Container
```

```shell
kubectl apply -f guardrails.yaml

# Verify the LimitRange
kubectl describe limitrange cpu-guardrails
```

Task 3: Deploy the Stable Release
Deploy a simple NGINX application representing the stable version of your app. It should consist of 3 replicas.
Label the deployment with app: web and version: v1.
Solution
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      version: v1
  template:
    metadata:
      labels:
        app: web
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx:1.24-alpine
```

```shell
kubectl apply -f web-v1.yaml
kubectl get pods --show-labels
```

(Notice that even though you didn’t specify CPU limits in the Pod template, if you `kubectl describe` the pods, they will have the 200m limit automatically injected by the LimitRange you created in Task 2.)
Task 4: Expose the App
Create a Service named web-svc that routes traffic to your application. It should select traffic based only on the app: web label (ignoring the version).
Solution
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

```shell
kubectl apply -f web-svc.yaml

# Verify the service is finding the 3 v1 pods
kubectl get endpoints web-svc
```

Task 5: The Canary Release
Deploy a single Pod representing the new v2 version of your application. Label it with app: web and version: v2.
Observe how the Service handles this new Pod.
Solution
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-v2-canary
  labels:
    app: web
    version: v2
spec:
  containers:
  - name: nginx
    image: nginx:1.25-alpine
```

```shell
kubectl apply -f web-v2-canary.yaml

# Check the endpoints again
kubectl get endpoints web-svc
```

Observation: The Service’s endpoint list now contains 4 IP addresses. Because the Service selector only looks for `app: web`, and both v1 and v2 have that label, the Service automatically load-balances traffic across both versions. You have successfully executed a basic canary release!
Task 6: Cleanup
Delete the namespace to instantly destroy all resources created in this exercise, and return to the default namespace.
Solution
```shell
# Delete the namespace (this takes a few moments as it cleans up everything inside it)
kubectl delete namespace alpha-team

# Reset your context back to default
kubectl config set-context --current --namespace=default
```

Next Module
You have now mastered the art of organizing and isolating resources. You understand how controllers find Pods dynamically, and how namespaces keep teams from colliding. However, creating these resources via imperative kubectl create commands is a path to unmaintainable, brittle infrastructure. It is time to speak the native language of the Kubernetes API.
Proceed to Module 1.8: YAML for Kubernetes — Master the language Kubernetes speaks to learn how to declare your desired state in code, making your infrastructure repeatable, version-controllable, and bulletproof.