Module 2.1: ArgoCD
Toolkit Track | Complexity: [COMPLEX] | Time: 45-50 min
The on-call engineer’s phone buzzed at 2 AM: “Production is down.” She SSH’d into the bastion, ran kubectl get pods—everything looked fine. But customers were seeing errors. After two hours of frantic debugging, she discovered it: someone had manually scaled the payment service to zero replicas “for testing” three days ago, then forgot to scale it back. No ticket. No pull request. No audit trail. The change existed only in cluster state, invisible to monitoring, unknown to the team. That night cost the e-commerce platform $890,000 in lost Black Friday pre-orders. The next Monday, the CTO demanded answers. “How do we prevent this from ever happening again?” The answer was GitOps—and ArgoCD became the tool that would transform their deployment culture from chaos to confidence.
Prerequisites
Before starting this module:
- GitOps Discipline — GitOps principles and practices
- Kubernetes basics (Deployments, Services, Namespaces)
- Git fundamentals
- kubectl experience
What You’ll Be Able to Do
After completing this module, you will be able to:
- Deploy ArgoCD and configure Applications that sync Kubernetes manifests from Git repositories
- Implement multi-cluster GitOps with ArgoCD ApplicationSets and cluster generators
- Configure sync policies, health checks, and automated rollback strategies for production deployments
- Secure ArgoCD with RBAC, SSO integration, and project-scoped access controls
Why This Module Matters
ArgoCD is the most popular GitOps tool in the Kubernetes ecosystem. It watches Git repositories and automatically syncs your cluster state to match what’s defined in version control. No more kubectl apply from laptops—every change is auditable, reviewable, and reversible.
Understanding ArgoCD isn’t just about knowing the tool—it’s about adopting a deployment philosophy that eliminates configuration drift and makes rollbacks trivial.
Did You Know?
- ArgoCD syncs over 1 million applications in production—it’s used by Intuit (its creator), Tesla, NVIDIA, and thousands of companies
- The name “Argo” comes from Greek mythology—the ship that carried Jason and the Argonauts, fitting for a tool that “navigates” deployments
- ArgoCD was originally built for Intuit’s 150+ Kubernetes clusters—they needed a way to manage deployments at scale without tribal knowledge
- ArgoCD supports 50+ config management tools—Helm, Kustomize, Jsonnet, plain YAML, and custom plugins
ArgoCD Architecture
```
GIT REPOSITORY
  apps/
  ├── deployment.yaml
  ├── service.yaml
  └── configmap.yaml
        │
        │  Watch + Fetch
        ▼
ARGOCD SERVER
  ┌─────────────┐  ┌─────────────┐  ┌──────────────┐
  │ API Server  │  │ Repo Server │  │ Application  │
  │             │  │             │  │ Controller   │
  │ • UI/CLI    │  │ • Clone     │  │ • Watch      │
  │ • Auth      │  │ • Render    │  │ • Sync       │
  │ • RBAC      │  │ • Cache     │  │              │
  └─────────────┘  └─────────────┘  └──────┬───────┘
                                           │
                                           │  Apply
                                           ▼
KUBERNETES CLUSTER
  ┌─────────┐  ┌─────────┐  ┌───────────┐  ┌─────────┐
  │ Deploy  │  │ Service │  │ ConfigMap │  │ Secret  │
  └─────────┘  └─────────┘  └───────────┘  └─────────┘
```
Core Components
| Component | Purpose |
|---|---|
| API Server | Serves UI, CLI, RBAC, webhook endpoints |
| Repo Server | Clones repos, renders manifests (Helm/Kustomize) |
| Application Controller | Watches apps, detects drift, triggers sync |
| Dex | OIDC provider for SSO integration |
| Redis | Caching for repo server performance |
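To make the Application Controller’s job concrete, here is a minimal Python sketch of the sync-status decision: compare the manifests rendered from Git against live cluster objects and classify each resource. This models the idea only; it is not ArgoCD’s actual implementation, and the dicts stand in for full Kubernetes objects.

```python
# Illustrative model of the Application Controller's sync-status decision.
# Real ArgoCD diffs full Kubernetes objects; dicts stand in for manifests here.

def reconcile(desired: dict, live: dict) -> dict:
    """Classify each resource as Synced or OutOfSync."""
    status = {}
    for name, manifest in desired.items():
        if name not in live:
            status[name] = "OutOfSync (missing from cluster)"
        elif live[name] != manifest:
            status[name] = "OutOfSync (drifted)"
        else:
            status[name] = "Synced"
    # Live resources absent from Git are prune candidates
    for name in live.keys() - desired.keys():
        status[name] = "OutOfSync (prune candidate)"
    return status

desired = {"Deployment/my-app": {"replicas": 3}, "Service/my-app": {"port": 80}}
live = {"Deployment/my-app": {"replicas": 1}, "ConfigMap/old": {}}
print(reconcile(desired, live))
```

The controller runs this comparison continuously; syncing is then just applying the desired manifests (and, with prune enabled, deleting the prune candidates).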
Installing ArgoCD
Quick Install
```bash
# Create namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for pods
kubectl -n argocd wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server --timeout=120s

# Get initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

# Port forward to access UI
kubectl -n argocd port-forward svc/argocd-server 8080:443 &
```
Production Install with Helm
```bash
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  --set server.replicas=2 \
  --set controller.replicas=2 \
  --set repoServer.replicas=2 \
  --set redis.enabled=true \
  --set server.ingress.enabled=true \
  --set server.ingress.hosts[0]=argocd.example.com
```
ArgoCD CLI
```bash
# Install CLI (macOS)
brew install argocd
# or (Linux)
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd && sudo mv argocd /usr/local/bin/

# Login
argocd login localhost:8080 --username admin --password <password> --insecure

# Add cluster (if managing external clusters)
argocd cluster add my-cluster-context
```
Applications
Basic Application
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/app-manifests.git
    targetRevision: HEAD
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # Delete resources removed from Git
      selfHeal: true  # Revert manual changes
    syncOptions:
      - CreateNamespace=true
```
Application with Helm
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.bitnami.com/bitnami
    chart: nginx
    targetRevision: 15.4.0
    helm:
      releaseName: nginx
      values: |
        replicaCount: 3
        service:
          type: ClusterIP
      # Or reference values files from Git:
      # valueFiles:
      #   - values-production.yaml
      # Or set individual parameters:
      # parameters:
      #   - name: replicaCount
      #     value: "3"
  destination:
    server: https://kubernetes.default.svc
    namespace: nginx
```
Application with Kustomize
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/app.git
    targetRevision: HEAD
    path: overlays/production
    # Kustomize-specific options
    kustomize:
      images:
        - myapp=myregistry/myapp:v2.0.0
      namePrefix: prod-
      commonLabels:
        env: production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
```
Sync Strategies
Sync Waves and Hooks
```yaml
# Sync waves: control the order of resource creation
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "-1"  # Create first
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # Create second
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "1"   # Create third
```
Resource Hooks
```yaml
# Pre-sync hook: run before sync
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: myapp:latest
          command: ["./migrate.sh"]
      restartPolicy: Never
---
# Post-sync hook: run after sync
apiVersion: batch/v1
kind: Job
metadata:
  name: notify-slack
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: notify
          image: curlimages/curl
          command:
            - curl
            - -X
            - POST
            - $(SLACK_WEBHOOK)
            - -d
            - '{"text":"Deployment complete!"}'
      restartPolicy: Never
```
Hook Types
| Hook | When It Runs |
|---|---|
| PreSync | Before sync begins |
| Sync | During sync (with manifests) |
| PostSync | After all Sync hooks complete |
| SyncFail | When sync fails |
| Skip | Skip applying this resource |
App of Apps Pattern
Why App of Apps?
```
MANAGING 50 APPLICATIONS:

Without App of Apps:              With App of Apps:
─────────────────────────────────────────────────────────────────
argocd/                           argocd/
├── app1.yaml                     └── root-app.yaml  ◀── ONE FILE
├── app2.yaml
├── app3.yaml                     apps/
├── ...                           ├── app1/
└── app50.yaml                    │   └── application.yaml
                                  ├── app2/
50 Application CRs to manage      │   └── application.yaml
                                  └── ...
Problem: how do you deploy            ▲
the Application CRs themselves?       │ Root app watches this directory
```
Implementing App of Apps
```yaml
# root-app.yaml - the "app of apps"
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/argocd-apps.git
    targetRevision: HEAD
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

```
# Repository structure
argocd-apps/
├── apps/
│   ├── cert-manager/
│   │   └── application.yaml
│   ├── ingress-nginx/
│   │   └── application.yaml
│   ├── monitoring/
│   │   └── application.yaml
│   └── my-apps/
│       ├── app1.yaml
│       ├── app2.yaml
│       └── app3.yaml
└── root-app.yaml
```

```yaml
# apps/cert-manager/application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "-2"  # Install early
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.13.0
    helm:
      values: |
        installCRDs: true
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
```
ApplicationSets
Template-Based Application Generation
```yaml
# Generate apps from Git directories
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/org/cluster-addons.git
        revision: HEAD
        directories:
          - path: addons/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/cluster-addons.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
```
Multi-Cluster Deployment
```yaml
# Deploy to multiple clusters
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: production
            url: https://prod.k8s.example.com
            values:
              replicas: "5"
          - cluster: staging
            url: https://staging.k8s.example.com
            values:
              replicas: "2"
  template:
    metadata:
      name: 'my-app-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/my-app.git
        targetRevision: HEAD
        path: deploy
        helm:
          parameters:
            - name: replicas
              value: '{{values.replicas}}'
      destination:
        server: '{{url}}'
        namespace: my-app
```
Generator Types
| Generator | Use Case |
|---|---|
| list | Static list of elements |
| clusters | All registered ArgoCD clusters |
| git | Directories or files in a Git repo |
| matrix | Combine two generators |
| merge | Merge multiple generators |
| pullRequest | GitHub/GitLab PRs for preview environments |
Projects and RBAC
ArgoCD Projects
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Team A's applications

  # Allowed source repos
  sourceRepos:
    - https://github.com/org/team-a-*
    - https://charts.bitnami.com/bitnami

  # Allowed destination clusters/namespaces
  destinations:
    - namespace: team-a-*
      server: https://kubernetes.default.svc
    - namespace: '*'
      server: https://staging.example.com

  # Allowed resource kinds
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'

  # Deny specific resources
  namespaceResourceBlacklist:
    - group: ''
      kind: Secret  # Can't create Secrets directly

  # Roles for this project
  roles:
    - name: developer
      description: Can sync applications
      policies:
        - p, proj:team-a:developer, applications, sync, team-a/*, allow
        - p, proj:team-a:developer, applications, get, team-a/*, allow
      groups:
        - team-a-developers  # OIDC group
```
RBAC Policies
```yaml
# argocd-rbac-cm ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Admin: full access
    g, admins, role:admin

    # Developer: sync and view
    p, role:developer, applications, get, */*, allow
    p, role:developer, applications, sync, */*, allow
    p, role:developer, logs, get, */*, allow

    # Viewer: read-only
    p, role:viewer, applications, get, */*, allow
    p, role:viewer, projects, get, *, allow

    # Map groups to roles
    g, developers, role:developer
    g, viewers, role:viewer
  scopes: '[groups]'
```
Multi-Tenancy
Namespace Isolation
```yaml
# Restrict a team to its namespaces
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  # Only these namespaces
  destinations:
    - namespace: payments-*
      server: https://kubernetes.default.svc

  # No cluster-scoped resources
  clusterResourceWhitelist: []

  sourceRepos:
    - https://github.com/company/payments-*

  # Warn about resources not tracked in Git
  orphanedResources:
    warn: true
```
Soft Multi-Tenancy Pattern
```
┌─────────────────────────────────────────────────────────┐
│                   MULTI-TENANT ARGOCD                   │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   TEAM A                        TEAM B                  │
│   ┌────────────────────┐        ┌────────────────────┐  │
│   │ Project: team-a    │        │ Project: team-b    │  │
│   │ Repos: org/team-a-*│        │ Repos: org/team-b-*│  │
│   │ NS: team-a-*       │        │ NS: team-b-*       │  │
│   └─────────┬──────────┘        └─────────┬──────────┘  │
│             ▼                             ▼             │
│   ┌────────────────────┐        ┌────────────────────┐  │
│   │ team-a-production  │        │ team-b-production  │  │
│   │ team-a-staging     │        │ team-b-staging     │  │
│   └────────────────────┘        └────────────────────┘  │
│                                                         │
│   SHARED ARGOCD INSTANCE                                │
│   • SSO via OIDC (groups → project roles)               │
│   • Audit logging enabled                               │
│   • Resource quotas per project                         │
└─────────────────────────────────────────────────────────┘
```
Common Mistakes
| Mistake | Why It’s Bad | Better Approach |
|---|---|---|
| Secrets in Git | Exposed credentials | Use External Secrets, Sealed Secrets, or Vault |
| No sync waves | Resources created in wrong order | Use sync-wave annotations for dependencies |
| Ignoring prune | Orphaned resources accumulate | Enable prune: true or manage orphaned resources |
| Manual kubectl changes | Drift from Git source | Enable selfHeal: true to revert changes |
| No projects | No isolation between teams | Create projects per team with RBAC |
| Hardcoded image tags | Can’t track what’s deployed | Use image updater or Git automation |
War Story: The $1.7 Million Git Merge
```
┌─────────────────────────────────────────────────────────────┐
│ THE $1.7 MILLION GIT MERGE                                  │
│ ─────────────────────────────────────────────────────────── │
│ Company:  B2B SaaS platform (500+ enterprise customers)     │
│ Stack:    127 microservices, 3 clusters, ArgoCD managed     │
│ The disaster: one merge, 47 services deleted, 6 hours down  │
└─────────────────────────────────────────────────────────────┘
```
Day 0 - The Merge
A developer was cleaning up the repository. “Let’s remove these old deployment files that are no longer needed.” He identified 47 services in the deprecated/ folder and deleted them. The PR passed code review—reviewers saw only file deletions, nothing alarming.
But there was a problem: the root ArgoCD Application had prune: true enabled. And the “deprecated” folder? It wasn’t deprecated at all. A naming refactor months earlier had moved services there, but they were still in production.
```
THE MERGE TIMELINE
─────────────────────────────────────────────────────────────
09:14 AM  PR merged to main
09:14 AM  ArgoCD detected change (30-second sync)
09:15 AM  ArgoCD synced: 47 services deleted from cluster
09:17 AM  First customer report: "API returning 503"
09:22 AM  PagerDuty: 2,847 alerts in 5 minutes
09:25 AM  Engineering all-hands: "What happened?!"
09:45 AM  Root cause identified: services deleted by GitOps
10:00 AM  Git revert pushed to main
10:02 AM  ArgoCD synced: services recreating
10:15 AM  Database connection pools exhausted (cold-start storm)
11:00 AM  Services recovering, still degraded
03:00 PM  Full recovery confirmed
```
The Fallout
```
INCIDENT IMPACT ASSESSMENT
─────────────────────────────────────────────────────────────
Downtime duration:    5 hours 45 minutes
Services affected:    47 of 127 (37%)
Customers impacted:   312 enterprise accounts
SLA violations:       89 customers (99.9% SLA)

Financial impact:
- SLA credit payouts:    $847,000
- Lost transactions:     $523,000
- Emergency response:    $67,000 (overtime, contractors)
- Customer churn (30d):  $312,000 (7 accounts)

Total quantifiable cost: $1,749,000
```
Why ArgoCD “Worked Correctly”
ArgoCD did exactly what it was configured to do:
- Git is the source of truth
- Files were deleted from Git
- `prune: true` was enabled
- ArgoCD deleted resources not in Git
The tool wasn’t broken—the process was.
The Fix: Defense in Depth
```yaml
# 1. Protect critical Applications with finalizers
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  annotations:
    # Require manual deletion, never auto-prune
    argocd.argoproj.io/sync-options: Prune=false
```

```yaml
# 2. Warn before pruning
spec:
  syncPolicy:
    automated:
      prune: false   # Changed from true!
      selfHeal: true
  # Enable orphan warnings instead of auto-delete
  orphanedResources:
    warn: true
```

```
# 3. CODEOWNERS protection for critical paths
# .github/CODEOWNERS
/apps/production/**   @platform-team @security-team
/infrastructure/**    @platform-team
```
The Cultural Change
After the incident, the team implemented:
- Prune disabled by default: Services opt-in to pruning with explicit annotation
- Two-person review for deletions: Any PR that removes files requires platform team approval
- Staging sync first: Production ArgoCD syncs only after 1-hour staging bake time
- Sync windows: Critical services can only sync during business hours
Key Lessons
- `prune: true` is a loaded gun: only enable it for namespaces you’re willing to lose
- Git history is your backup: but recovery requires understanding what ArgoCD will do
- Review deletions carefully: “Removing old files” PRs need scrutiny
- Staging isn’t optional: If ArgoCD would destroy staging, it’ll destroy production
- GitOps amplifies mistakes: The same property that makes recovery fast makes destruction fast
Question 1
What’s the difference between selfHeal and prune in ArgoCD sync policy?
Show Answer
selfHeal: Reverts manual changes made to the cluster that differ from Git. If someone runs kubectl edit deployment and changes replicas, ArgoCD will change it back.
prune: Deletes resources from the cluster that no longer exist in Git. If you remove a ConfigMap from your manifests, ArgoCD will delete it from the cluster.
Both can be dangerous if misconfigured:
- `selfHeal` can undo emergency fixes (disable before hotfixes)
- `prune` can delete stateful data (protect PVCs with annotations)
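The interplay of the two flags can be captured in a tiny decision function. This is a hedged sketch of the behavior described above; the function name and drift labels are illustrative, not part of any ArgoCD API:

```python
# Illustrative decision table for automated sync behavior.
# Drift kinds: "modified" (live object edited) or "orphaned" (removed from Git).

def sync_action(drift: str, self_heal: bool, prune: bool) -> str:
    if drift == "modified":
        return "revert to Git state" if self_heal else "report OutOfSync only"
    if drift == "orphaned":
        return "delete from cluster" if prune else "leave in cluster (orphan)"
    raise ValueError(f"unknown drift kind: {drift}")

# Someone ran `kubectl scale --replicas=0` by hand:
print(sync_action("modified", self_heal=True, prune=False))  # → revert to Git state
# A manifest was deleted from Git:
print(sync_action("orphaned", self_heal=True, prune=False))  # → leave in cluster (orphan)
```

Note the two flags are independent: enabling `selfHeal` alone never deletes anything, and enabling `prune` alone never reverts an in-place edit.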
Question 2
You have 5 services that must be deployed in order: Namespace → ConfigMap → Secret → Deployment → Ingress. How do you ensure this order?
Show Answer
Use sync waves with annotations:
```yaml
# namespace.yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "-2"

# configmap.yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "-1"

# secret.yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"

# deployment.yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"

# ingress.yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
```

Lower numbers sync first. ArgoCD waits for each wave’s resources to be healthy before proceeding.
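The ordering rule itself is easy to model: sort resources by the numeric sync-wave annotation, defaulting to 0 when the annotation is absent. A minimal Python sketch of that rule (not ArgoCD internals):

```python
# Order resources by argocd.argoproj.io/sync-wave (missing annotation = wave 0).

WAVE_KEY = "argocd.argoproj.io/sync-wave"

def wave_order(resources: list[dict]) -> list[str]:
    """Return resource names in the order ArgoCD would apply them."""
    def wave(r: dict) -> int:
        return int(r.get("annotations", {}).get(WAVE_KEY, "0"))
    return [r["name"] for r in sorted(resources, key=wave)]

resources = [
    {"name": "ingress",    "annotations": {WAVE_KEY: "2"}},
    {"name": "namespace",  "annotations": {WAVE_KEY: "-2"}},
    {"name": "secret"},  # no annotation → wave 0
    {"name": "deployment", "annotations": {WAVE_KEY: "1"}},
    {"name": "configmap",  "annotations": {WAVE_KEY: "-1"}},
]
print(wave_order(resources))
# → ['namespace', 'configmap', 'secret', 'deployment', 'ingress']
```

In real ArgoCD the sort also considers resource kind and hook phase, but the wave number is the primary user-facing knob.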
Question 3
How would you deploy the same application to 10 clusters with different configurations per cluster?
Show Answer
Use an ApplicationSet with a list generator:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
spec:
  generators:
    - list:
        elements:
          - cluster: prod-us
            url: https://prod-us.example.com
            replicas: "10"
            region: us-east-1
          - cluster: prod-eu
            url: https://prod-eu.example.com
            replicas: "5"
            region: eu-west-1
          # ... 8 more clusters
  template:
    metadata:
      name: 'my-app-{{cluster}}'
    spec:
      source:
        repoURL: https://github.com/org/my-app.git
        path: deploy
        helm:
          parameters:
            - name: replicas
              value: '{{replicas}}'
            - name: region
              value: '{{region}}'
      destination:
        server: '{{url}}'
        namespace: my-app
```

For dynamic cluster lists, use the clusters generator with labels.
Question 4
Your team accidentally pushed a broken config to Git and ArgoCD deployed it. How do you roll back?
Show Answer
Several options:
1. Git revert (recommended):

   ```bash
   git revert HEAD
   git push
   ```

   ArgoCD syncs the reverted state automatically.

2. ArgoCD rollback:

   ```bash
   argocd app rollback my-app 5
   ```

   This syncs the app back to a previously deployed revision (history ID 5). Note: if auto-sync is enabled, it will re-sync to HEAD.

3. Disable auto-sync, fix, re-enable:

   ```bash
   argocd app set my-app --sync-policy none
   # Fix the issue in Git
   argocd app sync my-app
   argocd app set my-app --sync-policy automated
   ```
Git revert is preferred because it maintains the audit trail and works with any sync policy.
Question 5
You’re managing 150 applications across 5 clusters. Using individual Application CRs is becoming unwieldy. What ArgoCD pattern would you use?
Show Answer
Use ApplicationSets with multiple generators:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-apps
spec:
  generators:
    # Matrix: combine clusters × apps
    - matrix:
        generators:
          - clusters: {}  # All registered clusters
          - git:
              repoURL: https://github.com/org/apps.git
              revision: HEAD
              directories:
                - path: apps/*
  template:
    metadata:
      name: '{{name}}-{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/apps.git
        path: '{{path}}'
      destination:
        server: '{{server}}'
        namespace: '{{path.basename}}'
```

This generates:
- 5 clusters × 30 apps = 150 Applications from ONE ApplicationSet
- Adding a cluster automatically deploys all apps
- Adding an app automatically deploys to all clusters
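Conceptually, the matrix generator is a cartesian product: every cluster parameter set is combined with every app parameter set, and the template is rendered once per combination. A simplified Python sketch (the parameter names are illustrative stand-ins for generator output):

```python
from itertools import product

# Simplified model of the matrix generator: cluster params × app params.

def matrix_generate(clusters: list[dict], apps: list[dict]) -> list[str]:
    """Render the Application name template for every combination."""
    return [
        f"{c['name']}-{a['basename']}"
        for c, a in product(clusters, apps)
    ]

clusters = [{"name": "prod-us"}, {"name": "prod-eu"}]
apps = [{"basename": "api"}, {"basename": "web"}, {"basename": "worker"}]
names = matrix_generate(clusters, apps)
print(len(names), names)
# → 6 ['prod-us-api', 'prod-us-web', 'prod-us-worker', 'prod-eu-api', 'prod-eu-web', 'prod-eu-worker']
```

With 5 clusters and 30 app directories, the same product yields the 150 Applications mentioned above.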
Question 6
An application is showing “OutOfSync” status even though the Git source hasn’t changed. What are the common causes and how do you debug?
Show Answer
Common causes:
- Defaulted fields: Kubernetes API adds defaults that weren’t in your manifest
- Mutations by controllers: Admission webhooks or operators modify resources
- Immutable fields: Some fields can’t be changed after creation
- Annotation drift: Timestamps or hash annotations added by other tools
Debug steps:
```bash
# 1. View the diff
argocd app diff my-app

# 2. Check what ArgoCD sees
argocd app get my-app --show-params

# 3. View raw manifests
argocd app manifests my-app --source live
argocd app manifests my-app --source git

# 4. Compare in the UI (ArgoCD shows a side-by-side diff)

# 5. If the drift is acceptable, ignore specific fields
```

Fix with `ignoreDifferences`:

```yaml
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas  # Ignore HPA-managed replicas
    - group: ""
      kind: Service
      jqPathExpressions:
        - .metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"]
```
Question 7
You need to prevent Team A from deploying to Team B’s namespaces while sharing a single ArgoCD instance. How do you configure this?
Show Answer
Use AppProjects for namespace isolation:
```yaml
# Team A project
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Team A applications

  # Can only deploy to team-a namespaces
  destinations:
    - namespace: team-a-*
      server: https://kubernetes.default.svc

  # Can only use team-a repos
  sourceRepos:
    - https://github.com/org/team-a-*

  # No cluster-wide resources
  clusterResourceWhitelist: []

  # Map OIDC group to project role
  roles:
    - name: developer
      policies:
        - p, proj:team-a:developer, applications, *, team-a/*, allow
      groups:
        - team-a-developers  # OIDC group
```

RBAC enforcement:

```yaml
# argocd-rbac-cm ConfigMap
data:
  policy.csv: |
    # Team A can only access the team-a project
    p, role:team-a-dev, applications, *, team-a/*, allow
    p, role:team-a-dev, logs, get, team-a/*, allow
    g, team-a-developers, role:team-a-dev

    # Default: deny
    p, role:default, *, *, *, deny
  policy.default: role:readonly
```
Question 8
Calculate the resource requirements for ArgoCD managing 500 applications with 20 Git repositories, syncing every 3 minutes.
Show Answer
Calculation approach:
```
ARGOCD RESOURCE SIZING
─────────────────────────────────────────────────────────────
Applications:              500
Git repos:                 20
Sync interval:             3 minutes
Average manifests per app: 10

API Server:
- Handles UI, CLI, API calls
- Memory: ~200MB base + 1MB per 100 apps = 200 + 5 = 205MB
- Replicas: 2 (HA) = 410MB total
- CPU: 500m per replica

Repo Server:
- Clones repos, renders manifests
- Memory: ~100MB base + 50MB per repo = 100 + 1000 = 1.1GB
- Clones are cached, but 20 active repos = significant
- Replicas: 2 (HA) = 2.2GB total
- CPU: 1 core per replica (manifest rendering is CPU-intensive)

Application Controller:
- Watches 500 apps, calculates diffs
- Memory: ~500MB base + 2MB per app = 500 + 1000 = 1.5GB
- Single instance (uses leader election)
- CPU: 2 cores (continuous reconciliation)

Redis:
- Caches repo contents, application state
- Memory: 512MB-1GB depending on manifest sizes
- Single instance (or Redis HA)

TOTAL ESTIMATE:
─────────────────────────────────────────────────────────────
api-server:   2 × (500m CPU, 256MB) = 1 core,  512MB
repo-server:  2 × (1 core,  1.5GB)  = 2 cores, 3GB
controller:   1 × (2 cores, 2GB)    = 2 cores, 2GB
redis:        1 × (200m CPU, 1GB)   = 200m,    1GB
─────────────────────────────────────────────────────────────
Total: ~5 cores, ~6.5GB memory

Plus buffer for spikes: 8 cores, 10GB memory recommended
```
Scaling tips:
- Increase repo-server replicas if manifest rendering is slow
- Use `--parallelism-limit` on the controller to prevent thundering herd
- Consider sharding the controller across clusters for >1000 apps
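The estimate above can be re-run for your own numbers with a small calculator. This sketch encodes the same rough per-component heuristics used in the worked example; they are ballpark figures from this module, not official sizing guidance:

```python
# Rough ArgoCD memory sizing heuristics from the worked example above.

def argocd_sizing(apps: int, repos: int) -> dict:
    """Return estimated memory in MB per component (2 replicas where noted)."""
    api_server = (200 + apps / 100) * 2   # ~200MB base + 1MB/100 apps, 2 replicas
    repo_server = (100 + 50 * repos) * 2  # ~100MB base + 50MB/repo, 2 replicas
    controller = 500 + 2 * apps           # ~500MB base + 2MB/app, single instance
    redis = 1024                          # cache, upper bound of 512MB-1GB
    return {
        "api_server_mb": api_server,
        "repo_server_mb": repo_server,
        "controller_mb": controller,
        "redis_mb": redis,
        "total_mb": api_server + repo_server + controller + redis,
    }

print(argocd_sizing(apps=500, repos=20))
```

For 500 apps and 20 repos this lands around 5GB of raw component memory; the module's recommended 10GB figure adds headroom for request rounding and spikes.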
Hands-On Exercise
Scenario: GitOps for a Multi-Environment Application
Deploy an application to staging and production with ArgoCD, using different configurations per environment.
```bash
# Create kind cluster
kind create cluster --name argocd-lab

# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for pods
kubectl -n argocd wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server --timeout=120s

# Get password
ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo "ArgoCD password: $ARGO_PWD"

# Port forward
kubectl -n argocd port-forward svc/argocd-server 8080:443 &
```
Create Git Repository Structure
```bash
# Create local directory structure
mkdir -p argocd-lab/{base,overlays/{staging,production},apps}

# Base kustomization
cat > argocd-lab/base/deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: app
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
EOF

cat > argocd-lab/base/service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo
  ports:
    - port: 80
EOF

cat > argocd-lab/base/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
EOF

# Staging overlay
cat > argocd-lab/overlays/staging/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
namePrefix: staging-
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 1
    target:
      kind: Deployment
      name: demo-app
EOF

# Production overlay
cat > argocd-lab/overlays/production/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
namePrefix: prod-
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
    target:
      kind: Deployment
      name: demo-app
EOF
```
Create ArgoCD Applications
Since we’re using local files, we’ll apply manifests directly:
```bash
# Create namespaces
kubectl create namespace staging
kubectl create namespace production

# Apply manifests
kubectl apply -k argocd-lab/overlays/staging/
kubectl apply -k argocd-lab/overlays/production/
```

For a real GitOps setup, create Application resources pointing to your Git repo:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/YOUR_ORG/argocd-lab.git
    targetRevision: HEAD
    path: overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
Verify Deployment
```bash
# Check staging (1 replica)
kubectl -n staging get pods

# Check production (3 replicas)
kubectl -n production get pods

# Access ArgoCD UI
open https://localhost:8080
# Login: admin / $ARGO_PWD
```
Success Criteria
- ArgoCD is running and accessible
- Can view applications in the UI
- Staging has 1 replica
- Production has 3 replicas
- Understand Application and Kustomize structure
Cleanup
```bash
kind delete cluster --name argocd-lab
rm -rf argocd-lab
```
Key Takeaways
Before moving on, ensure you can:
- Explain ArgoCD’s architecture: API Server, Repo Server, Application Controller
- Install ArgoCD and access the UI via port-forward or ingress
- Create Application CRs pointing to Git repos with Helm, Kustomize, or plain YAML
- Configure sync policies: `automated`, `prune`, and `selfHeal` with appropriate safeguards
- Use sync waves and hooks to control deployment order and run pre/post-sync jobs
- Implement App of Apps pattern for managing multiple applications
- Use ApplicationSets to generate applications from templates and generators
- Configure AppProjects and RBAC for multi-tenant isolation
- Troubleshoot sync failures: read diffs, check logs, use `ignoreDifferences`
- Roll back deployments using Git revert or the ArgoCD CLI
Next Module
Continue to Module 2.2: Argo Rollouts where we’ll implement progressive delivery with canary and blue-green deployments.
“The best deployment is the one you don’t have to think about. GitOps with ArgoCD makes that possible.”