
Module 3.5: Azure DNS & Traffic Manager

Complexity: [MEDIUM] | Time to Complete: 1.5h | Prerequisites: Module 3.2 (Virtual Networks)

After completing this module, you will be able to:

  • Configure Azure DNS zones with record sets for public-facing and private VNet-linked name resolution
  • Implement Traffic Manager profiles with priority, weighted, and performance routing for multi-region failover
  • Deploy Azure Private DNS zones for VNet-internal service discovery across peered virtual networks
  • Design DNS architectures combining Azure DNS, Traffic Manager, and Front Door for global traffic distribution

In October 2021, a global logistics company migrated their customer-facing portal from on-premises to Azure. They deployed the application across East US and West Europe regions for redundancy. On launch day, everything worked---until the East US deployment experienced a database connection pool exhaustion at peak hours. Instead of seamlessly routing users to the healthy West Europe deployment, all users saw errors. The problem was simple: they had configured Azure DNS with A records pointing directly to the East US public IP. There was no traffic routing layer to detect the failure and redirect traffic. Adding Azure Traffic Manager with health probes took 15 minutes to configure, but the 3-hour outage had already cost them their biggest customer---a shipping company that processed 40,000 packages per day through the portal. That single customer represented $2.4 million in annual revenue.

DNS is the invisible infrastructure that underpins every internet interaction. When it works, nobody thinks about it. When it fails, nothing works. In Azure, DNS is not just about resolving names to IP addresses---it is a critical component of high availability, traffic routing, and hybrid cloud architecture. Azure DNS handles public-facing domain resolution, Private DNS Zones handle name resolution within your virtual networks, and Traffic Manager uses DNS-based routing to distribute traffic across regions and endpoints.

In this module, you will learn how Azure DNS zones work for both public and private scenarios, how Traffic Manager routes traffic using different algorithms, and how Azure Front Door provides a modern alternative with layer-7 capabilities. By the end, you will understand how to design a DNS architecture that keeps your applications reachable even when entire regions fail.


Azure DNS allows you to host your DNS zones on Azure’s global anycast network of name servers. When you host your zone in Azure DNS, your DNS records are served from Microsoft’s worldwide network of DNS servers, providing low latency and high availability.

A DNS zone is a container for all the DNS records for a specific domain. When you create a zone for example.com in Azure DNS, Azure assigns four name servers (in the format ns1-XX.azure-dns.com, ns2-XX.azure-dns.net, ns3-XX.azure-dns.org, ns4-XX.azure-dns.info).

```shell
# Create a DNS zone
az network dns zone create \
  --resource-group myRG \
  --name example.com

# View the assigned name servers
az network dns zone show \
  --resource-group myRG \
  --name example.com \
  --query nameServers -o tsv
```

After creating the zone, you must update your domain registrar’s NS records to point to the Azure name servers. Until you do this, DNS queries for your domain will not reach Azure.
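Delegation is easy to verify from any machine. A quick check, assuming the example.com zone from above (the lookup may fail until the registrar change propagates, hence the `|| true`):

```shell
# Zone from the example above; substitute your own domain.
ZONE="example.com"

# Ask the public DNS hierarchy for the zone's NS records. Once delegation
# is complete, the answer should list the four azure-dns name servers.
nslookup -type=NS "$ZONE" || true
```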

```shell
# A record: Maps a name to an IPv4 address
az network dns record-set a add-record \
  --resource-group myRG \
  --zone-name example.com \
  --record-set-name www \
  --ipv4-address 20.50.100.150

# AAAA record: Maps a name to an IPv6 address
az network dns record-set aaaa add-record \
  --resource-group myRG \
  --zone-name example.com \
  --record-set-name www \
  --ipv6-address 2603:1030:800:5::1

# CNAME record: Maps a name to another name (alias)
az network dns record-set cname set-record \
  --resource-group myRG \
  --zone-name example.com \
  --record-set-name blog \
  --cname blog.wordpress.com

# MX record: Mail exchange
az network dns record-set mx add-record \
  --resource-group myRG \
  --zone-name example.com \
  --record-set-name "@" \
  --exchange mail.example.com \
  --preference 10

# TXT record: Arbitrary text (SPF, DKIM, verification)
az network dns record-set txt add-record \
  --resource-group myRG \
  --zone-name example.com \
  --record-set-name "@" \
  --value "v=spf1 include:spf.protection.outlook.com -all"

# List all records in a zone
az network dns record-set list \
  --resource-group myRG \
  --zone-name example.com -o table
```

Azure DNS supports alias records, which point directly to an Azure resource (like a Load Balancer, Traffic Manager profile, or CDN endpoint) instead of an IP address. The key advantage: when the resource’s IP changes, the DNS record updates automatically.

```shell
# Create an alias record pointing to a Load Balancer public IP
LB_PIP_ID=$(az network public-ip show -g myRG -n web-lb-pip --query id -o tsv)

az network dns record-set a create \
  --resource-group myRG \
  --zone-name example.com \
  --name app \
  --target-resource "$LB_PIP_ID"
```
```mermaid
flowchart TD
  subgraph Traditional [Traditional A Record]
    T_DNS[app.example.com] -->|A Record| T_IP[20.50.100.150]
    T_Note[Static IP: Fails if Load Balancer IP changes] -.-> T_IP
  end
  subgraph Alias [Alias Record]
    A_DNS[app.example.com] -->|Alias Record| A_Res[Azure Resource ID\nweb-lb-pip]
    A_Res -.->|Azure DNS automatically\nresolves current IP| A_IP[Current IP]
  end
```

Stop and think: Why does RFC 1034 prohibit CNAME records at the zone apex (e.g., example.com), and how does Azure DNS bypass this limitation with Alias records under the hood? What type of DNS record does the client actually receive when resolving an Alias at the apex?


Private DNS Zones provide name resolution within your Virtual Networks without exposing records to the public internet. This is essential for internal service discovery---your web servers need to find your database by name (db.internal.example.com), not by memorizing IP addresses that change when you redeploy.

```mermaid
flowchart TD
  subgraph Zone [Private DNS Zone: internal.example.com]
    Records[db → 10.0.2.10\ncache → 10.0.2.20\napi → 10.0.1.15]
  end
  Hub[hub-vnet] -- "Linked (Auto-registration ON)" --> Zone
  Spoke1[spoke1-vnet] -- "Linked (Resolution ONLY)" --> Zone
  Spoke2[spoke2-vnet] -- "Linked (Resolution ONLY)" --> Zone
  HubVMs[VMs auto-register\nDNS names] -.-> Hub
  SpokeVMs1[VMs can resolve\nbut do not register] -.-> Spoke1
  SpokeVMs2[VMs can resolve\nbut do not register] -.-> Spoke2
```
```shell
# Create a private DNS zone
az network private-dns zone create \
  --resource-group myRG \
  --name internal.example.com

# Link the private DNS zone to a VNet (with auto-registration)
az network private-dns link vnet create \
  --resource-group myRG \
  --zone-name internal.example.com \
  --name hub-link \
  --virtual-network hub-vnet \
  --registration-enabled true   # VMs in this VNet auto-register

# Link to spoke VNets (resolution only, no auto-registration)
az network private-dns link vnet create \
  --resource-group myRG \
  --zone-name internal.example.com \
  --name spoke1-link \
  --virtual-network spoke1-vnet \
  --registration-enabled false

# Manually add a record
az network private-dns record-set a add-record \
  --resource-group myRG \
  --zone-name internal.example.com \
  --record-set-name db \
  --ipv4-address 10.0.2.10

# List records in the private zone
az network private-dns record-set list \
  --resource-group myRG \
  --zone-name internal.example.com -o table
```

Auto-registration is a powerful feature: when enabled on a VNet link, every VM created in that VNet automatically gets a DNS record in the private zone. When the VM is deleted, the record is automatically removed. This eliminates the need to manually manage internal DNS records.
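A quick way to confirm auto-registration is working is to list the zone's A records and check their registration flag. A sketch, assuming the internal.example.com zone and resource group from the examples above (`|| true` lets it run without an Azure login):

```shell
ZONE="internal.example.com"   # zone from the examples above

# Auto-registered VM records show AutoRegistered=true; manually created
# records (like db) show false.
az network private-dns record-set a list \
  --resource-group myRG \
  --zone-name "$ZONE" \
  --query "[].{Name:name, IP:aRecords[0].ipv4Address, AutoRegistered:isAutoRegistered}" \
  -o table || true
```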

Pause and predict: You have a Private DNS Zone linked to a VNet with auto-registration enabled. You deploy a VM named database-primary. Later, an administrator logs into the VM’s guest OS (Windows or Linux) and manually changes its IP address. What happens to the DNS record in the Private DNS Zone, and why?

Private Endpoints are a mechanism to access Azure PaaS services (Storage, SQL, Key Vault, etc.) over a private IP address in your VNet instead of over the public internet. When you create a private endpoint, you need a Private DNS Zone to resolve the service’s FQDN to the private IP.

```shell
# Example: Private endpoint for a storage account
# Step 1: Create the private endpoint
az network private-endpoint create \
  --resource-group myRG \
  --name storage-pe \
  --vnet-name hub-vnet \
  --subnet private-endpoints \
  --private-connection-resource-id "$STORAGE_ACCOUNT_ID" \
  --group-id blob \
  --connection-name storage-connection

# Step 2: Create the private DNS zone for blob storage
az network private-dns zone create \
  --resource-group myRG \
  --name privatelink.blob.core.windows.net

# Step 3: Link the DNS zone to your VNet
az network private-dns link vnet create \
  --resource-group myRG \
  --zone-name privatelink.blob.core.windows.net \
  --name hub-dns-link \
  --virtual-network hub-vnet \
  --registration-enabled false

# Step 4: Create DNS zone group (auto-manages DNS records)
az network private-endpoint dns-zone-group create \
  --resource-group myRG \
  --endpoint-name storage-pe \
  --name default \
  --private-dns-zone "privatelink.blob.core.windows.net" \
  --zone-name blob
```

After this setup, when a VM in hub-vnet resolves yourstorage.blob.core.windows.net, the response is the private IP of the private endpoint (e.g., 10.0.5.4) instead of the public IP. Traffic stays entirely within Azure’s backbone.
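You can confirm which path a name takes with a simple lookup from inside and outside the VNet. A sketch, using the hypothetical storage account name from the paragraph above (`|| true` keeps it harmless where the name does not exist):

```shell
# Hypothetical storage account FQDN from the example above.
FQDN="yourstorage.blob.core.windows.net"

# From a VM inside hub-vnet this should resolve through a CNAME to
# yourstorage.privatelink.blob.core.windows.net and return the private IP
# (e.g., 10.0.5.4). From outside the VNet, the same name still resolves
# to a public IP, which is how you tell whether the zone link is in effect.
nslookup "$FQDN" || true
```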


Azure Traffic Manager: DNS-Based Global Load Balancing


Traffic Manager is a DNS-based traffic routing service that distributes traffic across global endpoints. It operates purely at the DNS resolution step: when a client resolves your domain, Traffic Manager returns the IP of the most appropriate endpoint based on the routing method you configure.

```mermaid
sequenceDiagram
  actor Client
  participant TM as Traffic Manager
  participant EUS as East US (20.50.100.1)
  Client->>TM: 1. DNS Query (app.trafficmanager.net)
  Note over TM: 2. Evaluates:<br/>- Health probes<br/>- Routing method<br/>- Priority
  TM-->>Client: 3. Returns IP of best endpoint (20.50.100.1)
  Client->>EUS: 4. HTTP/TCP traffic (Direct connection)
  Note over Client,EUS: Traffic Manager is NOT in the data path
```

Critical insight: Traffic Manager is not a proxy or a load balancer. It only participates in the DNS resolution step. After that, the client connects directly to the endpoint. This means Traffic Manager cannot see HTTP headers, cannot terminate SSL, and cannot cache content. For those features, you need Azure Front Door.
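You can see this behavior directly: resolving a Traffic Manager name returns an ordinary A record with a short TTL, and nothing more. A sketch, using the app-kubedojo profile name from the examples in this module (`|| true` covers the case where the name does not exist):

```shell
# Profile DNS name from the examples in this module.
TM_FQDN="app-kubedojo.trafficmanager.net"

# Each query returns the IP of whichever endpoint is currently "best".
# Repeat the query after a failover and the answer changes; the client's
# HTTP traffic never touches Traffic Manager itself.
nslookup "$TM_FQDN" || true
```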

| Method | How It Routes | Best For |
| --- | --- | --- |
| Priority | Always sends to highest-priority healthy endpoint | Active/passive failover |
| Weighted | Distributes traffic by weight (e.g., 80/20) | Canary deployments, A/B testing |
| Performance | Routes to the closest endpoint (by latency) | Global apps needing low latency |
| Geographic | Routes based on the client’s geographic location | Data sovereignty, regional compliance |
| MultiValue | Returns multiple healthy IPs (client chooses) | Increase availability with client-side retry |
| Subnet | Routes based on client’s source IP range | VIP customers, partner-specific endpoints |

Stop and think: A company uses Traffic Manager with Geographic routing to restrict data access: EU users are routed to Frankfurt, US users to Virginia. If the Virginia region suffers a total outage, what happens to the US traffic? Does it fail over to Frankfurt, or drop entirely?

```shell
# Create a Traffic Manager profile with Priority routing
az network traffic-manager profile create \
  --resource-group myRG \
  --name app-tm-profile \
  --routing-method Priority \
  --unique-dns-name app-kubedojo \
  --ttl 30 \
  --protocol HTTPS \
  --port 443 \
  --path "/health" \
  --interval 10 \
  --timeout 5 \
  --max-failures 3

# Add primary endpoint (East US)
az network traffic-manager endpoint create \
  --resource-group myRG \
  --profile-name app-tm-profile \
  --name eastus-endpoint \
  --type azureEndpoints \
  --target-resource-id "$EASTUS_PIP_ID" \
  --priority 1 \
  --endpoint-status Enabled

# Add secondary endpoint (West Europe)
az network traffic-manager endpoint create \
  --resource-group myRG \
  --profile-name app-tm-profile \
  --name westeurope-endpoint \
  --type azureEndpoints \
  --target-resource-id "$WESTEUROPE_PIP_ID" \
  --priority 2 \
  --endpoint-status Enabled

# Check endpoint health status
az network traffic-manager endpoint list \
  --resource-group myRG \
  --profile-name app-tm-profile \
  --type azureEndpoints \
  --query '[].{Name:name, Status:endpointStatus, Monitor:endpointMonitorStatus, Priority:priority}' -o table

# Test DNS resolution
nslookup app-kubedojo.trafficmanager.net
```

Traffic Manager with Weighted Routing (Canary Deployments)

```shell
# Create a profile for canary deployment
az network traffic-manager profile create \
  --resource-group myRG \
  --name canary-tm-profile \
  --routing-method Weighted \
  --unique-dns-name canary-kubedojo \
  --ttl 10 \
  --protocol HTTPS \
  --port 443 \
  --path "/health"

# Stable version gets 90% of traffic
az network traffic-manager endpoint create \
  --resource-group myRG \
  --profile-name canary-tm-profile \
  --name stable \
  --type externalEndpoints \
  --target stable.example.com \
  --weight 90

# Canary version gets 10% of traffic
az network traffic-manager endpoint create \
  --resource-group myRG \
  --profile-name canary-tm-profile \
  --name canary \
  --type externalEndpoints \
  --target canary.example.com \
  --weight 10
```
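To promote the canary you do not recreate anything: you update the weights in place, and clients pick up the new split as the 10-second TTL expires. A sketch against the canary-tm-profile above, shifting to a hypothetical 75/25 split (`|| true` lets it run without an Azure login):

```shell
# Hypothetical next step in the rollout: canary takes 25% of traffic.
NEW_CANARY_WEIGHT=25
NEW_STABLE_WEIGHT=$((100 - NEW_CANARY_WEIGHT))

az network traffic-manager endpoint update \
  --resource-group myRG \
  --profile-name canary-tm-profile \
  --name canary \
  --type externalEndpoints \
  --weight "$NEW_CANARY_WEIGHT" || true

az network traffic-manager endpoint update \
  --resource-group myRG \
  --profile-name canary-tm-profile \
  --name stable \
  --type externalEndpoints \
  --weight "$NEW_STABLE_WEIGHT" || true
```

Repeating this with progressively larger canary weights (50, 90, 100) completes the rollout without any DNS record changes.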

War Story: A retail company used Traffic Manager with Performance routing for their global storefront. During a product launch, their East US deployment became overloaded. Traffic Manager’s health probes detected the degradation and automatically started routing new DNS queries to West Europe. The failover happened transparently---customers experienced a brief increase in latency (transatlantic vs same-region) but zero downtime. The engineering team had 45 minutes of breathing room to scale up East US before most users even noticed the region switch.


Azure Front Door is a global, scalable entry point for web applications. Unlike Traffic Manager (DNS only), Front Door operates at Layer 7 (HTTP/HTTPS) and sits in the data path. It acts as a reverse proxy, providing SSL termination, caching, WAF, and intelligent routing.

```mermaid
flowchart TD
  subgraph TM [Traffic Manager - DNS Layer]
    direction LR
    C1[Client] -- "1. DNS Query" --> T[Traffic Manager]
    T -- "2. Returns IP" --> C1
    C1 -- "3. Direct Connect\n(Not in data path)" --> O1[Origin Server]
  end
  subgraph FD [Azure Front Door - Layer 7]
    direction LR
    C2[Client] -- "HTTPS" --> F[Front Door PoP Edge\n- SSL Offload\n- WAF\n- Caching\n- Routing]
    F -- "HTTPS\n(In data path)" --> O2[Origin Server]
  end
```
| Feature | Traffic Manager | Azure Front Door |
| --- | --- | --- |
| Layer | DNS (returns IP) | HTTP/HTTPS (reverse proxy) |
| In data path | No | Yes |
| SSL termination | No | Yes |
| Caching | No | Yes (edge caching) |
| WAF | No | Yes (built-in) |
| URL path routing | No | Yes |
| Session affinity | No (DNS round-robin) | Yes (cookie-based) |
| Health probes | TCP, HTTP, HTTPS | HTTP, HTTPS (with custom headers) |
| Protocol support | Any (TCP/UDP/HTTP) | HTTP/HTTPS only |
| Cost | ~$0.36/million queries | ~$35/month + per-request |
| Failover speed | DNS TTL dependent (30-300s) | Near-instant (<30s) |
```shell
# Create an Azure Front Door profile (Standard tier)
az afd profile create \
  --resource-group myRG \
  --profile-name app-frontdoor \
  --sku Standard_AzureFrontDoor

# Add an endpoint
az afd endpoint create \
  --resource-group myRG \
  --profile-name app-frontdoor \
  --endpoint-name app-endpoint \
  --enabled-state Enabled

# Add an origin group (backend pool)
az afd origin-group create \
  --resource-group myRG \
  --profile-name app-frontdoor \
  --origin-group-name app-origins \
  --probe-request-type GET \
  --probe-protocol Https \
  --probe-path "/health" \
  --probe-interval-in-seconds 10 \
  --sample-size 4 \
  --successful-samples-required 3

# Add origins (backends)
az afd origin create \
  --resource-group myRG \
  --profile-name app-frontdoor \
  --origin-group-name app-origins \
  --origin-name eastus-origin \
  --host-name eastus-app.azurewebsites.net \
  --origin-host-header eastus-app.azurewebsites.net \
  --http-port 80 \
  --https-port 443 \
  --priority 1 \
  --weight 1000

# Add a route
az afd route create \
  --resource-group myRG \
  --profile-name app-frontdoor \
  --endpoint-name app-endpoint \
  --route-name default-route \
  --origin-group app-origins \
  --supported-protocols Https \
  --patterns-to-match "/*" \
  --forwarding-protocol HttpsOnly
```

Use Traffic Manager when:

  • You need non-HTTP routing (TCP, UDP services)
  • You want the simplest, cheapest global routing
  • Your endpoints handle SSL themselves
  • You need Geographic routing for compliance

Use Azure Front Door when:

  • You need SSL termination at the edge
  • You want a Web Application Firewall (WAF)
  • You need edge caching for static content
  • You want sub-second failover (not DNS-TTL dependent)
  • You need URL-based routing (e.g., /api/* to one backend, /static/* to another)
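As a sketch of that last point, a second route on the same Front Door profile can send API calls to their own backend pool. The api-origins origin group is hypothetical; the profile and endpoint names follow the earlier example (`|| true` lets it run without an Azure login):

```shell
# Hypothetical path pattern for a dedicated API backend pool.
PATTERN="/api/*"

az afd route create \
  --resource-group myRG \
  --profile-name app-frontdoor \
  --endpoint-name app-endpoint \
  --route-name api-route \
  --origin-group api-origins \
  --supported-protocols Https \
  --patterns-to-match "$PATTERN" \
  --forwarding-protocol HttpsOnly || true
```

Front Door matches the most specific pattern, so /api/* requests go to api-origins while the existing /* route continues to serve everything else.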

  1. Azure DNS hosts over 100 million DNS zones as of 2024, making it one of the largest authoritative DNS providers in the world. Azure DNS uses anycast networking, meaning queries are automatically routed to the closest DNS server. The result is typical query latency under 20 milliseconds from anywhere on the planet.

  2. Traffic Manager health probes come from specific well-known IP ranges published by Microsoft. If your backend has IP-based firewall rules, you must whitelist these IPs or your health probes will fail and Traffic Manager will mark your endpoint as degraded. The IP ranges are published in the Azure IP Ranges JSON file, under the AzureTrafficManager service tag.

  3. Azure Front Door has over 192 edge locations (Points of Presence) across 109 metro areas worldwide as of 2025. When a user in Tokyo accesses your app through Front Door, the TLS handshake terminates at a Tokyo PoP. This reduces the round-trip time for the SSL negotiation from ~200ms (to a US backend) to ~5ms (to a local PoP). The PoP then maintains a persistent, optimized connection to your origin backend.

  4. Private DNS Zone auto-registration has a limit of one registration-enabled link per VNet. A VNet can be linked to multiple Private DNS Zones for resolution, but only one zone can have auto-registration enabled. This prevents conflicts where multiple zones try to register the same VM name. If you need records in multiple zones, use one zone for auto-registration and manually create records in the others.


| Mistake | Why It Happens | How to Fix It |
| --- | --- | --- |
| Forgetting to update NS records at the domain registrar after creating an Azure DNS zone | Azure creates the zone and records, but has no authority over the domain until NS records are delegated | After creating the zone, copy the four Azure NS records and update them at your domain registrar. Verify with `nslookup -type=NS example.com`. |
| Setting Traffic Manager TTL too high (300s default) | Higher TTL reduces DNS query costs | For failover scenarios, set TTL to 10-30 seconds. High TTL means clients cache stale IPs and do not fail over for minutes after an endpoint goes down. |
| Using Traffic Manager when Front Door is more appropriate | Traffic Manager is simpler and cheaper to set up | If you need SSL termination, WAF, caching, or sub-second failover, Front Door is worth the extra cost. Traffic Manager’s failover speed is limited by DNS TTL. |
| Not linking Private DNS Zones to all VNets that need resolution | Only the initial VNet is linked during creation | Every VNet that needs to resolve private DNS names must be explicitly linked to the zone. Forgetting a spoke VNet means VMs in that spoke cannot resolve internal names. |
| Using CNAME records at the zone apex (e.g., example.com) | RFC 1034 prohibits CNAME at the zone apex, but teams need it for services like Front Door | Use Azure DNS alias records instead. Alias records can point to Azure resources at the zone apex without violating the RFC. |
| Not configuring health probes on Traffic Manager endpoints | Endpoints default to “Enabled”, which means Traffic Manager assumes they are healthy | Always configure health probes with a meaningful path (like /health) that checks actual application readiness, not just that the server is responding. |
| Ignoring the DNS propagation delay when making changes | DNS changes appear instant in the portal | Changes propagate to the Azure DNS servers within 60 seconds, but clients and intermediate DNS resolvers may cache the old record for up to the TTL duration. Plan maintenance windows accordingly. |
| Creating separate private DNS zones per VNet instead of shared zones | Teams independently create zones with the same name | Use centralized Private DNS Zones linked to all VNets. If each team creates their own internal.company.com zone, records are fragmented and inconsistent. |

1. Scenario: Your team is hosting an e-commerce platform behind an Azure Public Load Balancer. The security team mandates that the Load Balancer must be recreated monthly using infrastructure-as-code to ensure zero configuration drift. You need to map the apex domain (`shop.com`) to this Load Balancer. Why must you use an Azure DNS alias record instead of a standard A record or CNAME?

A standard A record requires a static IP; recreating the Load Balancer would assign a new public IP, causing downtime until the A record is manually updated. A CNAME record cannot be used at the zone apex (shop.com) due to RFC constraints. An Azure DNS alias record solves both problems: it maps directly to the Load Balancer’s Azure Resource ID, so it automatically tracks IP changes without manual intervention, and it is fully supported at the zone apex. When queried, Azure dynamically returns the current IP as a standard A record.

2. Scenario: You are designing a hub-and-spoke network architecture with one hub VNet (containing shared databases) and two spoke VNets (containing web APIs). You create a Private DNS Zone `internal.corp` to resolve internal hostnames. If you enable auto-registration for the hub VNet, how should you configure the links for the spoke VNets, and why?

You must link the Private DNS Zone to the spoke VNets with auto-registration disabled (resolution-only). Azure only allows one VNet to have auto-registration enabled per Private DNS Zone to prevent naming conflicts. By enabling auto-registration on the hub, the shared databases automatically register their DNS records. The resolution-only links on the spokes ensure the web APIs can successfully look up those database hostnames without attempting to register their own potentially conflicting names into the shared zone.

3. Scenario: During a Black Friday sale, your primary East US application crashes. Traffic Manager is configured with Priority routing (East US is Priority 1, West Europe is Priority 2) and a DNS TTL of 5 minutes (300 seconds). The health probe interval is 30 seconds. A customer in New York refreshed their browser exactly 10 seconds before the crash. How long might this customer experience downtime before being routed to West Europe, and why?

The customer could experience close to five minutes of downtime. Two clocks run in parallel from the moment of the crash. First, Traffic Manager must detect the failure, which takes up to 90 seconds (3 failed probes at 30-second intervals) before it stops handing out the East US IP. Second, because the DNS TTL is 300 seconds and the customer resolved the name only 10 seconds before the crash, their local machine or ISP resolver caches the stale East US IP for another 290 seconds (4 minutes and 50 seconds). The longer of the two dominates: roughly 4 minutes 50 seconds pass before a fresh query returns West Europe. To minimize this, configure a lower DNS TTL (e.g., 30 seconds) and faster probe intervals.
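The same arithmetic works for any probe and TTL settings: the client is down until both failure detection and its cached answer have expired, so the worst case is whichever clock runs longer. A quick sketch with the scenario's numbers:

```shell
# Worst-case failover delay for a client that resolved the name
# shortly before the outage (values from the scenario above).
TTL=300                   # DNS TTL in seconds
PROBE_INTERVAL=30         # health probe interval
MAX_FAILURES=3            # tolerated failures before rerouting
SECONDS_SINCE_RESOLVE=10  # how long before the crash the client resolved

DETECTION=$((PROBE_INTERVAL * MAX_FAILURES))       # 90s until TM reroutes
CACHE_REMAINING=$((TTL - SECONDS_SINCE_RESOLVE))   # 290s of stale cache

# Both clocks run in parallel; the client is down until the longer expires.
DOWNTIME=$((DETECTION > CACHE_REMAINING ? DETECTION : CACHE_REMAINING))

echo "Worst-case downtime: ${DOWNTIME}s"   # prints "Worst-case downtime: 290s"
```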

4. Scenario: A financial startup is launching a global trading platform. They need to ensure that European traffic stays in Europe, all connections enforce TLS 1.3, static assets like charts are cached at edge locations, and any malicious SQL injection attempts are blocked before reaching the application servers. Why is Traffic Manager insufficient for this architecture, and what service must they use instead?

Traffic Manager operates strictly at the DNS resolution layer and never sits in the data path, meaning it cannot inspect or modify HTTP traffic. It cannot terminate TLS, cache content, or provide Web Application Firewall (WAF) protection against SQL injection. The startup must use Azure Front Door. Front Door acts as a Layer 7 global reverse proxy in the data path, terminating TLS at the edge, caching static assets at local Points of Presence (PoPs), and inspecting traffic with its built-in WAF before forwarding it to the backend.

5. Scenario: You provisioned an Azure SQL Database and secured it with a Private Endpoint, giving it an IP of `10.0.1.4` in your VNet. However, when your application tries to connect using `myserver.database.windows.net`, the connection times out because it's still trying to route over the public internet. What missing component is causing this, and how does it fix the problem?

The missing component is the integration between the Private Endpoint and an Azure Private DNS Zone. By default, the public FQDN of the Azure SQL Database (myserver.database.windows.net) continues to resolve to its public IP address on the internet. You must create a Private DNS Zone (e.g., privatelink.database.windows.net), link it to your VNet, and configure the Private Endpoint’s DNS Zone Group. This overrides the public DNS resolution for that specific FQDN within your VNet, seamlessly returning the private IP 10.0.1.4 so the application connects securely over the internal backbone.

6. Scenario: A gaming company uses Traffic Manager with Performance routing to connect players to the lowest-latency game server. After a server in Tokyo goes offline, players in Japan complain they cannot connect for several minutes, even though a backup server in Seoul is available. You investigate and find the profile has a TTL of 300 seconds. What two specific configuration changes must you make to guarantee failover happens in under 60 seconds?

First, you must reduce the DNS TTL from 300 seconds to a much lower value, such as 10 or 30 seconds, so client machines and ISP resolvers expire the stale IP address faster. Second, you must optimize the endpoint monitoring settings by reducing the probe interval (e.g., from 30 seconds to 10 seconds) and potentially lowering the tolerated number of failures (e.g., from 3 to 2). By combining a short TTL with aggressive health probing, Traffic Manager detects the Tokyo server failure faster and clients query for the new Seoul IP address almost immediately.


Hands-On Exercise: Public DNS Zone with Traffic Manager Failover


In this exercise, you will create a public DNS zone, set up a Traffic Manager profile with Priority routing and health probes, and simulate a failover.

Prerequisites: Azure CLI installed and authenticated. You do not need a real domain for this exercise---we will work entirely within the trafficmanager.net namespace.

Task 1: Create the Resource Group and Two Simulated Endpoints


We will use Azure Container Instances as lightweight web servers to act as our “regional endpoints.”

```shell
RG="kubedojo-dns-lab"
LOCATION_PRIMARY="eastus2"
LOCATION_SECONDARY="westeurope"

az group create --name "$RG" --location "$LOCATION_PRIMARY"

# Primary endpoint: a simple web server in East US 2
az container create \
  --resource-group "$RG" \
  --name primary-web \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label "kubedojo-primary-$(openssl rand -hex 4)" \
  --location "$LOCATION_PRIMARY" \
  --ports 80

# Secondary endpoint: a simple web server in West Europe
az container create \
  --resource-group "$RG" \
  --name secondary-web \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label "kubedojo-secondary-$(openssl rand -hex 4)" \
  --location "$LOCATION_SECONDARY" \
  --ports 80

# Get their FQDNs
PRIMARY_FQDN=$(az container show -g "$RG" -n primary-web --query ipAddress.fqdn -o tsv)
SECONDARY_FQDN=$(az container show -g "$RG" -n secondary-web --query ipAddress.fqdn -o tsv)
echo "Primary: $PRIMARY_FQDN"
echo "Secondary: $SECONDARY_FQDN"
```
Verify Task 1
```shell
curl -s "http://$PRIMARY_FQDN" | head -5
curl -s "http://$SECONDARY_FQDN" | head -5
```

Both should return HTML content.

Task 2: Create the Traffic Manager Profile

```shell
TM_DNS="kubedojo-tm-$(openssl rand -hex 4)"

az network traffic-manager profile create \
  --resource-group "$RG" \
  --name app-tm \
  --routing-method Priority \
  --unique-dns-name "$TM_DNS" \
  --ttl 10 \
  --protocol HTTP \
  --port 80 \
  --path "/" \
  --interval 10 \
  --timeout 5 \
  --max-failures 2

echo "Traffic Manager DNS: ${TM_DNS}.trafficmanager.net"
```
Verify Task 2
```shell
az network traffic-manager profile show -g "$RG" -n app-tm \
  --query '{DNS:dnsConfig.fqdn, Routing:trafficRoutingMethod, TTL:dnsConfig.ttl}' -o table
```
Task 3: Add the Endpoints

```shell
# Add primary endpoint (priority 1)
az network traffic-manager endpoint create \
  --resource-group "$RG" \
  --profile-name app-tm \
  --name primary \
  --type externalEndpoints \
  --target "$PRIMARY_FQDN" \
  --priority 1 \
  --endpoint-status Enabled

# Add secondary endpoint (priority 2)
az network traffic-manager endpoint create \
  --resource-group "$RG" \
  --profile-name app-tm \
  --name secondary \
  --type externalEndpoints \
  --target "$SECONDARY_FQDN" \
  --priority 2 \
  --endpoint-status Enabled
```
Verify Task 3
```shell
az network traffic-manager endpoint list -g "$RG" --profile-name app-tm \
  --type externalEndpoints \
  --query '[].{Name:name, Target:target, Priority:priority, MonitorStatus:endpointMonitorStatus}' -o table
```

Both endpoints should show with their respective priorities. MonitorStatus may take a minute to populate.

Task 4: Test Normal Operation

```shell
# Resolve the Traffic Manager DNS name
nslookup "${TM_DNS}.trafficmanager.net"

# Access the app through Traffic Manager
curl -s "http://${TM_DNS}.trafficmanager.net" | head -5
# You should see the primary endpoint's response
```
Verify Task 4

The nslookup should resolve to the primary endpoint’s IP address. The curl should return the primary web server’s content. All traffic goes to priority 1 (primary) because both endpoints are healthy.

Task 5: Simulate a Failover

```shell
# Disable the primary endpoint (simulating a regional outage)
az network traffic-manager endpoint update \
  --resource-group "$RG" \
  --profile-name app-tm \
  --name primary \
  --type externalEndpoints \
  --endpoint-status Disabled

# Wait for the change to propagate (TTL is 10 seconds)
echo "Waiting 15 seconds for DNS propagation..."
sleep 15

# Verify Traffic Manager now routes to secondary
nslookup "${TM_DNS}.trafficmanager.net"
curl -s "http://${TM_DNS}.trafficmanager.net" | head -5

# Check endpoint status
az network traffic-manager endpoint list -g "$RG" --profile-name app-tm \
  --type externalEndpoints \
  --query '[].{Name:name, Status:endpointStatus, MonitorStatus:endpointMonitorStatus}' -o table
```
Verify Task 5

After disabling the primary endpoint, the DNS resolution should now return the secondary endpoint’s IP address. The curl should return the secondary web server’s content. The endpoint list should show primary as Disabled and secondary as Enabled with Online monitor status.

Task 6: Fail Back to Primary

```shell
# Re-enable the primary endpoint
az network traffic-manager endpoint update \
  --resource-group "$RG" \
  --profile-name app-tm \
  --name primary \
  --type externalEndpoints \
  --endpoint-status Enabled

# Wait for propagation
sleep 15

# Verify traffic returns to primary
nslookup "${TM_DNS}.trafficmanager.net"
curl -s "http://${TM_DNS}.trafficmanager.net" | head -5
```
Verify Task 6

Traffic should return to the primary endpoint (priority 1) once it is re-enabled and health probes confirm it is healthy. This demonstrates the complete failover and failback cycle.

Task 7: Clean Up

```shell
az group delete --name "$RG" --yes --no-wait
```
  • Two web servers deployed in different Azure regions
  • Traffic Manager profile created with Priority routing and 10-second TTL
  • Both endpoints added with correct priorities
  • Normal operation confirmed (traffic routes to primary)
  • Failover verified (disabling primary routes traffic to secondary)
  • Failback verified (re-enabling primary restores original routing)

Module 3.6: Azure Container Registry (ACR) --- Learn how to store, manage, and secure your container images with Azure Container Registry, including authentication, ACR Tasks for automated builds, and geo-replication.