
Global Service Deployment: Achieving Sub-20ms Latency with Geo-DNS and CDN

A user in Tokyo clicks your website. In New York, another user does the same. Both expect the page to load in under 2 seconds. Without proper global infrastructure, the Tokyo user might wait 5+ seconds as data crosses the Pacific while the New York user enjoys a fast experience. This geographic disparity in performance is unacceptable in today's competitive landscape.

Global service deployment is no longer optional; it's essential for maintaining user engagement and competitive advantage. Users in every region expect fast, reliable access. Achieving this requires a strategic combination of Geo-DNS for intelligent routing and CDN for distributed content delivery.

In this guide, we’ll explore how to architect a global infrastructure that delivers consistent sub-20ms latency to users worldwide.

Understanding the Challenge

The Physics of Latency

Network latency is fundamentally limited by physics. Light travels at roughly 300,000 km/s in a vacuum but only about 200,000 km/s through fiber optic cable (the glass's refractive index slows it down), and real-world latency is higher still due to:

  • Geographic distance: Tokyo to New York is ~10,800 km
  • Network routing: Packets don’t travel in straight lines
  • Processing delays: Routers, switches, and servers add latency
  • Congestion: Network traffic causes additional delays

Latency estimates by distance:

  • Same city: 1-5ms
  • Same continent: 10-50ms
  • Intercontinental: 100-300ms
  • Satellite: 500-700ms
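
These floors follow directly from the physics above. A quick back-of-the-envelope sketch, using the speed of light in fiber (~200,000 km/s) and the Tokyo-to-New York distance from earlier:

```python
# Speed of light in fiber: ~300,000 km/s in vacuum divided by the
# glass's refractive index of ~1.5.
SPEED_IN_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    one_way_seconds = distance_km / SPEED_IN_FIBER_KM_S
    return 2 * one_way_seconds * 1000

# Tokyo -> New York (~10,800 km): best-case RTT is ~108 ms,
# before any routing detours, queuing, or server processing.
print(min_rtt_ms(10_800))
```

Even a perfect cable cannot beat this bound, which is why serving users from a nearby region is the only way to reach single-digit or low-double-digit latencies.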

The Performance Impact

Research shows that every 100ms of additional latency reduces conversion rates by 1-7% depending on industry. A user in Tokyo experiencing 300ms latency instead of 20ms is significantly more likely to abandon your site.

Geo-DNS: Intelligent Request Routing

Geo-DNS routes users to the nearest or most appropriate server based on their geographic location. Instead of a single IP address, Geo-DNS returns different IPs depending on where the request originates.

How Geo-DNS Works

User in Tokyo queries example.com
    ↓
Geo-DNS resolver detects location (Tokyo)
    ↓
Returns IP of Tokyo data center
    ↓
User connects to nearest server

User in London queries example.com
    ↓
Geo-DNS resolver detects location (London)
    ↓
Returns IP of London data center
    ↓
User connects to nearest server
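
The routing decision itself is simple to approximate. The sketch below picks the closest region by great-circle distance, the same idea a Geo-DNS resolver applies to the client's (or resolver's) location; the datacenter names and coordinates are made-up examples:

```python
import math

# Hypothetical regional datacenters: name -> (latitude, longitude)
DATACENTERS = {
    "tokyo": (35.68, 139.69),
    "london": (51.51, -0.13),
    "virginia": (38.95, -77.45),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_datacenter(client_latlon):
    """Return the name of the datacenter closest to the client."""
    return min(DATACENTERS,
               key=lambda dc: haversine_km(client_latlon, DATACENTERS[dc]))

print(nearest_datacenter((35.0, 135.0)))  # a user near Osaka -> "tokyo"
```

Real Geo-DNS providers add health checks and policy overrides on top of this distance (or measured-latency) calculation.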

AWS Route 53

Route 53 offers multiple routing policies:

# Geolocation routing: Route based on geographic location
Type: AWS::Route53::RecordSet
Properties:
  HostedZoneId: Z1234567890ABC
  Name: example.com
  Type: A
  SetIdentifier: Tokyo
  GeoLocation:
    CountryCode: JP
  AliasTarget:
    HostedZoneId: Z1234567890ABC
    DNSName: tokyo.example.com
    EvaluateTargetHealth: true

# Latency-based routing: Route to lowest latency endpoint
Type: AWS::Route53::RecordSet
Properties:
  HostedZoneId: Z1234567890ABC
  Name: example.com
  Type: A
  SetIdentifier: Tokyo
  Region: ap-northeast-1
  TTL: 60
  ResourceRecords:
    - 203.0.113.1
  HealthCheckId: abc123

Cloudflare DNS

Cloudflare uses a global anycast network for DNS resolution:

# Configure geographic routing via Cloudflare API
curl -X POST "https://api.cloudflare.com/client/v4/zones/{zone_id}/load_balancers" \
  -H "Authorization: Bearer {token}" \
  -d '{
    "name": "example.com",
    "description": "Global load balancer",
    "ttl": 30,
    "default_pools": ["pool_tokyo"],
    "region_pools": {
      "WNAM": ["pool_us_west"],
      "ENAM": ["pool_us_east"],
      "WEUR": ["pool_eu_west"],
      "EASIA": ["pool_asia"]
    }
  }'

Google Cloud DNS

Google Cloud DNS provides geographic routing with DNSSEC:

# Google Cloud DNS geographic routing
apiVersion: dns.cnrm.cloud.google.com/v1beta1
kind: DNSRecordSet
metadata:
  name: example-com
spec:
  name: example.com.
  type: A
  ttl: 300
  managedZoneRef:
    name: example-zone
  routingPolicy:
    geoPolicy:
      items:
      - location: asia-east1
        rrdatas:
        - 203.0.113.1
      - location: us-central1
        rrdatas:
        - 198.51.100.1
      - location: europe-west1
        rrdatas:
        - 192.0.2.1

Azure Traffic Manager

Azure Traffic Manager provides performance-based routing:

{
  "name": "example-traffic-manager",
  "type": "Microsoft.Network/trafficManagerProfiles",
  "apiVersion": "2018-08-01",
  "properties": {
    "profileStatus": "Enabled",
    "trafficRoutingMethod": "Performance",
    "dnsConfig": {
      "relativeName": "example",
      "ttl": 60
    },
    "monitorConfig": {
      "protocol": "HTTPS",
      "port": 443,
      "path": "/health"
    },
    "endpoints": [
      {
        "name": "tokyo-endpoint",
        "type": "azureEndpoints",
        "properties": {
          "targetResourceId": "/subscriptions/.../tokyo-app-service",
          "endpointStatus": "Enabled"
        }
      },
      {
        "name": "us-endpoint",
        "type": "azureEndpoints",
        "properties": {
          "targetResourceId": "/subscriptions/.../us-app-service",
          "endpointStatus": "Enabled"
        }
      }
    ]
  }
}

CDN: Distributed Content Delivery

CDNs cache static content at edge locations worldwide, serving content from servers closest to users. This dramatically reduces latency for static assets and reduces load on origin servers.

How CDNs Work

User requests image.jpg
    ↓
CDN edge server checks cache
    ↓
If cached: Serve immediately (1-5ms)
If not cached: Fetch from origin, cache, serve (50-200ms)
    ↓
Subsequent requests served from cache
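
The hit/miss flow above boils down to a TTL cache in front of a slow origin. A toy in-memory model (single node, with a stand-in `origin_fetch` function) to make the mechanics concrete:

```python
import time

class EdgeCache:
    """Toy model of a CDN edge cache with per-entry TTL."""

    def __init__(self, origin_fetch, ttl_seconds=3600):
        self.origin_fetch = origin_fetch  # called only on cache misses
        self.ttl = ttl_seconds
        self.store = {}  # url -> (body, expires_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(url)
        if entry and entry[1] > now:
            return entry[0], "HIT"       # served from the edge, ~1-5ms
        body = self.origin_fetch(url)    # slow round trip, ~50-200ms
        self.store[url] = (body, now + self.ttl)
        return body, "MISS"

cache = EdgeCache(origin_fetch=lambda url: f"contents of {url}")
print(cache.get("/image.jpg"))  # ('contents of /image.jpg', 'MISS')
print(cache.get("/image.jpg"))  # ('contents of /image.jpg', 'HIT')
```

Production CDNs layer on cache keys, revalidation, and stale-while-revalidate behavior, but the hit/miss/expire cycle is exactly this.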

Cloudflare CDN

Cloudflare operates 200+ data centers with automatic DDoS protection:

# Configure a Cloudflare cache rule via the Rulesets API
# (cache rules live in the http_request_cache_settings phase)
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id}/rulesets/phases/http_request_cache_settings/entrypoint" \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -d '{
    "rules": [
      {
        "action": "set_cache_settings",
        "action_parameters": {
          "cache": true,
          "edge_ttl": {
            "mode": "override_origin",
            "default": 86400
          }
        },
        "expression": "(http.request.uri.path.extension in {\"jpg\" \"png\" \"css\" \"js\"})"
      }
    ]
  }'

AWS CloudFront

CloudFront integrates with AWS services and supports Lambda@Edge:

# CloudFront distribution configuration
Type: AWS::CloudFront::Distribution
Properties:
  DistributionConfig:
    Enabled: true
    DefaultCacheBehavior:
      ViewerProtocolPolicy: redirect-to-https
      CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6
      OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3  # AllViewer managed policy
      TargetOriginId: myOrigin
      LambdaFunctionAssociations:
        - EventType: viewer-request
          LambdaFunctionARN: arn:aws:lambda:us-east-1:123456789012:function:my-function:1
    Origins:
      - Id: myOrigin
        DomainName: origin.example.com
        CustomOriginConfig:
          HTTPPort: 80
          OriginProtocolPolicy: http-only
    CacheBehaviors:
      - PathPattern: /api/*
        ViewerProtocolPolicy: https-only
        CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad  # CachingDisabled managed policy
        TargetOriginId: myOrigin

Fastly

Fastly provides real-time purging and instant configuration updates:

{
  "name": "example-cdn",
  "domains": [
    {
      "name": "example.com"
    }
  ],
  "backends": [
    {
      "name": "origin",
      "address": "origin.example.com",
      "port": 443,
      "use_ssl": true,
      "ssl_cert_hostname": "origin.example.com"
    }
  ],
  "cache_settings": [
    {
      "name": "default",
      "action": "cache",
      "ttl": 3600,
      "stale_ttl": 86400
    }
  ],
  "conditions": [
    {
      "name": "static_assets",
      "statement": "req.url ~ \"\\.(jpg|png|css|js)$\""
    }
  ]
}

Akamai

Akamai operates the largest CDN network with advanced security:

<!-- Akamai edge caching configuration (schematic illustration, not literal Akamai syntax) -->
<config>
  <caching>
    <ttl>3600</ttl>
    <key-expiration>86400</key-expiration>
  </caching>
  <edge-locations>
    <location region="asia">
      <cache-size>100GB</cache-size>
      <bandwidth>10Gbps</bandwidth>
    </location>
    <location region="europe">
      <cache-size>100GB</cache-size>
      <bandwidth>10Gbps</bandwidth>
    </location>
    <location region="americas">
      <cache-size>100GB</cache-size>
      <bandwidth>10Gbps</bandwidth>
    </location>
  </edge-locations>
</config>

Google Cloud CDN

Google Cloud CDN integrates with Cloud Load Balancing:

apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeBackendService
metadata:
  name: example-backend
spec:
  protocol: HTTPS
  portName: https
  timeoutSec: 30
  enableCDN: true
  cdnPolicy:
    cacheMode: CACHE_ALL_STATIC
    clientTtl: 3600
    defaultTtl: 3600
    maxTtl: 86400
    negativeCaching: true
    negativeCachingPolicy:
    - code: 404
      ttl: 120
    - code: 410
      ttl: 120
  healthChecks:
  - name: example-health-check

Combining Geo-DNS and CDN

The most effective global deployment combines both technologies:

Architecture Pattern

User Request
    ↓
Geo-DNS Resolution
    ├─ Static Assets → CDN Edge (1-5ms)
    └─ Dynamic Requests → Regional Origin (20-50ms)
    ↓
Combined Latency: 20-50ms
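
How the pieces add up depends on each page's mix of edge and origin requests. A rough serial-latency model makes the trade-off visible; the request counts and per-hop latencies below are illustrative assumptions, not measurements:

```python
def page_latency_ms(edge_requests, origin_requests,
                    edge_ms=5, origin_ms=40):
    """Crude serial-latency estimate for a page load.

    Assumes each request costs one round trip and nothing overlaps;
    real browsers parallelize heavily, so treat this as a
    worst-case upper bound.
    """
    return edge_requests * edge_ms + origin_requests * origin_ms

# 20 static assets served from the CDN edge plus 2 API calls to a
# regional origin: 20*5 + 2*40 = 180 ms serial worst case. Serve the
# same page entirely from a distant origin at 150 ms per request and
# the bound balloons to 22 * 150 = 3300 ms.
print(page_latency_ms(20, 2))
```

The takeaway: pushing the bulky, cacheable majority of requests to the edge is what keeps the total budget dominated by a handful of fast regional origin calls.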

Implementation Strategy

Step 1: Set up regional origins

Deploy application servers in strategic regions:

  • North America (us-east-1, us-west-2)
  • Europe (eu-west-1, eu-central-1)
  • Asia-Pacific (ap-southeast-1, ap-northeast-1)

Step 2: Configure Geo-DNS

Route users to nearest regional origin:

# Route 53 configuration
Resources:
  TokyoRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z1234567890ABC
      Name: api.example.com
      Type: A
      SetIdentifier: Tokyo
      GeoLocation:
        CountryCode: JP
      AliasTarget:
        HostedZoneId: Z1234567890ABC
        DNSName: tokyo-api.example.com
        EvaluateTargetHealth: true

  USRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z1234567890ABC
      Name: api.example.com
      Type: A
      SetIdentifier: US
      GeoLocation:
        CountryCode: US
      AliasTarget:
        HostedZoneId: Z1234567890ABC
        DNSName: us-api.example.com
        EvaluateTargetHealth: true

Step 3: Configure CDN

Cache static assets globally:

# CloudFront configuration for static assets
Type: AWS::CloudFront::Distribution
Properties:
  DistributionConfig:
    Enabled: true
    DefaultCacheBehavior:
      # DefaultCacheBehavior takes no PathPattern; it catches all
      # requests not matched by a CacheBehavior below
      ViewerProtocolPolicy: https-only
      CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # CachingOptimized
      TargetOriginId: staticOrigin
    CacheBehaviors:
      - PathPattern: /api/*
        ViewerProtocolPolicy: https-only
        CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad  # CachingDisabled
        TargetOriginId: dynamicOrigin

Step 4: Implement cache headers

Control caching behavior from origin:

# Flask example
from flask import Flask, jsonify, make_response, send_file

app = Flask(__name__)

@app.route('/static/<path:filename>')
def serve_static(filename):
    response = make_response(send_file(filename))
    # Long-lived, immutable caching for fingerprinted static assets
    response.headers['Cache-Control'] = 'public, max-age=31536000, immutable'
    response.headers['CDN-Cache-Control'] = 'max-age=31536000'
    return response

@app.route('/api/data')
def api_data():
    data = {'status': 'ok'}  # placeholder payload
    response = make_response(jsonify(data))
    # Dynamic responses: never cache at the edge
    response.headers['Cache-Control'] = 'private, max-age=0, must-revalidate'
    return response

Performance Optimization Techniques

Connection Pooling

Reuse connections to reduce overhead:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(connect=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry, pool_connections=10, pool_maxsize=10)
session.mount('http://', adapter)
session.mount('https://', adapter)

# Connections are reused across requests
response = session.get('https://api.example.com/data')

Protocol Optimization

Use HTTP/2 and HTTP/3 for faster connections:

# Nginx configuration for HTTP/2
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://backend;
        # nginx speaks HTTP/1.1 to upstreams; HTTP/2 applies only on
        # the client-facing side (proxy_http_version 2.0 is invalid)
        proxy_http_version 1.1;
    }
}

Edge Computing

Process requests at edge locations:

// Cloudflare Workers example (Service Worker syntax)
addEventListener('fetch', event => {
  // Pass the whole event so the handler can use event.waitUntil
  event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
  const request = event.request

  // Check the edge cache first
  const cache = caches.default
  let response = await cache.match(request)

  if (!response) {
    // Fetch from origin on a miss
    response = await fetch(request)

    // Cache successful responses without delaying the reply
    if (response.status === 200) {
      event.waitUntil(cache.put(request, response.clone()))
    }
  }

  return response
}

Monitoring and Optimization

Key Metrics to Track

# Monitoring implementation (Flask + prometheus_client)
import time

from flask import Flask, request
from prometheus_client import Counter, Histogram

app = Flask(__name__)

request_latency = Histogram(
    'request_latency_seconds',
    'Request latency',
    ['region', 'endpoint']
)

cache_hits = Counter(
    'cache_hits_total',
    'Cache hits',
    ['region']
)

@app.before_request
def start_timer():
    request.start_time = time.time()

@app.after_request
def record_metrics(response):
    latency = time.time() - request.start_time
    region = request.headers.get('CloudFront-Viewer-Country', 'unknown')

    request_latency.labels(
        region=region,
        endpoint=request.path
    ).observe(latency)

    # CloudFront reports values like "Hit from cloudfront"
    if response.headers.get('X-Cache', '').startswith('Hit'):
        cache_hits.labels(region=region).inc()

    return response

Health Checks

Ensure failover works correctly:

# Route 53 health check configuration
Type: AWS::Route53::HealthCheck
Properties:
  Type: HTTPS
  ResourcePath: /health
  FullyQualifiedDomainName: tokyo-api.example.com
  Port: 443
  RequestInterval: 30
  FailureThreshold: 3
  MeasureLatency: true
  EnableSNI: true
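
On the application side, the /health path that the check probes should be a cheap endpoint that verifies critical dependencies and answers quickly. A framework-agnostic sketch of that handler logic (the dependency names and checks are placeholders):

```python
def health_response(dependency_checks):
    """Return (status_code, body) for a /health probe.

    dependency_checks: mapping of name -> zero-argument callable that
    returns True when the dependency is reachable. Keep each check
    cheap: Route 53 probes the endpoint every 30 seconds from many
    checker locations, so slow checks multiply load quickly.
    """
    failures = [name for name, check in dependency_checks.items()
                if not check()]
    if failures:
        # Any non-2xx/3xx response marks the endpoint unhealthy and
        # lets DNS fail traffic over to another region.
        return 503, {"status": "unhealthy", "failing": failures}
    return 200, {"status": "ok"}

# Example: a healthy region with a reachable database
print(health_response({"database": lambda: True}))
```

Wiring this into the Flask app above is a one-route addition; the key design choice is returning 503 (not raising) so the health checker sees a deliberate failure signal rather than a timeout.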

Fallback and Redundancy

Multi-Region Failover

# Route 53 failover configuration
PrimaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: Z1234567890ABC
    Name: api.example.com
    Type: A
    SetIdentifier: Primary
    Failover: PRIMARY
    AliasTarget:
      HostedZoneId: Z1234567890ABC
      DNSName: primary-api.example.com
      EvaluateTargetHealth: true
    HealthCheckId: primary-health-check

SecondaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: Z1234567890ABC
    Name: api.example.com
    Type: A
    SetIdentifier: Secondary
    Failover: SECONDARY
    AliasTarget:
      HostedZoneId: Z1234567890ABC
      DNSName: secondary-api.example.com
      EvaluateTargetHealth: true
    HealthCheckId: secondary-health-check

Conclusion

Achieving sub-20ms latency globally requires a strategic combination of Geo-DNS for intelligent routing and CDN for distributed content delivery. By implementing this architecture:

  • Geo-DNS routes users to the nearest regional origin (20-50ms)
  • CDN serves static assets from edge locations (1-5ms)
  • Combined approach delivers consistent performance worldwide

Key takeaways:

  1. Geographic distance matters: Physics limits latency; minimize it with regional deployments
  2. Geo-DNS enables intelligent routing: Route users to nearest servers automatically
  3. CDN accelerates static content: Cache assets at edge locations globally
  4. Combine both technologies: Use Geo-DNS for dynamic content, CDN for static assets
  5. Monitor continuously: Track latency, cache hit rates, and health metrics
  6. Plan for failover: Ensure redundancy and automatic failover mechanisms

The investment in global infrastructure pays dividends in user satisfaction, reduced bounce rates, and improved conversion rates. Start with a few strategic regions, monitor performance, and expand based on user distribution and business requirements.

Your users worldwide deserve fast, reliable access. Build the infrastructure to deliver it.
