
Istio Ambient Mesh: Sidecar-Free Service Mesh Setup and Migration

Introduction

Traditional Istio injects an Envoy sidecar into every pod. At 1,000 pods, that's 1,000 extra containers consuming ~50MB RAM each, roughly 50GB just for proxies. Ambient Mesh eliminates per-pod sidecars by moving mesh functionality to the node level (ztunnel) and namespace level (waypoints).

Sidecar vs Ambient:

Sidecar mesh:
  Pod = [app container] + [envoy sidecar]
  1000 pods = 1000 sidecars = ~50GB RAM overhead
  Upgrade = rolling restart of ALL pods

Ambient mesh:
  Pod = [app container]  (no sidecar!)
  Node = [ztunnel DaemonSet]  (one per node)
  Namespace = [waypoint proxy]  (optional, for L7 policies)
  Upgrade = update ztunnel DaemonSet (no app restarts)

Installing Istio Ambient Mesh

# Install istioctl
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.21.0 sh -
export PATH=$PWD/istio-1.21.0/bin:$PATH

# Install Istio with ambient profile
istioctl install --set profile=ambient --set "components.ingressGateways[0].enabled=true" -y

# Verify installation
kubectl get pods -n istio-system
# NAME                                    READY   STATUS
# istiod-xxx                              1/1     Running
# istio-cni-node-xxx (one per node)       1/1     Running
# ztunnel-xxx (one per node)              1/1     Running
# istio-ingressgateway-xxx                1/1     Running

istioctl verify-install

Enabling Ambient Mode for a Namespace

# Label namespace to enable ambient mesh (no pod restarts needed!)
kubectl label namespace default istio.io/dataplane-mode=ambient

# Verify: pods should NOT have sidecar containers
kubectl get pods -n default
# NAME          READY   STATUS
# myapp-xxx     1/1     Running  ← only 1 container, no sidecar!

# Check ztunnel is handling traffic
kubectl logs -n istio-system -l app=ztunnel | grep "default"
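To make the sidecar check repeatable, the verification can be wrapped in a small jq filter (a sketch; the `count_sidecar_pods` function name is ours, and `jq` is assumed to be installed, as in the migration-status checks later on):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Count pods that still carry an injected istio-proxy container.
# Reads `kubectl get pods -o json` output on stdin so it can be
# piped straight from the cluster.
count_sidecar_pods() {
  jq '[.items[] | select(any(.spec.containers[]; .name == "istio-proxy"))] | length'
}

# Live usage: expect 0 once the namespace runs in ambient mode
#   kubectl get pods -n default -o json | count_sidecar_pods
```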

Layer 1: ztunnel (Automatic mTLS)

Once a namespace is in ambient mode, ztunnel automatically:

  • Encrypts all pod-to-pod traffic with mTLS
  • Verifies service identity (SPIFFE/X.509)
  • Collects L4 metrics (bytes, connections)

# Verify mTLS: ztunnel encrypts transparently, so the app-level curl
# below still works, but no X-Forwarded-Client-Cert header appears
# unless a waypoint (L7 proxy) is on the path
kubectl exec -it sleep-xxx -- curl http://httpbin:8000/headers

# Inspect ztunnel's workload certificates instead
istioctl ztunnel-config certificates ztunnel-xxx -n istio-system
# (older releases: istioctl experimental ztunnel-config certificates)

# Check ztunnel metrics
kubectl exec -n istio-system ztunnel-xxx -- curl localhost:15020/metrics | grep ztunnel_

Layer 2: Waypoint Proxies (L7 Policies)

Waypoints are optional. Add them when you need HTTP-level policies (retries, circuit breaking, header manipulation):

# Create a waypoint for a namespace
istioctl waypoint apply --namespace default

# Or create a named waypoint that proxies traffic for services
istioctl waypoint apply --namespace default --name reviews-waypoint \
    --for service

# Verify waypoint is running
kubectl get pods -n default -l gateway.istio.io/managed=istio.io-mesh-controller
# NAME                        READY   STATUS
# waypoint-xxx                1/1     Running

Traffic Policies via Waypoint

# Retry policy (requires waypoint)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,reset,connect-failure
    timeout: 10s

# Circuit breaker (requires waypoint)
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s

# Canary deployment: 90% v1, 10% v2
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
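Note that the `v1`/`v2` subsets referenced above only resolve if they are declared in a DestinationRule. A minimal sketch, assuming the reviews pods are labeled `version: v1` / `version: v2` (the `reviews-subsets` name is ours; these subsets could equally be merged into the earlier `reviews` DestinationRule):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-subsets
  namespace: default
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```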

Security Policies

# Deny all traffic by default, then allow explicitly
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: default
spec: {}  # empty spec = deny all

---
# Allow frontend to call backend
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/*"]

# Require mTLS for all traffic in namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT  # reject non-mTLS connections
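A quick way to exercise these policies is to probe the backend from a client pod and classify the result: ztunnel (L4) denials surface as failed connections, while waypoint (L7) denials return HTTP 403. A sketch, where `classify_probe` and the `sleep` client are our own illustrative names:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Classify a probe: a non-zero curl exit code means the connection
# was refused/reset (ztunnel L4 deny); HTTP 403 means a waypoint
# (L7) rejected the request.
classify_probe() {
  local exit_code="$1" http_code="$2"
  if [ "$exit_code" -ne 0 ] || [ "$http_code" = "403" ]; then
    echo denied
  else
    echo allowed
  fi
}

# Live usage against the policies above:
#   code=$(kubectl exec deploy/sleep -n default -- curl -s -o /dev/null \
#       -w '%{http_code}' http://backend:8080/api/health) || ec=$?
#   classify_probe "${ec:-0}" "${code:-000}"
classify_probe 0 200   # -> allowed
classify_probe 7 000   # -> denied (connection failed at L4)
```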

Migrating from Sidecar to Ambient

Istio supports running both modes simultaneously during migration:

# Step 1: Install ambient alongside existing sidecar mesh
istioctl install --set profile=ambient -y
# Existing sidecar pods continue working

# Step 2: Migrate one namespace at a time
kubectl label namespace staging istio.io/dataplane-mode=ambient

# Step 3: Remove sidecar injection from migrated namespace
kubectl label namespace staging istio-injection-

# Step 4: Restart pods to remove sidecars
kubectl rollout restart deployment -n staging

# Step 5: Verify traffic still works
kubectl exec -n staging sleep-xxx -- curl http://httpbin:8000/get

# Step 6: Repeat for other namespaces

Checking Migration Status

# See which pods still have an istio-proxy sidecar
kubectl get pods -A -o json | \
    jq -r '.items[] | select(any(.spec.containers[]; .name == "istio-proxy")) | .metadata.name'

# Check ambient enrollment
kubectl get pods -n default -o json | jq '.items[].metadata.annotations["ambient.istio.io/redirection"]'
# Should show "enabled" for ambient pods

Resource Comparison

# Measure sidecar overhead
kubectl top pods -n default --containers | grep istio-proxy
# istio-proxy containers typically use 50-100MB RAM each

# Measure ztunnel overhead
kubectl top pods -n istio-system -l app=ztunnel
# One ztunnel per node, typically 10-30MB RAM

# Calculate savings for 100 pods on 10 nodes:
# Sidecar: 100 × 75MB = 7,500MB
# Ambient: 10 × 20MB = 200MB
# Savings: ~7,300MB (97% reduction)
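The arithmetic above can be reproduced in plain shell (75MB per sidecar and 20MB per ztunnel are midpoints of the measured ranges):

```shell
#!/usr/bin/env bash
set -euo pipefail

pods=100 nodes=10
sidecar_mb=$((pods * 75))             # one Envoy proxy per pod
ambient_mb=$((nodes * 20))            # one ztunnel per node
savings_mb=$((sidecar_mb - ambient_mb))
pct=$((100 * savings_mb / sidecar_mb))
echo "sidecar=${sidecar_mb}MB ambient=${ambient_mb}MB savings=${savings_mb}MB (~${pct}%)"
# Prints: sidecar=7500MB ambient=200MB savings=7300MB (~97%)
```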

Observability

# L4 metrics from ztunnel (always available)
kubectl exec -n istio-system ztunnel-xxx -- curl -s localhost:15020/metrics | \
    grep -E "istio_tcp_connections|istio_tcp_sent_bytes"

# L7 metrics (requires waypoint)
kubectl exec -n default waypoint-xxx -- curl -s localhost:15020/metrics | \
    grep -E "istio_requests_total|istio_request_duration"

# Access logs
kubectl logs -n istio-system ztunnel-xxx | grep "default/myapp"

# Kiali dashboard (service topology)
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.21/samples/addons/kiali.yaml
istioctl dashboard kiali

Ambient vs Sidecar: When to Use Each

Scenario                                       Recommendation
New Kubernetes cluster                         Ambient mesh
Existing sidecar deployment                    Migrate gradually
Need L7 policies (retries, circuit breaking)   Ambient + waypoints
Only need mTLS + basic metrics                 Ambient without waypoints
Complex per-pod traffic policies               Sidecar (more granular)
Resource-constrained cluster                   Ambient (much less overhead)
Frequent app deployments                       Ambient (no sidecar injection delays)
