Introduction
WireGuard consistently outperforms other VPN protocols in throughput and latency benchmarks. This guide presents representative benchmark numbers, explains why WireGuard is fast, and shows how to tune it for maximum performance.
Why WireGuard is Fast
Traditional VPN (OpenVPN):
Userspace process → TLS overhead → TCP tunneling → kernel network stack
CPU: high (TLS + TCP-in-TCP)
Latency: +5-15ms overhead
WireGuard:
Kernel module → ChaCha20-Poly1305 → UDP → kernel network stack
CPU: minimal (kernel-level, hardware-friendly cipher)
Latency: +1-3ms overhead
Key performance advantages:
- Kernel module → no userspace/kernel context switching
- ChaCha20-Poly1305 → fast on all hardware, especially without AES-NI
- UDP transport → no TCP-in-TCP overhead
- Minimal codebase → ~4,000 lines vs hundreds of thousands for OpenVPN
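The kernel-path advantage only applies when the in-kernel implementation is actually in use; `wireguard-go` is the userspace fallback and gives that advantage up. A quick check, assuming a Linux host with `lsmod` available:

```shell
# Report whether the in-kernel WireGuard module is loaded.
check_wg_kernel() {
  if lsmod 2>/dev/null | grep -q '^wireguard'; then
    echo kernel
  else
    echo userspace-or-missing
  fi
}
check_wg_kernel
```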
Benchmark Results
Throughput (1 Gbps link, modern server)
| Protocol | Throughput | CPU Usage | Notes |
|---|---|---|---|
| WireGuard | 950 Mbps | 8% | Near wire-speed |
| IPsec/IKEv2 | 850 Mbps | 12% | With kernel offload |
| OpenVPN (UDP) | 450 Mbps | 35% | Userspace limitation |
| OpenVPN (TCP) | 300 Mbps | 45% | TCP-in-TCP overhead |
| SoftEther SSL | 700 Mbps | 20% | Good but more overhead |
Latency Added by VPN
# Test latency overhead
# Without VPN:
ping -c 100 10.0.0.1 | tail -1
# rtt min/avg/max/mdev = 0.8/1.2/2.1/0.3 ms
# With WireGuard:
ping -c 100 10.0.0.1 # through WireGuard tunnel
# rtt min/avg/max/mdev = 1.9/2.4/3.8/0.4 ms
# Overhead: ~1.2ms
# With OpenVPN:
# rtt min/avg/max/mdev = 6.2/8.1/15.3/2.1 ms
# Overhead: ~7ms
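The overhead figures above come straight out of ping's summary line; a small sketch of the arithmetic (field positions assume the `rtt min/avg/max/mdev = ...` format shown above):

```shell
# avg_rtt: extract the average rtt from ping's summary line
# ("rtt min/avg/max/mdev = 0.8/1.2/2.1/0.3 ms" -> 1.2).
avg_rtt() { awk -F'/' '{ print $5 }'; }

# vpn_overhead: added latency = tunnel average minus baseline average.
vpn_overhead() { awk -v base="$1" -v vpn="$2" 'BEGIN { printf "%.1f\n", vpn - base }'; }

echo "rtt min/avg/max/mdev = 0.8/1.2/2.1/0.3 ms" | avg_rtt   # -> 1.2
vpn_overhead 1.2 2.4                                         # -> 1.2
```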
Real-World File Transfer
# Test with iperf3
# Server side:
iperf3 -s
# Client side (through WireGuard):
iperf3 -c 10.0.0.1 -t 30 -P 4
# Typical result: 920-950 Mbps on 1 Gbps link
# Compare with OpenVPN:
# Typical result: 400-450 Mbps on same hardware
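For scripted A/B comparisons it helps to extract the number from iperf3's output. A hedged sketch assuming the usual end-of-run summary format (`iperf3 -J` plus a JSON parser is the more robust route):

```shell
# receiver_bitrate: print the value preceding "...bits/sec" on iperf3's
# receiver summary line.
receiver_bitrate() {
  awk '/receiver/ { for (i = 1; i <= NF; i++) if ($i ~ /bits\/sec/) print $(i-1) }'
}
echo "[  5]   0.00-30.00  sec  3.31 GBytes   948 Mbits/sec    receiver" | receiver_bitrate
# -> 948
```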
Setting Up WireGuard for Performance
Server Configuration
# /etc/wireguard/wg0.conf - server
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
# Performance tuning
# MTU: set explicitly to avoid fragmentation
MTU = 1420
# Post-up rules for routing
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
# PersistentKeepalive not needed on server side
Client Configuration
# /etc/wireguard/wg0.conf - client
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>
DNS = 1.1.1.1
MTU = 1420
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0 # route all traffic through VPN
PersistentKeepalive = 25 # keep NAT mapping alive
MTU Optimization
A wrong MTU causes fragmentation, which kills performance. WireGuard adds 60 bytes of overhead to each packet when the outer connection runs over IPv4, and 80 bytes over IPv6.
# Find your network's MTU
ping -M do -s 1472 vpn.example.com # 1472 + 28 (IP+ICMP header) = 1500
# If this works: your path MTU is 1500
# If it fails: reduce size until it works
# WireGuard MTU = path MTU - 60 (WireGuard-over-IPv4 overhead; subtract 80 for IPv6)
# For 1500 MTU path (IPv4): WireGuard MTU = 1500 - 60 = 1440
# For 1480 MTU path (PPPoE): WireGuard MTU = 1480 - 60 = 1420
# Set MTU in wg0.conf
MTU = 1420 # safe default for most networks
# Or set dynamically
ip link set dev wg0 mtu 1420
# Verify MTU is working (no fragmentation)
ping -M do -s 1380 10.0.0.1 # 1380 + 28 = 1408, well under 1420
# Should succeed
ping -M do -s 1400 10.0.0.1 # 1400 + 28 = 1428, over 1420
# Should fail with "Frag needed"
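The arithmetic above can be wrapped in a tiny helper (overhead figures as stated: 60 bytes over IPv4, 80 over IPv6):

```shell
# wg_mtu PATH_MTU [ipv6]: largest safe WireGuard MTU for a given path MTU.
wg_mtu() {
  if [ "$2" = "ipv6" ]; then
    echo $(( $1 - 80 ))   # outer IPv6 + UDP + WireGuard headers
  else
    echo $(( $1 - 60 ))   # outer IPv4 + UDP + WireGuard headers
  fi
}
wg_mtu 1500        # -> 1440 (IPv4 endpoint)
wg_mtu 1500 ipv6   # -> 1420 (why 1420 is the common conservative default)
wg_mtu 1480        # -> 1420 (PPPoE)
```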
Kernel Tuning for High Throughput
For servers handling many clients or high bandwidth:
# /etc/sysctl.d/99-wireguard.conf
# Increase network buffer sizes
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
# TCP buffer tuning (for traffic passing through the VPN)
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
# Increase connection tracking table
net.netfilter.nf_conntrack_max = 1048576
# Enable IP forwarding (required for routing)
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
# Reduce TIME_WAIT sockets
net.ipv4.tcp_tw_reuse = 1
# Apply
sysctl -p /etc/sysctl.d/99-wireguard.conf
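The buffer values above are byte counts; a quick sanity check of what they amount to:

```shell
# The sysctl buffer settings are in bytes; confirm the sizes.
echo "$(( 134217728 / 1048576 )) MiB max buffer"       # rmem_max / wmem_max
echo "$(( 16777216 / 1048576 )) MiB default buffer"    # rmem_default / wmem_default
```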
Multi-Queue for High Throughput
Kernel-mode WireGuard already spreads encryption work across CPU cores; the usual bottlenecks are the physical NIC's queue setup and the interface's transmit queue:
# Check how many hardware queues the NIC exposes (raise with `ethtool -L` if supported)
ethtool -l eth0
# Lengthen the transmit queue on the WireGuard interface
ip link set wg0 txqueuelen 1000
# For very high throughput (10 Gbps+), run multiple WireGuard instances
# wg0 on port 51820 → CPU 0-3
# wg1 on port 51821 → CPU 4-7
# Load-balance across them with ECMP routing
Monitoring Performance
# Real-time WireGuard stats
watch -n 1 'wg show wg0'
# Output includes:
# latest handshake: 2 minutes, 15 seconds ago
# transfer: 1.23 GiB received, 456 MiB sent
# Detailed interface stats
ip -s link show wg0
# Packet loss and latency
ping -c 1000 -i 0.01 10.0.0.1 | tail -3
# 1000 packets transmitted, 998 received, 0.2% packet loss
# rtt min/avg/max/mdev = 1.8/2.3/8.1/0.4 ms
# Throughput test
iperf3 -c 10.0.0.1 -t 60 -P 8 # 8 parallel streams, 60 seconds
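`wg show` only reports cumulative transfer counters, so throughput needs two samples and a little arithmetic. A sketch (the byte counts below are made-up sample values):

```shell
# rate_mbps BYTES_BEFORE BYTES_AFTER INTERVAL_SECONDS: convert the delta of
# two cumulative byte counters (e.g. from `wg show wg0 transfer`) to Mbps.
rate_mbps() {
  awk -v a="$1" -v b="$2" -v t="$3" 'BEGIN { printf "%.1f\n", (b - a) * 8 / t / 1000000 }'
}
rate_mbps 0 118750000 1   # -> 950.0
```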
Troubleshooting Performance Issues
Symptom: Low throughput despite fast hardware
# Check CPU usage during transfer
top -d 1 # look for ksoftirqd or wg-crypt kernel threads saturating a core
# Check for packet drops
ip -s link show wg0 | grep -A2 "RX\|TX"
# Look for: dropped > 0
# Check interrupt affinity
cat /proc/interrupts | grep eth0
# If all interrupts land on CPU 0, spread the affinity (repeat for each eth0 IRQ):
echo 0f > /proc/irq/$(grep eth0 /proc/interrupts | awk 'NR==1 {print $1}' | tr -d :)/smp_affinity
Symptom: High latency spikes
# Check for packet loss (causes retransmission)
ping -c 100 -i 0.1 10.0.0.1 | grep -E "loss|mdev"
# Check WireGuard handshake timing
wg show wg0 | grep "latest handshake"
# If > 3 minutes: connection may be stale, check firewall/NAT
# Check for MTU issues (fragmentation causes latency)
ping -M do -s 1400 10.0.0.1
# If fails: reduce MTU in wg0.conf
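The handshake-age check above can be automated with wg's machine-readable output: `wg show wg0 latest-handshakes` prints one `<peer> <epoch-seconds>` line per peer (0 if no handshake has happened yet). A sketch flagging anything older than 3 minutes:

```shell
# stale_peers NOW: read "<peer> <epoch>" lines on stdin, print peers whose
# last handshake is missing (0) or more than 180 s before NOW.
stale_peers() {
  awk -v now="$1" '$2 == 0 || now - $2 > 180 { print $1 }'
}
# Sample data: peerA handshook 100 s ago, peerB 400 s ago.
printf 'peerA 1700000300\npeerB 1700000000\n' | stale_peers 1700000400
# -> peerB
```

On a live system: `wg show wg0 latest-handshakes | stale_peers "$(date +%s)"`.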
Symptom: Connection drops
# Increase keepalive
# In [Peer] section:
PersistentKeepalive = 15 # more frequent than the commonly recommended 25s (keepalive is off unless set)
# Check NAT timeout on your router
# Most routers timeout UDP after 30-300 seconds
# PersistentKeepalive must be less than this value
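The relationship between the two timers can be encoded as a quick check (the thresholds below are illustrative, based on the router timeout range quoted above):

```shell
# keepalive_ok KEEPALIVE NAT_TIMEOUT: the keepalive interval must be
# strictly below the NAT UDP timeout, ideally with some margin.
keepalive_ok() {
  if [ "$1" -lt "$2" ]; then echo ok; else echo too-long; fi
}
keepalive_ok 15 30   # -> ok
keepalive_ok 25 30   # -> ok, but tight; lower it if drops persist
keepalive_ok 25 20   # -> too-long (NAT mapping expires before the keepalive fires)
```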
Performance vs Other Protocols: When to Choose What
| Scenario | Best Choice | Reason |
|---|---|---|
| Maximum throughput | WireGuard | Kernel-level, minimal overhead |
| Mobile clients | WireGuard | Battery-efficient, fast reconnect |
| Legacy clients (Windows XP, etc.) | OpenVPN | Broad compatibility |
| Corporate firewall traversal | OpenVPN TCP 443 | Looks like HTTPS |
| Site-to-site, existing IPsec infra | IPsec/IKEv2 | Native integration |
| Multi-protocol support needed | SoftEther | Supports all protocols |
Resources
- WireGuard Performance
- WireGuard Whitepaper
- iperf3 - network throughput testing
- Linux Network Tuning