Introduction
WebSocket has revolutionized real-time web communication by providing persistent, bidirectional connections between clients and servers. Unlike traditional HTTP request-response patterns, WebSockets enable servers to push data to clients instantly, making them ideal for chat applications, live dashboards, collaborative tools, and gaming platforms.
This comprehensive guide explores WebSocket proxying patterns, covering fundamental concepts, implementation strategies, load balancing, and production best practices for deploying real-time applications at scale.
Understanding WebSockets
What is WebSocket?
WebSocket is a communication protocol that provides full-duplex channels over a single TCP connection. It starts as an HTTP upgrade handshake and then transitions to a persistent WebSocket connection.
HTTP Request (Upgrade):
GET /ws HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
HTTP Response:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
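The accept token in the 101 response is not arbitrary: RFC 6455 derives it deterministically from the client's key by appending a fixed GUID, SHA-1 hashing, and base64-encoding. A minimal sketch of that derivation:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the handshake
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(key: str) -> str:
    # Append the GUID to the client's Sec-WebSocket-Key,
    # SHA-1 hash the result, then base64-encode the digest
    digest = hashlib.sha1((key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The sample key above yields exactly the accept value in the sample response
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Proxies must pass these headers through untouched, which is why the upgrade-header directives in the configurations below matter.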
Key Features
WebSocket provides several capabilities essential for real-time applications:
- Full-Duplex Communication - Both client and server can send data independently
- Low Overhead - No HTTP headers after initial handshake
- Persistent Connections - Single connection for multiple messages
- Bi-directional - Server can push data without client request
- Cross-Origin Use - Browsers send an Origin header on the handshake that servers can validate (WebSocket is not governed by CORS)
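To make the low-overhead point concrete, here is a simplified sketch of server-to-client text framing (masking, fragmentation, and payloads of 126 bytes or more are omitted for brevity):

```python
# Simplified server-to-client text-frame encoder (RFC 6455):
# payloads under 126 bytes carry only a 2-byte frame header
def encode_text_frame(payload: str) -> bytes:
    data = payload.encode("utf-8")
    if len(data) >= 126:
        raise ValueError("this sketch only handles short payloads")
    # 0x81 = FIN flag + text opcode (0x1); second byte is the payload length
    return bytes([0x81, len(data)]) + data

frame = encode_text_frame("hello")
# 2 header bytes + 5 payload bytes = 7 bytes on the wire,
# versus hundreds of bytes of headers for a comparable HTTP request
```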
WebSocket Proxy Architecture
Why Proxy WebSockets?
Proxying WebSocket connections provides several benefits:
- Load Distribution - Spread connections across multiple backend servers
- SSL/TLS Termination - Centralized certificate management
- Authentication - Unified authentication layer
- Rate Limiting - Control connection frequency
- Logging - Centralized request logging
Proxy Options
| Proxy | WebSocket Support | Best For |
|---|---|---|
| NGINX | Native (1.3+) | High-performance edge |
| HAProxy | Native (1.5+) | Load balancing |
| Traefik | Native | Container orchestration |
| Envoy | Native | Service mesh |
| Cloudflare | Native | Edge deployment |
NGINX WebSocket Proxying
Basic Configuration
server {
    listen 80;
    server_name example.com;

    location /ws/ {
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;

        # Required for WebSocket
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Forward client information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_read_timeout 86400;
        proxy_send_timeout 86400;
    }
}
SSL/TLS Termination
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location /ws/ {
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;
        proxy_send_timeout 86400;
    }
}
WebSocket Load Balancing
upstream websocket_backend {
    least_conn;
    server backend1.example.com:8080 weight=3;
    server backend2.example.com:8080 weight=2;
    server backend3.example.com:8080 weight=1;
    keepalive 32;
}

server {
    location /ws/ {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;
        proxy_send_timeout 86400;
    }
}
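The location blocks above hard-code `Connection "upgrade"`, which forces even ordinary HTTP requests through the same location into upgrade mode. The map-based pattern from the NGINX WebSocket proxying documentation sets the header conditionally:

```nginx
# Send Connection: upgrade only when the client actually requested an
# upgrade; plain HTTP requests through the same location get "close"
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location /ws/ {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```

The `map` block must live at the `http` level of the configuration, outside any `server` block.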
HAProxy WebSocket Configuration
Basic Setup
frontend http_front
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/server.pem

    # WebSocket connections are tunneled after the 101 response;
    # do not use forced connection-close options, which break the tunnel
    option http-server-close
    http-request set-header X-Forwarded-Proto https if { ssl_fc }

    # Route WebSocket connections
    use_backend ws_backend if { hdr(Upgrade) -i websocket }
    default_backend http_backend
backend ws_backend
    option httpchk GET /health
    # A plain HTTP health endpoint answers 200, not 101
    http-check expect status 200
    server ws1 backend1.example.com:8080 check inter 2000 rise 2 fall 3
    server ws2 backend2.example.com:8080 check inter 2000 rise 2 fall 3
    server ws3 backend3.example.com:8080 check inter 2000 rise 2 fall 3
    # HAProxy timeouts default to milliseconds; always give a unit.
    # "timeout tunnel" governs established WebSocket connections.
    timeout tunnel 86400s
    timeout server 86400s
    timeout connect 5s

backend http_backend
    balance roundrobin
    server http1 backend1.example.com:8081 check
    server http2 backend2.example.com:8081 check
Sticky Sessions
backend ws_backend
    balance source
    # Sticky sessions based on source IP
    stick-table type ip size 1m expire 30m
    stick on src
    server ws1 backend1.example.com:8080 check inter 2000 rise 2 fall 3
    server ws2 backend2.example.com:8080 check inter 2000 rise 2 fall 3
Cloud Deployment
AWS Application Load Balancer
# ALB WebSocket target group
Resources:
  WSTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: websocket-target-group
      Protocol: HTTP
      Port: 8080
      VpcId: !Ref VpcId
      HealthCheckPath: /health
      HealthCheckIntervalSeconds: 30
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      TargetType: instance
      # Enable sticky sessions
      TargetGroupAttributes:
        - Key: stickiness.enabled
          Value: "true"
        - Key: stickiness.type
          Value: lb_cookie
        - Key: stickiness.lb_cookie.duration_seconds
          Value: "86400"
Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-ingress
  annotations:
    # ingress-nginx proxies WebSockets automatically; only the
    # timeouts need raising for long-lived connections
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /ws
            pathType: Prefix
            backend:
              service:
                name: websocket-service
                port:
                  number: 80
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
Cloudflare
// Cloudflare Worker for WebSocket routing
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  if (url.pathname.startsWith('/ws/')) {
    // Forward the original request, including its Upgrade header,
    // to the WebSocket backend
    return fetch(`https://backend.example.com${url.pathname}`, request);
  }
  return new Response('Not Found', { status: 404 });
}
Client Implementation
Browser WebSocket
class WebSocketClient {
  constructor(url, options = {}) {
    this.url = url;
    this.reconnectInterval = options.reconnectInterval || 1000;
    this.maxReconnectAttempts = options.maxReconnectAttempts || 5;
    this.reconnectAttempts = 0;
    this.handlers = {
      open: [],
      message: [],
      close: [],
      error: []
    };
  }

  connect() {
    this.ws = new WebSocket(this.url);

    this.ws.onopen = (event) => {
      console.log('WebSocket connected');
      this.reconnectAttempts = 0;
      this.handlers.open.forEach(h => h(event));
    };

    this.ws.onmessage = (event) => {
      this.handlers.message.forEach(h => h(event));
    };

    this.ws.onclose = (event) => {
      console.log('WebSocket disconnected');
      this.handlers.close.forEach(h => h(event));
      this.attemptReconnect();
    };

    this.ws.onerror = (event) => {
      console.error('WebSocket error:', event);
      this.handlers.error.forEach(h => h(event));
    };
  }

  attemptReconnect() {
    if (this.reconnectAttempts < this.maxReconnectAttempts) {
      this.reconnectAttempts++;
      setTimeout(() => {
        console.log(`Reconnecting... attempt ${this.reconnectAttempts}`);
        this.connect();
      }, this.reconnectInterval);
    }
  }

  send(data) {
    if (this.ws && this.ws.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify(data));
    }
  }

  on(event, handler) {
    if (this.handlers[event]) {
      this.handlers[event].push(handler);
    }
  }

  close() {
    // Prevent further reconnect attempts before closing
    this.maxReconnectAttempts = 0;
    this.ws.close();
  }
}

// Usage
const ws = new WebSocketClient('wss://api.example.com/ws');

ws.on('open', () => {
  ws.send({ type: 'subscribe', channel: 'updates' });
});

ws.on('message', (event) => {
  console.log('Received:', JSON.parse(event.data));
});

ws.connect();
Socket.IO
const { Server } = require('socket.io');

const io = new Server(3000, {
  cors: {
    origin: "*",
    methods: ["GET", "POST"]
  },
  pingTimeout: 60000,
  pingInterval: 25000
});

io.on('connection', (socket) => {
  console.log('Client connected:', socket.id);

  socket.on('message', (data) => {
    console.log('Message received:', data);
    socket.emit('response', { status: 'ok' });
  });

  socket.on('disconnect', () => {
    console.log('Client disconnected');
  });

  // Room management
  socket.on('join-room', (room) => {
    socket.join(room);
    socket.to(room).emit('user-joined', socket.id);
  });
});

// Namespace for admin
const admin = io.of('/admin');
admin.on('connection', (socket) => {
  console.log('Admin connected');
});
Connection Management
Server-Side Implementation
import asyncio
import json
import websockets
from collections import defaultdict

class ConnectionManager:
    def __init__(self):
        self.active_connections = defaultdict(set)
        self.rooms = defaultdict(set)

    def connect(self, websocket, client_id):
        # The websockets library completes the handshake before the
        # handler runs, so no explicit accept step is needed
        self.active_connections['global'].add(websocket)
        self.active_connections[client_id].add(websocket)

    def disconnect(self, websocket, client_id):
        self.active_connections['global'].discard(websocket)
        self.active_connections[client_id].discard(websocket)
        for members in self.rooms.values():
            members.discard(websocket)

    def join_room(self, websocket, room):
        self.rooms[room].add(websocket)

    async def send_personal_message(self, message, websocket):
        await websocket.send(json.dumps(message))

    async def broadcast(self, message):
        for connection in self.active_connections['global']:
            await connection.send(json.dumps(message))

    async def broadcast_to_room(self, message, room):
        for connection in self.rooms[room]:
            await connection.send(json.dumps(message))

manager = ConnectionManager()

async def handler(websocket, path):
    client_id = path.strip('/')
    manager.connect(websocket, client_id)
    try:
        async for message in websocket:
            data = json.loads(message)
            if data['type'] == 'broadcast':
                await manager.broadcast(data['content'])
            elif data['type'] == 'join':
                manager.join_room(websocket, data['room'])
            elif data['type'] == 'room':
                await manager.broadcast_to_room(data['content'], data['room'])
            else:
                await manager.send_personal_message({'error': 'Unknown message type'}, websocket)
    except websockets.exceptions.ConnectionClosed:
        pass
    finally:
        manager.disconnect(websocket, client_id)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

asyncio.run(main())
Health Checks
import json
from aiohttp import web, WSMsgType

# Module-level registry of open WebSocket connections
active_connections = set()

async def health_check(request):
    return web.json_response({
        'status': 'healthy',
        'connections': len(active_connections)
    })

async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    # Add to active connections
    active_connections.add(ws)
    try:
        async for msg in ws:
            if msg.type == WSMsgType.TEXT:
                # Process message
                await ws.send_str(json.dumps({'status': 'ok'}))
    finally:
        active_connections.discard(ws)
    return ws

app = web.Application()
app.router.add_get('/health', health_check)
app.router.add_get('/ws', websocket_handler)
Security
Authentication
import jwt
from urllib.parse import parse_qs, urlparse

async def authenticate_websocket(token):
    try:
        payload = jwt.decode(token, 'secret_key', algorithms=['HS256'])
        return payload['user_id']
    except jwt.InvalidTokenError:
        return None

async def handler(websocket, path):
    # Extract token from the query string, e.g. /ws?token=...
    query = parse_qs(urlparse(path).query)
    token = query.get('token', [None])[0]
    if not token:
        await websocket.close(code=4001, reason='Missing token')
        return
    user_id = await authenticate_websocket(token)
    if not user_id:
        await websocket.close(code=4002, reason='Invalid token')
        return
    # Continue with authenticated connection
    # ... rest of handler
Rate Limiting
import time
from collections import defaultdict

class RateLimiter:
    def __init__(self, max_messages, window_seconds):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self.messages = defaultdict(list)

    def is_allowed(self, client_id):
        now = time.time()
        # Drop timestamps that have fallen out of the window
        self.messages[client_id] = [
            t for t in self.messages[client_id]
            if now - t < self.window_seconds
        ]
        if len(self.messages[client_id]) >= self.max_messages:
            return False
        self.messages[client_id].append(now)
        return True

limiter = RateLimiter(max_messages=100, window_seconds=60)

async def handler(websocket, path):
    client_id = path.strip('/')
    async for message in websocket:
        # Check the limit per message, not once per connection
        if not limiter.is_allowed(client_id):
            await websocket.close(code=4003, reason='Rate limit exceeded')
            return
        # Process message
Scaling Considerations
Horizontal Scaling
# Redis message queue for python-socketio: lets multiple server
# instances relay events to each other
import socketio

sio = socketio.AsyncServer(
    async_mode='aiohttp',
    cors_allowed_origins='*',
    client_manager=socketio.AsyncRedisManager('redis://redis-server:6379/0')
)
Sticky Sessions
# NGINX sticky sessions
upstream backend {
    server backend1:8080;
    server backend2:8080;
    server backend3:8080;
    # Enable sticky sessions via consistent client-IP hashing
    hash $remote_addr consistent;
}

server {
    location /ws/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Timeouts
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
Best Practices
Performance
- Use SSL/TLS - Always encrypt WebSocket connections
- Set Appropriate Timeouts - Configure for your use case
- Enable Compression - Reduce bandwidth for large payloads
- Monitor Connections - Track active connections and messages
Reliability
- Implement Reconnection - Handle network interruptions
- Heartbeat/Keepalive - Detect dead connections
- Message Queues - Buffer messages during spikes
- Graceful Shutdown - Notify clients before server restart
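The heartbeat item can be sketched library-agnostically. `Heartbeat` below is a hypothetical helper that only does the bookkeeping; actually sending pings and receiving pongs is left to whichever WebSocket library you use (many, such as `websockets`, already offer `ping_interval`/`ping_timeout` options):

```python
import time

# Heartbeat bookkeeping sketch: ping the peer every `interval` seconds
# and treat the connection as dead if no pong arrives within `timeout`.
# `clock` is injectable so the logic is testable without real waiting.
class Heartbeat:
    def __init__(self, interval=25.0, timeout=60.0, clock=time.monotonic):
        self.interval = interval
        self.timeout = timeout
        self.clock = clock
        self.last_pong = clock()

    def on_pong(self):
        # Call whenever the peer answers a ping
        self.last_pong = self.clock()

    def is_alive(self):
        return (self.clock() - self.last_pong) < self.timeout

hb = Heartbeat(interval=25.0, timeout=60.0)
hb.on_pong()
# hb.is_alive() stays True until 60 s pass without a pong
```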
Security
- Authenticate Connections - Validate tokens on connection
- Validate Messages - Sanitize all incoming data
- Limit Message Size - Prevent memory exhaustion
- Use WSS - Require secure WebSocket protocol
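As a complement to token authentication, the handshake's Origin header can be validated: browsers always send it, so rejecting unknown values blocks cross-site connections from web pages. `origin_allowed` is a hypothetical helper and the allowed set is illustrative; note that non-browser clients can forge the header, so this supplements rather than replaces authentication:

```python
# Illustrative allow-list; replace with your own origins
ALLOWED_ORIGINS = {"https://example.com", "https://app.example.com"}

def origin_allowed(headers: dict) -> bool:
    # Reject handshakes whose Origin is missing or not on the list
    return headers.get("Origin") in ALLOWED_ORIGINS

origin_allowed({"Origin": "https://example.com"})   # → True
origin_allowed({"Origin": "https://evil.example"})  # → False
```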
External Resources
- WebSocket Protocol RFC 6455
- NGINX WebSocket Documentation
- HAProxy WebSocket Guide
- MDN WebSocket Guide
Conclusion
WebSocket proxying is essential for building real-time applications at scale. This guide covered key patterns including proxy configuration, load balancing, client implementations, and security best practices.
Key takeaways:
- Configure timeouts appropriately for long-lived connections
- Implement reconnection logic on clients
- Use sticky sessions for stateful applications
- Monitor actively for reliability
By following these patterns, you can build robust, scalable real-time applications.