Introduction
IPVS (IP Virtual Server) is a Linux kernel feature that provides Layer 4 (transport-layer) load balancing directly in kernel space. As part of the Linux Virtual Server (LVS) project, IPVS enables high-performance, scalable load balancing for TCP and UDP services. With the continued growth of cloud-native applications and the demand for low latency, IPVS remains a technology of choice for organizations that need maximum throughput and efficiency.
This comprehensive guide covers IPVS architecture, configuration using ipvsadm, integration with keepalived for high availability, advanced scheduling algorithms, and production deployment strategies. Whether you’re building a load balancer for a high-traffic website or designing infrastructure for a microservices architecture, this article provides the knowledge needed to implement IPVS effectively.
What is IPVS?
IPVS is a kernel-based load balancer that operates at Layer 4 of the OSI model. Unlike user-space load balancers, IPVS processes packets in the kernel, resulting in minimal overhead and exceptional performance.
Key Characteristics
Kernel-Level Processing: Packets are handled in the Linux kernel, minimizing context switches and latency.
Protocol Support: Supports TCP, UDP, SCTP, and other protocols.
Health Checking: Built-in connection monitoring and automatic server failover.
Scheduling Algorithms: Multiple algorithms including round robin, least connections, and weighted distribution.
NAT and DR Modes: Supports both Network Address Translation and Direct Routing modes.
IPVS vs User-Space Load Balancers
| Feature | IPVS | User-Space (HAProxy) |
|---|---|---|
| Performance | Kernel-level, extremely fast | User-space, moderate overhead |
| Latency | Very low; no kernel/user-space transitions | Higher due to context switches |
| Features | L4 only | L4 and L7 |
| Protocol | TCP/UDP/SCTP | HTTP, TCP, UDP |
| Complexity | Requires kernel modules | Easier to configure |
| SSL Termination | No | Yes |
Architecture
How IPVS Works
Client → Virtual IP (VIP) → IPVS → Real Server (RS1, RS2, RS3)
Virtual IP (VIP): The IP address clients connect to.
Real Servers (RS): Backend servers that actually handle requests.
Director: The IPVS load balancer machine.
Connection Handling
1. Client connects to the VIP
2. IPVS intercepts the connection in kernel space
3. A real server is selected by the scheduling algorithm
4. The connection table is updated
5. Packets are forwarded to the selected real server
6. The response is routed back through the director (NAT mode) or sent directly to the client (DR mode)
Modes
NAT Mode (Network Address Translation):
- Director modifies both source and destination IP addresses
- Real servers must route traffic back through director
- Simple setup, limited by director bandwidth
Direct Routing (DR):
- Director only modifies MAC address
- Real servers must have VIP configured on loopback
- Much higher throughput, no bottleneck
IP Tunneling:
- Director encapsulates packets and tunnels to real servers
- Real servers can be on different networks
- More complex setup
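The three modes correspond to ipvsadm's -g (direct routing), -m (NAT), and -i (tunneling) flags. As a sketch of tunnel mode, with hypothetical addresses and the real server on a different network:

```shell
# On the director: add a tunnel-mode (-i) real server on a remote network
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 10.0.2.10 -i

# On the real server: terminate the IPIP tunnel and bind the VIP to it
modprobe ipip
ip link set tunl0 up
ip addr add 192.168.1.100/32 dev tunl0
# Reverse-path filtering usually needs relaxing on the tunnel device:
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
```

As with DR mode, the real server answers the client directly, so the director never sees the response traffic.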
Installation
Check IPVS Support
# Check if IPVS modules are loaded
lsmod | grep ip_vs
# Load IPVS modules
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_lc
modprobe ip_vs_wlc
modprobe ip_vs_sh
modprobe ip_vs_ovf
# Make modules persistent (/etc/modules expects one module name per line)
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_lc ip_vs_wlc >> /etc/modules
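On systemd-based distributions, /etc/modules may not be consulted; the equivalent is a drop-in under /etc/modules-load.d (the filename is arbitrary):

```
# /etc/modules-load.d/ipvs.conf -- one module name per line
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
```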
Install ipvsadm
# Ubuntu/Debian
sudo apt install ipvsadm
# RHEL/CentOS
sudo dnf install ipvsadm
# Verify installation
ipvsadm -L -n
Basic Configuration
Add Virtual Service
# Add TCP virtual service on port 80
ipvsadm -A -t 192.168.1.100:80 -s rr
# Add UDP virtual service on port 53
ipvsadm -A -u 192.168.1.100:53 -s rr
# Add with scheduling algorithm
ipvsadm -A -t 192.168.1.100:80 -s wlc
# Scheduling algorithms:
# rr - Round Robin
# wrr - Weighted Round Robin
# lc - Least Connections
# wlc - Weighted Least Connections
# sh - Source Hashing
# dh - Destination Hashing
# sed - Shortest Expected Delay
# nq - Never Queue
Add Real Servers
# Add real server (gate mode - direct routing)
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12 -g
# Add real server (masq mode - NAT)
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11 -m
# Add with weight
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10 -m -w 2
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11 -m -w 1
View Configuration
# List virtual services
ipvsadm -L -n
# Detailed output
ipvsadm -L -n --stats
# Show connection table
ipvsadm -L -n -c
# Show rate information
ipvsadm -L -n --rate
Example output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:80 rr
  -> 192.168.1.10:80              Masq    2      150        10
  -> 192.168.1.11:80              Masq    1      80         5
Advanced Configuration
Persistence Connections
# Add persistence (600 seconds)
ipvsadm -A -t 192.168.1.100:80 -s rr -p 600
# Persistent service covering all ports (port 0 requires persistence)
ipvsadm -A -t 192.168.1.100:0 -s rr -p 3600
# View persistent connection information
ipvsadm -L -n --persistent-conn
Connection Timeout
# Set TCP timeout
ipvsadm --set 900 120 300
# Format: tcp_timeout fin_timeout udp_timeout
# Default timeouts:
# TCP: 900 seconds
# TCP FIN: 120 seconds
# UDP: 300 seconds
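To confirm which timeouts are actually in effect, they can be read back with the --timeout list option:

```shell
# Print the current tcp / tcpfin / udp timeouts
ipvsadm -L --timeout
```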
Firewall Mark
# Mark packets for load balancing
iptables -A PREROUTING -t mangle -d 192.168.1.100 -p tcp --dport 80 -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -d 192.168.1.100 -p tcp --dport 443 -j MARK --set-mark 1
# Create virtual service using firewall mark
ipvsadm -A -f 1 -s rr
ipvsadm -a -f 1 -r 192.168.1.10 -g
ipvsadm -a -f 1 -r 192.168.1.11 -g
High Availability with Keepalived
Keepalived provides VRRP for IPVS failover, creating a highly available load balancer pair.
Installation
# Ubuntu/Debian
sudo apt install keepalived
# RHEL/CentOS
sudo dnf install keepalived
Keepalived Configuration
# /etc/keepalived/keepalived.conf

# Global configuration
global_defs {
    router_id lb1
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

# VRRP instance
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    # Priority: MASTER=100, BACKUP=90
    priority 100
    authentication {
        auth_type PASS
        auth_pass secret123
    }
    virtual_ipaddress {
        192.168.1.100 dev eth0
    }
    # Track interface
    track_interface {
        eth0
    }
}

# Virtual server group
virtual_server_group web {
    192.168.1.100 80
    192.168.1.100 443
}

# Virtual server
virtual_server group web {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    # Health checking
    sorry_server 192.168.1.200 80
    real_server 192.168.1.10 80 {
        weight 1
        HTTP_GET {
            url {
                path /health
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.1.11 80 {
        weight 1
        HTTP_GET {
            url {
                path /health
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
Backup Keepalived Configuration
# /etc/keepalived/keepalived.conf on backup
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    authentication {
        auth_type PASS
        auth_pass secret123
    }
    virtual_ipaddress {
        192.168.1.100 dev eth0
    }
}
Health Check Methods
TCP Check:
real_server 192.168.1.10 80 {
    weight 1
    TCP_CHECK {
        connect_port 80
        connect_timeout 3
    }
}
HTTP Check:
real_server 192.168.1.10 80 {
    weight 1
    HTTP_GET {
        url {
            path /health
            status_code 200
        }
        connect_timeout 3
        nb_get_retry 3
        delay_before_retry 3
    }
}
SSL Health Check:
real_server 192.168.1.10 443 {
    weight 1
    SSL_GET {
        url {
            path /health
            status_code 200
        }
        connect_timeout 3
        nb_get_retry 3
        delay_before_retry 3
    }
}
Direct Routing Setup
Real Server Configuration
On each real server, configure the loopback interface:
# Add VIP to loopback (non-persistent)
ip addr add 192.168.1.100/32 dev lo
# Make persistent (RHEL/CentOS network-scripts style)
# /etc/sysconfig/network-scripts/ifcfg-lo:1
DEVICE=lo:1
IPADDR=192.168.1.100
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback-vip
ARP Suppression
Prevent real servers from responding to ARP for VIP:
# Disable ARP on loopback
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
# Disable ARP on primary interface
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
# Make persistent
# /etc/sysctl.conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
NAT Mode Setup
Director Configuration
# Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# Make persistent
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
# Configure IPVS NAT
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11 -m
Real Server Configuration
Configure real servers with the director as their default gateway:
# On each real server
ip route add default via 192.168.1.1
# where 192.168.1.1 is the director's internal IP
Scheduling Algorithms
Round Robin (rr)
ipvsadm -A -t 192.168.1.100:80 -s rr
Distributes requests sequentially across all servers.
Weighted Round Robin (wrr)
ipvsadm -A -t 192.168.1.100:80 -s wrr
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10 -g -w 3
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11 -g -w 1
Server with weight 3 gets 3 requests for every 1 to weight 1 server.
Least Connections (lc)
ipvsadm -A -t 192.168.1.100:80 -s lc
Routes to server with fewest active connections.
Weighted Least Connections (wlc)
ipvsadm -A -t 192.168.1.100:80 -s wlc
Routes based on (active_connections * 256 + inactive_connections) / weight.
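As a quick illustration of that formula (the connection counts are hypothetical, and this is a simplification of the scheduler, not its actual code), the per-server overhead can be compared with a few lines of awk; the server with the lowest value wins:

```shell
# Columns: name active_conns inactive_conns weight
printf '%s\n' \
  'rs1 150 10 2' \
  'rs2 80 5 1' |
awk '{
    o = ($2 * 256 + $3) / $4              # (active*256 + inactive) / weight
    if (best == "" || o < min) { min = o; best = $1 }
}
END { print best }'
# → rs1  (19205 vs 20485: the weight-2 server wins despite more connections)
```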
Source Hashing (sh)
ipvsadm -A -t 192.168.1.100:80 -s sh
Maintains session affinity based on source IP hash.
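A toy sketch of the idea behind source hashing (this is NOT the kernel's actual hash function, only an illustration of a stable client-to-server mapping):

```shell
# The same client IP always maps to the same backend out of three.
client=192.168.50.7
h=$(printf '%s' "$client" | cksum | cut -d' ' -f1)   # stable checksum of the IP string
case $((h % 3)) in
    0) echo 192.168.1.10 ;;
    1) echo 192.168.1.11 ;;
    2) echo 192.168.1.12 ;;
esac
```

Because the result depends only on the client address, repeated connections from one client land on the same real server, which is what gives sh its session affinity.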
Destination Hashing (dh)
ipvsadm -A -t 192.168.1.100:80 -s dh
Hashes destination IP for cache servers.
Monitoring and Troubleshooting
View Statistics
# Show statistics
ipvsadm -L -n --stats
# Show rate
ipvsadm -L -n --rate
# Show connection table
ipvsadm -L -c -n
Logging
# Route kernel warnings (including IPVS messages) to a dedicated file
# /etc/rsyslog.conf
kern.warning /var/log/ipvs.log
# Verbose IPVS debug output additionally requires a kernel built with
# CONFIG_IP_VS_DEBUG and net.ipv4.vs.debug_level set above 0
# View logs
tail -f /var/log/ipvs.log
Troubleshooting
# Check if modules are loaded
lsmod | grep ip_vs
# Verify IPVS rules
ipvsadm -L -n
# Check real server connectivity
ping 192.168.1.10
# Check service on real servers
curl http://192.168.1.10/health
# Check NAT translation
ipvsadm -L -n -c | head -20
# Debug keepalived (run in the foreground with detailed console logging)
keepalived -n -l -D
Performance Tuning
Kernel Parameters
# /etc/sysctl.conf
# IPVS tuning
# vs.conntrack = 1 keeps netfilter conntrack entries for IPVS connections
# (needed when iptables rules must see them; set to 0 for maximum throughput)
net.ipv4.vs.conntrack = 1
net.ipv4.vs.expire_nodest_conn = 1
net.ipv4.vs.expire_quiescent_template = 1
# Network tuning
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_max_syn_backlog = 8192
# Connection tracking table size (only relevant while vs.conntrack = 1)
net.netfilter.nf_conntrack_max = 1048576
Connection Table Size
# The connection hash table size is an ip_vs module parameter;
# ipvsadm --set adjusts timeouts, not the table size.
# /etc/modprobe.d/ip_vs.conf:
options ip_vs conn_tab_bits=20
# Check the current size (size=4096, i.e. 2^12, by default):
ipvsadm -L -n | grep "IP Virtual Server"
Scripts and Automation
Save/Restore Configuration
# Save current configuration
ipvsadm -S -n > /etc/sysconfig/ipvsadm
# Restore from file
ipvsadm -R < /etc/sysconfig/ipvsadm
# Or use systemd
systemctl enable ipvsadm
systemctl start ipvsadm
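On distributions that don't ship an ipvsadm service unit, a minimal restore-at-boot unit can be sketched like this (unit name and paths are illustrative):

```
# /etc/systemd/system/ipvs-restore.service
[Unit]
Description=Restore IPVS rules at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/sbin/ipvsadm -R < /etc/sysconfig/ipvsadm'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable ipvs-restore.service.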
Script: Check and Restart
#!/bin/bash
# /usr/local/bin/ipvs-check.sh
LOGFILE=/var/log/ipvs-health.log
check_services() {
    for VIP in 192.168.1.100:80 192.168.1.100:443; do
        # -F: match the VIP literally (dots are regex metacharacters)
        if ! ipvsadm -L -n | grep -qF "$VIP"; then
            echo "$(date): Virtual service $VIP is DOWN!" >> "$LOGFILE"
            systemctl restart ipvsadm
            return 1
        fi
    done
    echo "$(date): All services UP" >> "$LOGFILE"
}
check_services
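To run the check on a schedule, a crontab entry on the director is the simplest option (the script path matches the header comment above):

```
# m h dom mon dow  command -- run the health check every minute
* * * * * /usr/local/bin/ipvs-check.sh
```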
Best Practices
Security
- Use firewall rules to restrict access to VIP
- Enable logging for suspicious activity
- Keep kernel and ipvsadm updated
- Use VRRP authentication
- Consider IPsec for real server communication
Reliability
- Deploy IPVS in HA pair with keepalived
- Configure health checks on all real servers
- Use sorry server for maintenance pages
- Monitor connection counts and server health
- Set appropriate timeouts
Performance
- Use Direct Routing mode for maximum throughput
- Tune kernel parameters for high load
- Monitor and adjust connection table size
- Use appropriate scheduling algorithm
- Consider connection limits per real server
Monitoring
- Track active connections
- Monitor real server health
- Set up alerts for service failures
- Monitor bandwidth usage
- Track connection rate
Conclusion
IPVS provides the foundation for ultra-high-performance load balancing in Linux environments. Its kernel-level processing, minimal overhead, and tight integration with keepalived make it the technology of choice for demanding workloads requiring the lowest possible latency and highest throughput.
While IPVS lacks some of the Layer 7 features of user-space load balancers, its performance advantages make it ideal as the front-line load balancer, with L7 balancers handling application-level routing behind it. By understanding IPVS architecture, configuration options, and integration with keepalived, engineers can build robust, scalable load balancing infrastructure that meets the most demanding performance requirements.