Introduction
When choosing a file system for Linux production environments, ext4 and XFS dominate enterprise deployments. Both have proven themselves over years of production use, with each offering distinct advantages for different workloads.
ext4, the fourth extended file system, continues as the default for many distributions with its reliability and broad compatibility. XFS, originally from SGI’s IRIX and now in the Linux kernel, excels in high-performance scenarios with large files and high I/O throughput.
This guide compares ext4 and XFS, walks through administration and performance tuning, and helps you choose the right file system for your workload.
ext4: The Standard Choice
Overview
ext4 is the evolution of the extended file system family, introducing modern features while maintaining backward compatibility with ext2 and ext3. It has been the default file system for most Linux distributions and powers countless production servers.
Key features:
- Journal checksumming
- Extents for efficient storage allocation
- Support for volumes up to 1 EB
- Delayed allocation for better performance
- Online defragmentation
- Persistent preallocation
Creating ext4 Filesystems
# Basic creation
sudo mkfs.ext4 /dev/sdb1
# With custom settings
sudo mkfs.ext4 -L storage -m 0 -T largefile4 /dev/sdb1
# Key mkfs options:
# -L volume label
# -m reserved blocks percentage (0 for data, 5 for boot)
# -T usage type (largefile, largefile4, news)
# -E extended options
# -b block size (1024, 2048, 4096)
Extended Options
# Extended options
sudo mkfs.ext4 -E stride=128,stripe-width=128 /dev/sdb1
# For RAID:
# stride = RAID chunk size / filesystem block size (in blocks)
# stripe-width = stride * number of data disks (exclude parity disks)
# Verify options
sudo tune2fs -l /dev/sdb1
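The stride and stripe-width arithmetic is easy to get backwards; a small sketch of the calculation, using assumed example values (64 KiB chunk, 4 KiB block, 4 data disks):

```shell
#!/bin/sh
# Compute ext4 stride/stripe-width from RAID geometry.
# The example values below are assumptions, not measurements.
chunk_kib=64      # RAID chunk (per-disk stripe unit) in KiB
block_kib=4       # ext4 block size in KiB
data_disks=4      # disks carrying data (exclude parity)

stride=$((chunk_kib / block_kib))       # filesystem blocks per chunk
stripe_width=$((stride * data_disks))   # filesystem blocks per full stripe

echo "mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/sdXN"
```

For the values above this prints stride=16, stripe-width=64.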
Mounting ext4
# Basic mount
sudo mount /dev/sdb1 /data
# With options
sudo mount -o defaults,noatime,nodiratime,errors=remount-ro /dev/sdb1 /data
# Common mount options:
# defaults - rw,suid,dev,exec,auto,nouser,async
# noatime - don't update access time
# nodiratime - don't update dir access time
# relatime - update atime if older than mtime/ctime
# barrier - enable journaling barriers
# data=journal|ordered|writeback
# commit=n - sync every n seconds
# errors=remount-ro|panic|continue
/etc/fstab Configuration
# /etc/fstab
UUID=12345678-1234-1234-1234-123456789abc /data ext4 defaults,noatime 0 2
LABEL=storage /mnt/storage ext4 noatime,nodiratime,errors=remount-ro 0 2
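After mounting, /proc/mounts is authoritative about which options actually took effect; a small helper sketch (the table argument exists only so it can be tried against a fixture file):

```shell
# Print the active mount options for a mount point from a
# /proc/mounts-style table (fields: device, mountpoint, fstype, options).
mount_opts() {
    point=$1
    table=${2:-/proc/mounts}
    awk -v p="$point" '$2 == p { print $4 }' "$table"
}

# Usage: mount_opts /data    # e.g. rw,noatime,nodiratime,errors=remount-ro
```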
ext4 Administration
Checking filesystem:
# Check filesystem
sudo e2fsck -fv /dev/sdb1
# Safe check (non-destructive)
sudo e2fsck -n /dev/sdb1
# Force full check, answering yes to all prompts (the journal is replayed automatically)
sudo e2fsck -fy /dev/sdb1
Tuning parameters:
# View current settings
sudo tune2fs -l /dev/sdb1
# Set reserved space (0% for data drives)
sudo tune2fs -m 0 /dev/sdb1
# Set max mount count
sudo tune2fs -c 30 /dev/sdb1
# Set check interval
sudo tune2fs -i 30d /dev/sdb1
# Disable journal (unmounted filesystem only; re-enable with -O has_journal)
sudo tune2fs -O ^has_journal /dev/sdb1
# Set label
sudo tune2fs -L newlabel /dev/sdb1
Online defragmentation:
# Install defrag tool
sudo apt install e2fsprogs
# Check fragmentation
sudo e4defrag -c /data
# Defragment file
sudo e4defrag /data/file.txt
# Defragment directory
sudo e4defrag /data
# Defragment entire filesystem
sudo e4defrag /dev/sdb1
XFS: High Performance Choice
Overview
XFS was developed by SGI for their IRIX operating system and ported to Linux in 2001. It excels in environments requiring high throughput for large files and concurrent I/O operations.
Key features:
- B+ tree indexing for directories
- Efficient allocation groups
- Scalable to multi-exabyte filesystems
- Delayed allocation
- Space preallocation
- Quota management
- Extended attributes
Creating XFS Filesystems
# Basic creation
sudo mkfs.xfs /dev/sdb1
# With custom settings
sudo mkfs.xfs -L storage -d su=64k,sw=4 -i size=512 /dev/sdb1
# Key mkfs options:
# -L volume label
# -d data section options
# -i inode options
# -m metadata options
Optimized Creation for RAID
# For RAID-5 with 64k chunk, 8 data disks
sudo mkfs.xfs -d su=64k,sw=8 /dev/sdb1
# For RAID-10 with 64k chunk, 4 data-bearing disks
sudo mkfs.xfs -d su=64k,sw=4 /dev/sdb1
# With large allocation groups
sudo mkfs.xfs -d agcount=32 /dev/sdb1
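Getting su/sw wrong silently costs performance; a sketch of the derivation (disk counts and chunk size are assumed examples):

```shell
#!/bin/sh
# Derive mkfs.xfs su/sw from RAID geometry: su is the per-disk chunk
# size, sw the number of data-bearing disks (total minus parity).
# Example values are assumptions.
total_disks=6
parity_disks=2    # 1 for RAID-5, 2 for RAID-6, 0 for RAID-0
chunk_kib=64

sw=$((total_disks - parity_disks))
echo "mkfs.xfs -d su=${chunk_kib}k,sw=${sw} /dev/sdXN"
```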
Mounting XFS
# Basic mount
sudo mount /dev/sdb1 /data
# With options
sudo mount -o noatime,nodiratime,logbufs=8,logdev=/dev/sdc1 /dev/sdb1 /data
# Common mount options:
# noatime, nodiratime
# noquota
# logbufs=value (2-8)
# logdev=device (external log)
# wsync
# allocsize=64k (speculative preallocation size)
/etc/fstab Configuration
# /etc/fstab
UUID=12345678-1234-1234-1234-123456789abc /data xfs defaults,noatime,allocsize=64m 0 0
XFS Administration
Checking filesystem:
# Repair (unmounted)
sudo xfs_repair /dev/sdb1
# Zero the log when it cannot be replayed (last resort; may lose recent metadata)
sudo xfs_repair -L /dev/sdb1
# Check metadata
sudo xfs_db -r /dev/sdb1
> help
> super
> quit
Quota management:
# Enable quotas in /etc/fstab
# /data xfs usrquota,grpquota 0 0
# Mount with quotas
sudo mount -o usrquota,grpquota /dev/sdb1 /data
# Set quotas
sudo xfs_quota -x -c 'limit bsoft=10g bhard=12g user1' /data
sudo xfs_quota -x -c 'limit isoft=1000 ihard=2000 user1' /data
# Report quotas
sudo xfs_quota -x -c 'report -h' /data
# Initialize project quotas (project must be defined in /etc/projects and /etc/projid)
sudo xfs_quota -x -c 'project -s projectname' /data
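Project quotas require the project to be declared before xfs_quota can enforce it; a sketch of the two standard mapping files, reusing the example name and ID (the directory path is an assumption):

```
# /etc/projects - maps project ID to directory
100:/data/projectname

# /etc/projid - maps project name to project ID
projectname:100
```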
Defragmentation:
# Check fragmentation
sudo xfs_db -r /dev/sdb1
> frag
> quit
# Defragment
sudo xfs_fsr /dev/sdb1
sudo xfs_fsr -v /data
Freeze/thaw for backups:
# Freeze filesystem
sudo xfs_freeze -f /data
# Unfreeze
sudo xfs_freeze -u /data
# For LVM snapshots
lvcreate -L10G -s -n snap /dev/vg0/data
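The freeze/snapshot/thaw sequence is order-sensitive, and the window between freeze and thaw should be as short as possible; a dry-run sketch (volume names, sizes, and the DRY_RUN switch are illustrative assumptions):

```shell
#!/bin/sh
# Consistent XFS snapshot: freeze, snapshot the LV, thaw immediately.
# DRY_RUN=1 (the default here) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
MNT=/data
LV=/dev/vg0/data

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run xfs_freeze -f "$MNT"                   # block new writes
run lvcreate -L10G -s -n data-snap "$LV"   # point-in-time snapshot
run xfs_freeze -u "$MNT"                   # resume writes immediately
```

Modern kernels freeze a filesystem automatically while lvcreate -s suspends the device, so the explicit freeze is mostly belt and braces.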
Feature Comparison
ext4 vs XFS
| Feature | ext4 | XFS |
|---|---|---|
| Max Filesystem Size | 1 EB | 8 EB |
| Max File Size | 16 TB | 8 EB |
| Max File Count | 4 billion | Dynamic (allocated on demand) |
| Journal | Yes (checksummed) | Yes |
| Extents | Yes | Yes |
| Delayed Allocation | Yes | Yes |
| Online Resize | Yes | Yes |
| ACLs | Yes | Yes |
| Quotas | Yes (extended) | Yes (native) |
| Defragmentation | Yes (online) | Yes |
| Snapshot | Via LVM | Via LVM |
| Allocation | Block | Extent + delay |
| Large Dir Index | HTree | B+ Tree |
When to Choose ext4
- Desktop/laptop systems
- Small to medium databases
- General-purpose servers
- Boot filesystems
- Compatibility requirements
- Systems with limited RAM
- When maximum compatibility needed
When to Choose XFS
- Large files (>1GB)
- High I/O throughput workloads
- Media streaming
- Large databases
- Parallel I/O (multiple threads)
- Filesystems > 100TB
- When scalability is critical
Performance Tuning
ext4 Optimization
# Mount options for performance
# /etc/fstab
/dev/sdb1 /data ext4 noatime,nodiratime,errors=remount-ro,data=writeback 0 2
# Writeback mode (faster, less safe)
# journal mode (slower, safest)
# ordered mode (default, balanced)
# Tunable parameters
echo 10 > /proc/sys/vm/dirty_ratio
echo 5 > /proc/sys/vm/dirty_background_ratio
echo 3000 > /proc/sys/vm/dirty_writeback_centisecs
# For databases - nobarrier only with a battery-backed write cache
# (deprecated in newer kernels; keep barriers unless the cache is protected)
# /etc/fstab
/dev/sdb1 /data ext4 noatime,nodiratime,nobarrier,data=writeback 0 2
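The echo commands into /proc/sys do not survive a reboot; persisting them is conventionally done with a sysctl drop-in (the filename is an arbitrary choice):

```
# /etc/sysctl.d/90-writeback.conf
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
vm.dirty_writeback_centisecs = 3000
```

Apply without rebooting via sudo sysctl --system.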
XFS Optimization
# Mount options for performance
# /etc/fstab
/dev/sdb1 /data xfs noatime,nodiratime,allocsize=64m,logbufs=8 0 0
# Allocation group tuning
# For large filesystems with many CPUs
sudo mkfs.xfs -d agcount=32 /dev/sdb1
# For parallel I/O workloads
sudo mkfs.xfs -d agcount=64 /dev/sdb1
# Inode size (for metadata-heavy workloads)
sudo mkfs.xfs -i size=512 /dev/sdb1
LVM Integration
Creating Logical Volumes
# Create physical volume
sudo pvcreate /dev/sdb /dev/sdc
# Create volume group
sudo vgcreate vg0 /dev/sdb /dev/sdc
# Create ext4 logical volume
sudo lvcreate -L 100G -n data vg0
sudo mkfs.ext4 /dev/vg0/data
# Create XFS logical volume
sudo lvcreate -L 100G -n data vg0
sudo mkfs.xfs /dev/vg0/data
Resizing
# Extend logical volume
sudo lvextend -L +50G /dev/vg0/data
# Resize filesystem (ext4)
sudo resize2fs /dev/vg0/data
# Resize filesystem (XFS: grow only, run on the mounted filesystem)
sudo xfs_growfs /data
# Shrink (ext4 only, unmounted; check first, shrink the fs before the LV)
sudo e2fsck -f /dev/vg0/data
sudo resize2fs /dev/vg0/data 100G
sudo lvreduce -L 100G /dev/vg0/data
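Because the grow step differs between the two filesystems (device argument for ext4, mount point for XFS), a small dispatcher sketch can prevent the classic mix-up (names are placeholders; it only prints the command it would run):

```shell
# Pick the right resize tool for the filesystem type (print-only sketch).
grow_fs() {
    fstype=$1; dev=$2; mnt=$3
    case $fstype in
        ext4) echo "resize2fs $dev" ;;     # ext4 resizes via the block device
        xfs)  echo "xfs_growfs $mnt" ;;    # XFS resizes via the mount point
        *)    echo "unsupported filesystem: $fstype" >&2; return 1 ;;
    esac
}

grow_fs ext4 /dev/vg0/data /data   # → resize2fs /dev/vg0/data
grow_fs xfs  /dev/vg0/data /data   # → xfs_growfs /data
```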
Snapshots
# Create snapshot (for backup)
sudo lvcreate -L 20G -s -n data-snap /dev/vg0/data
# Mount snapshot
sudo mount /dev/vg0/data-snap /snap
# Remove snapshot
sudo lvremove /dev/vg0/data-snap
Monitoring and Maintenance
ext4 Monitoring
# Check disk usage
df -h /data
# Check inode usage
df -i /data
# View filesystem details
sudo dumpe2fs -h /dev/sdb1
# View journal parameters
sudo dumpe2fs /dev/sdb1 | grep -i journal
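The df checks above are easy to wire into alerting; a minimal threshold sketch (mount point and limit are parameters; the WARN path returns non-zero so cron or a monitor can act on it):

```shell
# Warn when a filesystem's usage crosses a percentage threshold.
check_usage() {
    mnt=$1
    limit=${2:-80}
    pct=$(df -P "$mnt" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$pct" -ge "$limit" ]; then
        echo "WARN: $mnt at ${pct}% (limit ${limit}%)"
        return 1
    fi
    echo "OK: $mnt at ${pct}%"
}

# Usage: check_usage /data 80    # e.g. "OK: /data at 42%"
```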
XFS Monitoring
# Check disk usage
df -h /data
# Detailed space info
sudo xfs_info /data
# Filesystem statistics
sudo xfs_db -r -c 'sb 0' -c 'p' /dev/sdb1
# Quota reporting
sudo xfs_quota -x -c 'df -h' /data
Health Checks
# ext4 dry-run check (unmount first; -n makes no changes)
sudo fsck.ext4 -fn /dev/sdb1
# XFS check (repair if needed)
sudo xfs_repair -n /dev/sdb1 # dry run
sudo xfs_repair /dev/sdb1 # actual repair
# SMART monitoring
sudo smartctl -a /dev/sdb
Backup Strategies
Filesystem Imaging
# Create image (with compression)
sudo dd if=/dev/sdb1 | gzip > backup.img.gz
# Create image with progress
sudo pv /dev/sdb1 | gzip > backup.img.gz
# Restore (dd needs the elevated privileges, not gzip)
gzip -dc backup.img.gz | sudo dd of=/dev/sdb1
Using rsync
# Incremental backup
sudo rsync -avh --progress /data/ /backup/data/
# With deletion (mirror)
sudo rsync -avh --delete --progress /data/ /backup/data/
# Network backup
sudo rsync -avz /data/ user@backup-server:/backup/data/
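For rotating backups, rsync's --link-dest flag hard-links unchanged files against the previous run so each snapshot costs only the changed data; the mechanism can be sketched with plain GNU cp (directory layout is an assumption):

```shell
# Rotating hard-link snapshots: each dated copy shares unchanged files
# with the previous one, so only changed files consume new space.
backup_rotate() {
    src=$1; dest=$2; stamp=$3
    last=$(ls -1 "$dest" 2>/dev/null | tail -n 1)
    mkdir -p "$dest/$stamp"
    if [ -n "$last" ] && [ "$last" != "$stamp" ]; then
        cp -al "$dest/$last/." "$dest/$stamp/"   # hard-link the previous run
    fi
    # -u skips files whose timestamps haven't changed (links survive);
    # --remove-destination breaks the link before replacing changed files
    cp -au --remove-destination "$src/." "$dest/$stamp/"
}
```

In production, rsync -a --delete --link-dest=../PREVIOUS src/ dest/STAMP/ does the same job more robustly and also removes files deleted from the source.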
LVM Snapshots for Backups
# Create snapshot
sudo lvcreate -L 50G -s -n backup-snap /dev/vg0/data
# Mount and backup
sudo mount /dev/vg0/backup-snap /snap
sudo rsync -av /snap/ /backup/
# Remove snapshot
sudo umount /snap
sudo lvremove /dev/vg0/backup-snap
Troubleshooting
ext4 Issues
Writes fail with "no space" but df shows free space:
# Check reserved blocks
sudo tune2fs -l /dev/sdb1 | grep Reserved
# Remove reserved blocks if data drive
sudo tune2fs -m 0 /dev/sdb1
Out of inodes:
# Check inode usage
df -i /data
# Inode count is fixed at mkfs time; back up, then recreate with more inodes
sudo mkfs.ext4 -N 100000000 /dev/sdb1
Journal recovery:
# The journal is replayed automatically; force a full check afterwards
sudo e2fsck -fy /dev/sdb1
XFS Issues
Filesystem full:
# Check allocation groups
sudo xfs_db -r -c 'sb 0' -c 'p' /dev/sdb1 | grep ag
# Check free space per group
sudo xfs_db -r -c 'agf 0' -c 'p' /dev/sdb1 | grep free
Corrupted filesystem:
# Unmount and repair; add -L only if the log cannot be replayed (it discards the log)
sudo umount /data
sudo xfs_repair /dev/sdb1
sudo mount /dev/sdb1 /data
Stuck processes:
# Check the kernel log for hung-task reports involving XFS I/O
sudo dmesg | grep -i 'hung task'
Migration Between Filesystems
Converting ext3 to ext4
# Unmount
sudo umount /dev/sdb1
# Enable ext4 features
sudo tune2fs -O extents,uninit_bg,dir_index /dev/sdb1
# Check filesystem
sudo e2fsck -fy /dev/sdb1
# Remount
sudo mount -t ext4 /dev/sdb1 /data
Migrating ext4 to XFS
# Backup data
sudo rsync -av /data/ /backup/
# Recreate as XFS
sudo umount /dev/sdb1
sudo mkfs.xfs -L data /dev/sdb1
# Restore
sudo mount /dev/sdb1 /data
sudo rsync -av /backup/ /data/
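After a copy-based migration it is worth proving the restored tree matches the backup, not just that rsync exited zero; a checksum-comparison sketch (paths are examples; uses GNU find/sort/sha256sum):

```shell
# Compare two directory trees by per-file SHA-256 checksums.
verify_tree() {
    a=$1; b=$2
    suma=$(mktemp); sumb=$(mktemp)
    ( cd "$a" && find . -type f -print0 | sort -z | xargs -0r sha256sum ) > "$suma"
    ( cd "$b" && find . -type f -print0 | sort -z | xargs -0r sha256sum ) > "$sumb"
    if diff -q "$suma" "$sumb" > /dev/null; then
        rc=0; msg="trees match"
    else
        rc=1; msg="trees differ"
    fi
    rm -f "$suma" "$sumb"
    echo "$msg"
    return "$rc"
}

# Usage: verify_tree /backup /data
```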
Best Practices
General
- Always use filesystem checks before major operations
- Monitor disk space with alerting
- Plan capacity with 20% headroom
- Use LVM for flexibility
- Test backup/restore procedures
- Enable SMART monitoring on disks
ext4
- Use for boot filesystems
- Keep the default 5% reserve on system filesystems; set -m 0 on dedicated data drives
- Consider noatime for performance
- Use barriers for data integrity
- Regular defragmentation for heavy writes
XFS
- Choose for large files/filesystems
- Use for databases and media
- Allocate enough AGs for parallelism
- Consider external log for high I/O
- Use quota enforcement for multi-user
Conclusion
Both ext4 and XFS are mature, production-ready file systems that serve Linux environments well. ext4 provides maximum compatibility and simplicity, making it ideal for most use cases. XFS excels in high-throughput scenarios with large files and scalable requirements.
Your choice should consider workload characteristics, filesystem size, and performance requirements. Many deployments use both - ext4 for boot and system partitions, XFS for data and applications.