
Federated Learning: Privacy-Preserving Machine Learning 2026

Introduction

Machine learning traditionally required centralized data collection, raising significant privacy concerns. Federated learning (FL) transforms this paradigm by training models across decentralized data sources without moving the data itself. Originally pioneered by Google for keyboard prediction, federated learning has evolved into a fundamental approach for privacy-preserving AI in healthcare, finance, and edge devices.

In 2026, federated learning has matured from research prototypes to production systems powering applications from smartphone keyboards to hospital networks. This guide covers federated learning fundamentals, implementation patterns, and real-world applications.

Understanding Federated Learning

Core Concept

graph TB
    subgraph "Traditional ML"
        A[Devices] -->|Send Data| B[Central Server]
        B -->|Train Model| C[Global Model]
    end
    
    subgraph "Federated Learning"
        D[Device 1] -->|Local Training| E[Local Model 1]
        F[Device 2] -->|Local Training| G[Local Model 2]
        H[Device N] -->|Local Training| I[Local Model N]
        E -->|Only Weights| J[Aggregation Server]
        G -->|Only Weights| J
        I -->|Only Weights| J
        J -->|Updated Global Model| D
    end

How Federated Learning Works

class FederatedLearning:
    """
    Basic federated learning workflow.
    """
    
    def __init__(self, model, aggregation_fn='fedavg'):
        self.global_model = model
        self.aggregation_fn = aggregation_fn
    
    def federated_averaging(self, client_weights, client_counts):
        """
        FedAvg: Weighted average of client model weights.
        """
        total_samples = sum(client_counts)
        
        aggregated = {}
        for key in client_weights[0].keys():
            weighted_sum = sum(
                cw[key] * (cnt / total_samples) 
                for cw, cnt in zip(client_weights, client_counts)
            )
            aggregated[key] = weighted_sum
        
        return aggregated
    
    def round(self, selected_clients):
        """
        One round of federated learning.
        """
        # 1. Send global model to selected clients
        local_updates = []
        for client in selected_clients:
            # Client trains locally
            local_model = self.global_model.copy()
            local_model.train(client.local_data)
            
            # Only send back weights, not data
            local_updates.append({
                'weights': local_model.get_weights(),
                'samples': len(client.local_data)
            })
        
        # 2. Aggregate updates
        client_weights = [u['weights'] for u in local_updates]
        client_counts = [u['samples'] for u in local_updates]
        
        new_weights = self.federated_averaging(client_weights, client_counts)
        self.global_model.set_weights(new_weights)
        
        return len(selected_clients)
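The weighted-averaging step above can be exercised with plain dicts standing in for model weights; the two clients, their sample counts, and the weight values below are toy numbers for illustration only:

```python
# Minimal FedAvg check: dict "weights", no ML framework needed
def fedavg(client_weights, client_counts):
    total = sum(client_counts)
    return {
        key: sum(cw[key] * (cnt / total)
                 for cw, cnt in zip(client_weights, client_counts))
        for key in client_weights[0]
    }

# Two clients: one trained on 100 samples, one on 300
clients = [{"w": 1.0, "b": 0.0}, {"w": 2.0, "b": 4.0}]
counts = [100, 300]

global_weights = fedavg(clients, counts)
print(global_weights)  # {'w': 1.75, 'b': 3.0}
```

The client with more data pulls the global model toward its local weights, which is exactly the sample-count weighting FedAvg prescribes.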

Federated Learning Architectures

1. Horizontal Federated Learning

Data has same features, different samples:

class HorizontalFL:
    """
    Horizontal federated learning for same-feature datasets.
    Example: Multiple hospitals with patient records
    """
    
    def setup(self):
        """
        Each party has:
        - Same feature space
        - Different sample space
        """
        return {
            'parties': [
                'Hospital A (10k patients)',
                'Hospital B (15k patients)',
                'Hospital C (8k patients)'
            ],
            'features': ['age', 'blood_pressure', 'heart_rate', 'symptoms'],
            'challenge': 'Non-IID data distribution'
        }
    
    def handle_non_iid(self):
        """
        Strategies for non-identically distributed data.
        """
        return {
            'fedprox': 'Add proximal term to handle heterogeneity',
            'silo': 'Fine-tune last layers locally',
            'personalization': 'Local adaptation after global training'
        }
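The FedProx strategy listed above can be sketched in one dimension; the quadratic toy loss, the proximal coefficient mu, and the learning rate are illustrative assumptions, not the paper's exact setup:

```python
# FedProx sketch: local objective = loss(w) + (mu/2) * ||w - w_global||^2
def fedprox_step(w, w_global, grad_loss, mu=0.1, lr=0.5):
    # The proximal term pulls the local model back toward the global
    # model, limiting client drift on non-IID data
    grad = grad_loss(w) + mu * (w - w_global)
    return w - lr * grad

w_global = 0.0
w = w_global
grad_loss = lambda w: w - 5.0  # local optimum at w = 5

for _ in range(100):
    w = fedprox_step(w, w_global, grad_loss)

# Plain SGD would reach the local optimum 5.0; the proximal term
# settles at 5 / (1 + mu) ≈ 4.55, closer to the global model
print(round(w, 2))  # 4.55
```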

2. Vertical Federated Learning

Data has different features, same samples:

class VerticalFL:
    """
    Vertical federated learning for complementary features.
    Example: Bank + E-commerce sharing user insights
    """
    
    def __init__(self):
        self.party_a_features = ['income', 'credit_score', 'employment']
        self.party_b_features = ['browsing_history', 'purchase_frequency']
        self.party_c_features = ['location', 'demographics']
    
    def secure_aggregation(self):
        """
        Aggregate gradients without revealing individual features.
        """
        return {
            'homomorphic_encryption': 'Encrypt gradients before aggregation',
            'secret_sharing': 'Split parameters across parties',
            'trusted_execution_environment': 'Secure computation environment'
        }
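The vertical split can be illustrated with a toy linear model: each party computes a partial score over its own feature columns and only the scalar partial scores are combined. The feature names, weights, and values below are made up for the sketch:

```python
# Vertical FL sketch: raw features never leave their owning party
party_a = {"income": 50.0, "credit_score": 700.0}  # bank's features
party_b = {"purchase_frequency": 12.0}             # e-commerce features

weights_a = {"income": 0.01, "credit_score": 0.002}
weights_b = {"purchase_frequency": 0.05}

def partial_score(features, weights):
    # Each party evaluates only its own slice of the model
    return sum(features[k] * weights[k] for k in features)

# Only scalar partial scores are exchanged, not feature values
score = partial_score(party_a, weights_a) + partial_score(party_b, weights_b)
print(round(score, 2))  # 2.5
```

In production the partial scores themselves would also be protected (e.g. with homomorphic encryption, as in the table above); this sketch shows only the feature-split structure.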

3. Transfer Learning

class TransferFederated:
    """
    Combine federated learning with transfer learning.
    """
    
    def apply(self, source_domain, target_domains):
        """
        Pre-train on source, fine-tune on federated targets.
        """
        # Pre-training phase
        pretrained = train_on_source(source_domain)
        
        # Federated fine-tuning
        for round_num in range(num_rounds):
            selected = select_clients(target_domains)
            
            updates = []
            for client in selected:
                # Fine-tune last layers locally
                local_model = fine_tune_last_layers(
                    pretrained, 
                    client.data
                )
                updates.append(local_model.get_layer_weights('head'))
            
            # Aggregate only the head layer
            global_head = aggregate(updates)
            pretrained.set_layer_weights('head', global_head)
        
        return pretrained
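The head-only aggregation step can be checked with plain lists standing in for the head layer's weights (the values and the unweighted average are illustrative simplifications):

```python
# Aggregate only the classification head: elementwise average of
# each client's fine-tuned head weights (toy values)
def aggregate_heads(head_weights):
    n = len(head_weights)
    return [sum(ws) / n for ws in zip(*head_weights)]

client_heads = [
    [0.2, 0.4],   # client 1's fine-tuned head
    [0.6, 0.0],   # client 2's fine-tuned head
]
global_head = aggregate_heads(client_heads)
print(global_head)  # ≈ [0.4, 0.2]
```

Since only the head is exchanged, communication cost scales with the head's parameter count rather than the full backbone.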

Implementation Frameworks

PySyft and PyGrid

import syft as sy
import torch
import torch.nn as nn

class FederatedClient:
    """
    PySyft federated learning client.
    """
    
    def __init__(self, node_id, data, labels):
        self.node_id = node_id
        self.data = data
        self.labels = labels
        
        # Initialize model
        self.model = nn.Sequential(
            nn.Linear(784, 256),
            nn.ReLU(),
            nn.Linear(256, 10)
        )
    
    def train_local(self, epochs=5):
        """
        Train model on local data.
        """
        optimizer = torch.optim.Adam(self.model.parameters())
        criterion = nn.CrossEntropyLoss()
        
        for epoch in range(epochs):
            for batch_x, batch_y in zip(self.data, self.labels):
                optimizer.zero_grad()
                output = self.model(batch_x)
                loss = criterion(output, batch_y)
                loss.backward()
                optimizer.step()
        
        return self.model.state_dict()
    
    def send_model_to_gateway(self, gateway):
        """
        Send model to federated learning gateway.
        """
        self.model.send(gateway)
        return "Model sent successfully"


class FederatedServer:
    """
    PySyft federated learning server.
    """
    
    def __init__(self):
        self.models = {}
        self.client_counts = {}
    
    def receive_model_update(self, client_id, model_state, num_samples):
        """
        Receive model update from client.
        """
        self.models[client_id] = model_state
        self.client_counts[client_id] = num_samples
    
    def aggregate_fedavg(self):
        """
        FedAvg aggregation strategy.
        """
        total_samples = sum(self.client_counts.values())
        
        aggregated = {}
        for key in next(iter(self.models.values())).keys():
            weighted_sum = sum(
                self.models[cid][key] * (cnt / total_samples)
                for cid, cnt in self.client_counts.items()
            )
            aggregated[key] = weighted_sum
        
        return aggregated

Flower Framework

# Flower FL setup
pip install flwr

# Server implementation
import flwr as fl
from flwr.server.strategy import FedAvg

# Define strategy
strategy = FedAvg(
    fraction_fit=0.1,  # 10% of clients per round
    min_fit_clients=5,
    min_available_clients=10
)

# Start server (recent Flower releases pass num_rounds via ServerConfig)
fl.server.start_server(
    server_address="0.0.0.0:8080",
    strategy=strategy,
    config=fl.server.ServerConfig(num_rounds=100)
)

# Client implementation
import flwr as fl
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class FlowerClient(fl.client.NumPyClient):
    def __init__(self, model, train_loader, test_loader):
        self.model = model
        self.train_loader = train_loader
        self.test_loader = test_loader
    
    def get_parameters(self, config=None):
        return [val.cpu().numpy() for _, val in self.model.state_dict().items()]
    
    def set_parameters(self, parameters):
        params_dict = zip(self.model.state_dict().keys(), parameters)
        state_dict = {k: torch.tensor(v) for k, v in params_dict}
        self.model.load_state_dict(state_dict)
    
    def fit(self, parameters, config):
        self.set_parameters(parameters)
        train(self.model, self.train_loader, epochs=5)
        return self.get_parameters(), len(self.train_loader.dataset), {}
    
    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        loss, accuracy = evaluate(self.model, self.test_loader)
        return loss, len(self.test_loader.dataset), {"accuracy": accuracy}

# Start client
fl.client.start_numpy_client(
    server_address="localhost:8080",
    client=FlowerClient(model, train_loader, test_loader)
)

Privacy and Security

Differential Privacy

import torch

class DifferentialPrivacy:
    """
    Add noise to gradients for differential privacy.
    """
    
    def __init__(self, epsilon=1.0, delta=1e-5, max_grad_norm=1.0):
        self.epsilon = epsilon
        self.delta = delta
        self.max_grad_norm = max_grad_norm
    
    def clip_gradients(self, gradients):
        """
        Clip gradients to maximum norm.
        """
        total_norm = torch.sqrt(sum(g.norm() ** 2 for g in gradients))
        clip_coef = self.max_grad_norm / (total_norm + 1e-6)
        
        if clip_coef < 1:
            gradients = [g * clip_coef for g in gradients]
        
        return gradients
    
    def add_noise(self, gradients):
        """
        Add Gaussian noise for differential privacy.
        """
        sigma = self.calculate_sigma()
        
        noisy_grads = []
        for g in gradients:
            noise = torch.normal(0, sigma, g.shape)
            noisy_grads.append(g + noise)
        
        return noisy_grads
    
    def calculate_sigma(self):
        """
        Calculate noise scale from the privacy budget.
        """
        # Gaussian mechanism: noise grows with sensitivity (the clipping
        # norm) and shrinks as the privacy budget epsilon grows
        import math
        return (self.max_grad_norm *
                math.sqrt(2 * math.log(1.25 / self.delta)) / self.epsilon)
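The clip-then-noise pipeline can be verified framework-free with plain lists; the gradient values, clipping norm, and noise scale below are illustrative:

```python
import math
import random

# Clip a gradient vector to max_norm, then add Gaussian noise (DP-SGD style)
def clip_and_noise(grad, max_norm=1.0, sigma=0.1, rng=random.Random(0)):
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    clipped = [g * scale for g in grad]       # norm now <= max_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

grad = [3.0, 4.0]            # norm 5 -> clipped down to norm 1
noisy = clip_and_noise(grad)
print(noisy)
```

Clipping bounds each client's influence (the sensitivity), which is what makes the subsequent noise scale meaningful for the privacy accounting.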

Secure Aggregation

class SecureAggregation:
    """
    Secure aggregation using secret sharing.
    """
    
    def __init__(self, threshold=3, num_parties=5):
        self.threshold = threshold
        self.num_parties = num_parties
    
    def share_secret(self, value, party_ids):
        """
        Additively split value into one share per party.
        (All shares are needed to reconstruct; threshold recovery
        would require Shamir-style polynomial shares.)
        """
        import random
        
        shares = []
        for _ in range(len(party_ids) - 1):
            shares.append(random.randint(0, value))
        
        # Last share ensures the shares sum to the original value
        shares.append(value - sum(shares))
        
        return {pid: s for pid, s in zip(party_ids, shares)}
    
    def aggregate_shares(self, shares):
        """
        Reconstruct aggregate without seeing individual values.
        """
        # Each party adds its share to aggregate
        return sum(shares.values())
    
    def masked_model_update(self, original_update, mask):
        """
        Mask an update before sending it to the server.
        Masks are constructed so that they cancel when the
        server sums the masked updates from all parties.
        """
        # Client adds its random mask before upload
        return original_update + mask
    
    def unmask_aggregate(self, masked_sum, all_masks):
        """
        Server side: subtract the combined masks to reveal only
        the aggregate, never any individual update.
        """
        return masked_sum - sum(all_masks.values())
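The mask-cancellation trick can be demonstrated with scalar updates (values are toy): client i adds +m_ij for each partner j > i and subtracts m_ij for each partner j < i, so every pairwise mask cancels in the server's sum:

```python
import random

# Pairwise masking: masks cancel in the aggregate, hiding individual updates
rng = random.Random(42)
updates = {1: 10.0, 2: 20.0, 3: 30.0}
pair_masks = {(i, j): rng.uniform(-5, 5)
              for i in updates for j in updates if i < j}

def masked(client, value):
    # Add masks shared with higher-id partners, subtract those with lower-id
    mask = sum(m for (i, j), m in pair_masks.items() if i == client)
    mask -= sum(m for (i, j), m in pair_masks.items() if j == client)
    return value + mask

# Server sees only masked values, yet their sum is the true aggregate
server_sum = sum(masked(c, v) for c, v in updates.items())
print(round(server_sum, 6))  # 60.0
```

Real protocols derive the pairwise masks from key agreement and handle dropouts with secret-shared mask recovery; this sketch shows only the cancellation property.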

Real-World Applications

1. Healthcare

class HealthcareFL:
    """
    Federated learning for medical imaging.
    """
    
    def __init__(self, hospitals):
        self.hospitals = hospitals  # List of hospital data sources
    
    def train_xray_model(self):
        """
        Train pneumonia detection model across hospitals.
        """
        # Each hospital trains locally
        local_models = []
        for hospital in self.hospitals:
            model = load_pretrained('resnet50')
            model = train_local(model, hospital.xray_data)
            local_models.append(model)
        
        # Aggregate to global model
        global_model = aggregate_fedavg(local_models)
        
        return global_model
    
    def benefits(self):
        """
        Why federated for healthcare.
        """
        return {
            'patient_privacy': 'Data never leaves hospital',
            'regulatory': 'HIPAA compliant by design',
            'data_scale': 'Leverage thousands of cases',
            'collaboration': 'Multiple institutions can collaborate'
        }

2. Financial Services

class FinancialFL:
    """
    Federated learning for fraud detection.
    """
    
    def __init__(self, banks):
        self.banks = banks
    
    def train_fraud_model(self):
        """
        Train fraud detection across banks without sharing transactions.
        """
        return {
            'use_case': 'Detect new fraud patterns',
            'challenge': 'Highly sensitive financial data',
            'benefit': 'Global fraud intelligence without data sharing',
            'results': '30% improvement in fraud detection'
        }

3. Edge Devices

import tensorflow as tf

class MobileFL:
    """
    Federated learning for mobile devices.
    """
    
    def __init__(self):
        self.device_model = None
    
    def next_word_prediction(self):
        """
        Google Keyboard-style next word prediction.
        """
        return {
            'data': 'User typing patterns',
            'local_training': 'Train on device overnight',
            'aggregation': 'Weekly model updates',
            'privacy': 'Typing history never leaves device'
        }
    
    def on_device_training(self):
        """
        Prepare a model for on-device use with TensorFlow Lite.
        Assumes self.device_model holds a trained Keras model.
        """
        # Convert the trained model and quantize for efficiency
        converter = tf.lite.TFLiteConverter.from_keras_model(self.device_model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        tflite_model = converter.convert()
        
        # Load for on-device inference
        interpreter = tf.lite.Interpreter(model_content=tflite_model)
        interpreter.allocate_tensors()
        
        return tflite_model

Advanced Techniques

1. Personalized Federated Learning

class PersonalizedFL:
    """
    Personalization in federated learning.
    """
    
    def fedavg_plus(self, global_model, local_data):
        """
        Fine-tune global model locally.
        """
        # Start with global model
        local_model = copy.deepcopy(global_model)
        
        # Few epochs of local training
        local_model.train(local_data, epochs=5)
        
        return local_model
    
    def meta_learning(self, global_model, support_sets):
        """
        MAML-based personalization.
        """
        # Train on diverse support sets
        meta_model = train_maml(global_model, support_sets)
        
        # Fast adaptation to new clients
        personalized = adapt_maml(meta_model, client_data)
        
        return personalized

2. Asynchronous FL

class AsyncFL:
    """
    Asynchronous federated learning.
    """
    
    def __init__(self, staleness_threshold=3):
        self.staleness_threshold = staleness_threshold
        self.global_version = 0
        self.pending_updates = []
    
    def async_aggregate(self, client_update):
        """
        Aggregate updates asynchronously.
        """
        staleness = self.global_version - client_update.version
        
        if staleness > self.staleness_threshold:
            # Drop stale updates
            return None
        
        # Apply staleness-aware weighting
        weight = self.get_staleness_weight(staleness)
        
        return self.accumulate(client_update, weight)
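The get_staleness_weight step above is left abstract; one common choice is polynomial decay (the exponent below is an illustrative assumption, and several variants appear in the literature):

```python
# Polynomial staleness decay: newer updates count more
def staleness_weight(staleness, alpha=0.5):
    return (1.0 + staleness) ** -alpha

print(staleness_weight(0))  # 1.0 — fresh update keeps full weight
print(staleness_weight(3))  # 0.5 — a 3-versions-stale update is halved
```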

Challenges and Solutions

Communication Efficiency

| Challenge         | Solution                        |
|-------------------|---------------------------------|
| Large model sizes | Model compression, quantization |
| Limited bandwidth | Gradient sparsification         |
| Slow connections  | Asynchronous aggregation        |
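The gradient sparsification row can be sketched as top-k selection: transmit only the k largest-magnitude entries as index/value pairs and treat the rest as zero (the gradient values and k are illustrative):

```python
# Top-k sparsification: send only the k largest-magnitude gradient entries
def sparsify_topk(grad, k):
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in sorted(top)}

grad = [0.1, -2.0, 0.05, 1.5, -0.3]
sparse = sparsify_topk(grad, k=2)
print(sparse)  # {1: -2.0, 3: 1.5} — 60% fewer values to transmit
```

Practical schemes usually accumulate the dropped residuals locally so that small gradients are eventually transmitted rather than lost.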

Statistical Heterogeneity

| Challenge       | Solution                         |
|-----------------|----------------------------------|
| Non-IID data    | FedProx, MOCHA                   |
| Unbalanced data | Weighted aggregation             |
| Client drift    | Momentum, adaptive learning rate |

Tools and Frameworks

| Framework            | Language | Use Case   |
|----------------------|----------|------------|
| Flower               | Python   | General FL |
| PySyft               | Python   | Research   |
| TensorFlow Federated | Python   | Production |
| PyTorch FL           | Python   | Research   |
| NVIDIA FLARE         | Python   | Healthcare |
| FATE                 | Python   | Enterprise |

Future Trends

  1. Cross-Silo FL: Enterprise collaboration
  2. Vertical FL: Feature sharing across organizations
  3. Differential Privacy: Stricter privacy guarantees
  4. Quantum FL: Quantum-secured aggregation
  5. Edge-Cloud Hybrid: Adaptive computation

Conclusion

Federated learning represents a fundamental shift in how we approach machine learning, enabling AI systems that learn from distributed data while respecting privacy. In 2026, federated learning has moved beyond research into production deployments across healthcare, finance, and mobile devices.

Organizations handling sensitive data should evaluate federated learning as a pathway to collaborative AI without compromising privacy. The combination of privacy guarantees, regulatory compliance, and improved model performance makes FL an essential technology for the modern AI landscape.
