Federated Learning Attacks
Federated learning systems face unique security challenges: training is distributed across clients the server does not control, and the server has limited visibility into client data and behavior.
Model Poisoning
Malicious clients submit poisoned model updates to corrupt the global model; a model replacement sketch follows the list below.
- Targeted poisoning attacks
- Backdoor injection
- Model replacement attacks
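A minimal sketch of a model replacement attack, assuming the server runs plain FedAvg with equal client weights; `fedavg` and `model_replacement_update` are illustrative names, and the scaling trick follows the analysis usually attributed to Bagdasaryan et al. (2020). By scaling its update by the number of clients, a single attacker cancels the averaging and lands the global model near its backdoored target.

```python
import numpy as np

def fedavg(updates):
    """Server-side FedAvg: average the client model updates."""
    return np.mean(updates, axis=0)

def model_replacement_update(global_weights, target_weights, n_clients):
    """Hypothetical attacker update that survives averaging.

    Scaling by n_clients cancels the 1/n factor of FedAvg, so the
    aggregated model lands (approximately) on the attacker's target.
    """
    return n_clients * (target_weights - global_weights) + global_weights

# Toy demo: 9 honest clients send small benign updates, 1 attacker
# sends a scaled update aimed at a backdoored target model.
rng = np.random.default_rng(0)
g = np.zeros(4)                            # current global weights
target = np.array([5.0, -3.0, 2.0, 1.0])  # attacker's desired model
honest = [g + 0.01 * rng.standard_normal(4) for _ in range(9)]
malicious = model_replacement_update(g, target, n_clients=10)

new_global = fedavg(honest + [malicious])
print(new_global)  # ~= target, despite 9 honest clients
```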
Byzantine Attacks
Compromised nodes send arbitrary malicious updates to disrupt training; a gradient-flipping sketch follows the list below.
- Random noise injection
- Gradient flipping
- Sybil attacks
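A sketch of the two simplest Byzantine behaviors above, gradient flipping and random noise injection, against a toy least-squares client; all function names and the scale factor are illustrative assumptions.

```python
import numpy as np

def honest_gradient(w, X, y):
    """Least-squares gradient for a toy linear client (illustrative)."""
    return 2 * X.T @ (X @ w - y) / len(y)

def byzantine_gradient(true_grad, mode, scale=10.0, rng=None):
    """Arbitrary malicious updates: flip the true gradient's sign
    (driving training backwards) or submit pure noise."""
    rng = rng or np.random.default_rng()
    if mode == "flip":
        return -scale * true_grad
    if mode == "noise":
        return scale * rng.standard_normal(true_grad.shape)
    raise ValueError(mode)

# One flipped gradient at scale 10 outweighs nine honest ones under
# naive averaging, pushing the aggregate against the descent direction.
rng = np.random.default_rng(1)
X, y = rng.standard_normal((32, 4)), rng.standard_normal(32)
w = np.zeros(4)
g = honest_gradient(w, X, y)
updates = [g] * 9 + [byzantine_gradient(g, "flip")]
print(np.mean(updates, axis=0) @ g)  # negative: average opposes descent
```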
Inference Attacks
Attackers infer private information about training data from model updates or gradients; a gradient inversion sketch follows the list below.
- Membership inference
- Property inference
- Gradient inversion
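As a concrete instance of gradient inversion, here is a compact PyTorch sketch in the style of "deep leakage from gradients" (Zhu et al., 2019): the attacker optimizes a dummy input and soft label until their gradient matches the one the victim shared. The tiny linear model, step count, and learning rate are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# Victim client computes a gradient on one private example.
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker optimizes a dummy input and soft label so that the
# resulting gradient matches the observed one.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1)
        * -torch.log_softmax(model(x_dummy), dim=-1))
    grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                create_graph=True)
    match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    match.backward()
    opt.step()

print(torch.dist(x_dummy.detach(), x_true))  # small => input ~recovered
```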
Defense Mechanisms
Robust Aggregation
Aggregation rules that bound the influence of outlier or colluding updates; a sketch follows the list below.
- Krum and Multi-Krum algorithms
- Trimmed mean aggregation
- Median-based methods
- FoolsGold defense
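Sketches of Krum (Blanchard et al., 2017) and the coordinate-wise trimmed mean and median rules named above, assuming each update arrives as a flattened NumPy vector; the trim ratio and the assumed Byzantine count `n_byzantine` are caller-supplied assumptions.

```python
import numpy as np

def krum(updates, n_byzantine):
    """Krum: return the single update closest (in squared L2) to its
    n - f - 2 nearest neighbors, where f is the assumed Byzantine count."""
    stacked = np.stack(updates)
    n = len(stacked)
    assert n > 2 * n_byzantine + 2, "Krum requires n >= 2f + 3"
    dists = np.sum((stacked[:, None] - stacked[None, :]) ** 2, axis=-1)
    m = n - n_byzantine - 2
    # Sort each row; column 0 is the zero self-distance, so score
    # each candidate by its m nearest non-self neighbors.
    scores = np.sort(dists, axis=1)[:, 1:m + 1].sum(axis=1)
    return stacked[np.argmin(scores)]

def trimmed_mean(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: drop the k largest and k smallest
    values in each coordinate before averaging (k = trim_ratio * n)."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    return stacked[k:len(updates) - k].mean(axis=0)

def coordinate_median(updates):
    """Coordinate-wise median, robust to a large malicious fraction."""
    return np.median(np.stack(updates), axis=0)
```

Against the scaled model-replacement update sketched earlier, all three rules either exclude the outlier entirely (Krum, trimmed mean) or cap its per-coordinate influence (median), which is why naive averaging is the main prerequisite for that attack.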
Privacy Protection
Techniques that limit what individual model updates reveal about client data; a clip-and-noise sketch follows the list below.
- Differential privacy mechanisms
- Secure multi-party computation
- Homomorphic encryption
- Gradient compression and noise
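A clip-and-noise sketch in the spirit of DP-SGD (Abadi et al., 2016), applied here per client update; `clip_norm` and `noise_multiplier` are illustrative values, and a real deployment would pair this with a privacy accountant to track the cumulative (epsilon, delta) budget.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip an update to a bounded L2 norm, then add Gaussian noise
    calibrated to that bound, so no single client's contribution
    dominates or is exactly recoverable."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=update.shape)
    return clipped + noise
```

Clipping bounds the sensitivity of the aggregate to any one client, and the noise scale is tied to that bound; this directly blunts the gradient inversion attack sketched above, since the shared update no longer exactly determines the private example.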