Poisoning Attacks in Federated Learning: Detection Guide
Poisoning attacks are among the most serious threats to federated learning systems: because the central server never inspects raw client data, malicious participants can corrupt the shared model through targeted data poisoning or by manipulating the model updates they submit. This guide covers attack vectors, detection algorithms, and Byzantine-resilient aggregation methods.
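
To make the idea of Byzantine-resilient aggregation concrete before the detailed treatment later in the guide, the sketch below shows one of the simplest such aggregators, the coordinate-wise median. This is a minimal illustrative example, not this guide's specific method; the function name `coordinate_wise_median` and the toy client values are assumptions chosen for demonstration.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client model updates via the per-coordinate median.

    A simple Byzantine-resilient alternative to plain averaging: a
    minority of arbitrarily corrupted updates cannot pull the median
    far, since each output coordinate is the middle value across clients.

    updates: list of 1-D numpy arrays, one flattened update per client.
    """
    stacked = np.stack(updates)        # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)  # robust per-coordinate aggregate

# Toy usage: three honest clients near the true update, one poisoned client.
honest = [np.array([0.9, 1.1, 1.0]),
          np.array([1.0, 0.9, 1.1]),
          np.array([1.1, 1.0, 0.9])]
poisoned = [np.array([50.0, -50.0, 50.0])]  # attacker sends extreme values
print(coordinate_wise_median(honest + poisoned))  # stays close to [1, 1, 1]
```

With plain averaging, the single poisoned client above would shift every coordinate by more than 12; the median keeps the aggregate within about 0.05 of the honest consensus, which is the core intuition behind the more sophisticated defenses discussed later.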
