Federated Learning (FL) allows organizations to train machine learning models collectively without sharing raw data, but it is vulnerable to model poisoning threats such as Byzantine and backdoor attacks.
A new defense called FedTruth has been proposed to counter malicious model updates in FL by estimating a 'ground-truth model update' without requiring a benign root dataset or assumptions about the data distribution.
FedTruth considers the contributions of all benign clients and assigns dynamic aggregation weights that reduce the impact of poisoned model updates, making it effective against Byzantine and backdoor attacks in large-scale FL systems.
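The idea behind dynamic aggregation weights can be illustrated with a small truth-discovery-style sketch: updates that drift far from the current aggregate estimate receive smaller weights on the next iteration. The function name `fedtruth_aggregate`, the inverse-distance re-weighting rule, and the toy data below are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def fedtruth_aggregate(updates, num_iters=10, eps=1e-12):
    """Illustrative truth-discovery-style aggregation (hypothetical sketch).

    updates: list of 1-D numpy arrays, one flattened model update per client.
    Returns an estimated 'ground-truth' update and the final per-client weights.
    """
    updates = np.stack(updates)            # shape: (num_clients, dim)
    n = updates.shape[0]
    weights = np.full(n, 1.0 / n)          # start from uniform weights

    for _ in range(num_iters):
        # Current estimate of the ground-truth update: weighted average.
        estimate = weights @ updates

        # Distance of each client's update from the current estimate.
        dists = np.linalg.norm(updates - estimate, axis=1) + eps

        # Re-weight: clients far from the estimate (likely poisoned) get
        # smaller weights; here a simple inverse-distance rule is assumed.
        weights = (1.0 / dists) / np.sum(1.0 / dists)

    return weights @ updates, weights

# Example: nine similar benign updates and one large poisoned update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
poisoned = [rng.normal(5.0, 0.1, size=100)]
agg, w = fedtruth_aggregate(benign + poisoned)
print("poisoned client's weight:", w[-1])  # far below the uniform 1/10
```

In this toy run the poisoned client's update sits far from the iteratively refined estimate, so its weight shrinks and the aggregate stays close to the benign updates.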
Overall, FedTruth aims to strengthen the security of federated learning against model poisoning attacks while avoiding specific assumptions about the data and the need for access to a benign root dataset.