techminis

A naukri.com initiative

Image Credit: Arxiv

Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version)

  • Federated Learning (FL) allows organizations to train machine learning models collectively without sharing raw data, but it is vulnerable to model poisoning attacks such as Byzantine and backdoor attacks, in which compromised clients submit malicious model updates.
  • A new defense called FedTruth estimates a "ground-truth model update" from the clients' submissions at each aggregation round, without requiring a benign root dataset at the server or assumptions about how data is distributed across clients.
  • FedTruth considers contributions from all benign clients and assigns dynamic aggregation weights, reducing the influence of poisoned model updates and remaining effective against Byzantine and backdoor attacks in large-scale FL systems.
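The aggregation idea described above can be sketched as a truth-discovery-style loop: start from a plain average, then repeatedly re-weight clients inversely to their distance from the current estimate, so outlying (potentially poisoned) updates get small weights. This is a minimal illustration under assumed choices (inverse-distance weights, fixed iteration count), not the authors' exact FedTruth algorithm.

```python
import numpy as np

def truth_style_aggregate(updates, num_iters=10, eps=1e-8):
    """Iteratively estimate a 'ground-truth' model update.

    updates: array of shape (num_clients, num_params), one flattened
             model update per client.
    Returns the final estimate and the per-client weights.
    """
    updates = np.asarray(updates, dtype=float)
    estimate = updates.mean(axis=0)  # initialize with plain FedAvg
    weights = np.full(len(updates), 1.0 / len(updates))
    for _ in range(num_iters):
        # Distance of each client's update from the current estimate;
        # clients far from the estimate are down-weighted.
        dists = np.linalg.norm(updates - estimate, axis=1) + eps
        weights = 1.0 / dists
        weights /= weights.sum()  # dynamic aggregation weights
        estimate = weights @ updates  # weighted average of all updates
    return estimate, weights
```

With three benign updates clustered together and one large poisoned update, the poisoned client ends up with the smallest weight and the estimate lands much closer to the benign cluster than a plain average would.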
