Distributionally Robust Federated Learning with Client Drift Minimization

  • DRDM is a new federated learning algorithm that addresses heterogeneous client environments by combining distributionally robust optimization with dynamic regularization to minimize client drift.
  • DRDM optimizes a min-max objective that maximizes performance for the worst-case client, targeting fair and robust model performance across clients (see the sketch after this list).
  • The algorithm leverages dynamic regularization and efficient local updates, reducing the number of communication rounds required for training.
  • Extensive experiments on benchmark datasets show that DRDM improves worst-case test accuracy and requires fewer communication rounds compared to existing state-of-the-art approaches.
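
The summary gives only the high-level recipe (a min-max objective over client mixture weights plus regularized local updates), so the Python sketch below is a generic illustration under stated assumptions, not DRDM itself: the server takes an exponentiated-gradient ascent step on the client weights `lam` to favor the currently worst-off clients, and each client runs proximally regularized local SGD before a `lam`-weighted aggregation. The `ToyClient` class, the FedProx-style proximal term (a stand-in for DRDM's dynamic regularizer), the exponentiated-gradient weight update, and all step sizes are assumptions introduced for the example.

```python
import numpy as np

class ToyClient:
    """Hypothetical client whose local data is summarized by a quadratic loss
    0.5 * ||w - target||^2; a stand-in for a real local dataset."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def loss(self, w):
        return 0.5 * float(np.sum((w - self.target) ** 2))

    def local_update(self, w_global, eta=0.1, mu=0.1, steps=5):
        """Proximally regularized local SGD (FedProx-style stand-in for DRDM's
        dynamic regularization): minimize loss(w) + (mu/2) * ||w - w_global||^2."""
        w = w_global.copy()
        for _ in range(steps):
            grad = (w - self.target) + mu * (w - w_global)
            w -= eta * grad
        return w


def robust_round(w, lam, clients, eta_lam=0.1):
    """One illustrative round of a distributionally robust federated min-max loop."""
    # Max step: exponentiated-gradient ascent moves the mixture weights lam
    # toward the clients with the largest current losses (the worst case).
    losses = np.array([c.loss(w) for c in clients])
    lam = lam * np.exp(eta_lam * losses)
    lam /= lam.sum()

    # Min step: regularized local updates, then lam-weighted aggregation.
    local_models = [c.local_update(w) for c in clients]
    w = sum(l * w_k for l, w_k in zip(lam, local_models))
    return w, lam


# Two heterogeneous clients whose local optima disagree; the robust weights keep
# pulling the global model toward whichever client is currently worse off.
clients = [ToyClient([0.0, 0.0]), ToyClient([4.0, 4.0])]
w = np.zeros(2)
lam = np.full(len(clients), 1.0 / len(clients))
for _ in range(200):
    w, lam = robust_round(w, lam, clients)
print(w, lam)  # w hovers near (2, 2), where the two losses balance; lam stays near uniform
```

In DRDM itself, the dynamic regularizer and the paper's worst-case weighting scheme would replace the static proximal term and the exponentiated-gradient step used in this toy loop.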
