DRDM is an algorithm for distributed learning designed for heterogeneous environments: it combines distributionally robust optimization with dynamic regularization to mitigate client drift.
DRDM optimizes a min-max objective that minimizes the loss of the worst-case client, promoting fairness and robustness in model performance across clients.
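A common formulation of this kind of distributionally robust min-max objective (the notation here is assumed, not taken verbatim from the paper) has the server minimize the worst convex combination of client losses:

$$\min_{\theta} \; \max_{\lambda \in \Delta_K} \; \sum_{k=1}^{K} \lambda_k F_k(\theta),$$

where $F_k(\theta)$ is client $k$'s local empirical loss and $\Delta_K$ is the probability simplex over the $K$ clients; the inner maximization concentrates weight on the worst-performing clients, so improving the objective improves the worst case.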
The algorithm leverages dynamic regularization and efficient local updates, reducing the number of communication rounds required for training.
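The paper's exact update rules are not reproduced here; the following Python sketch only illustrates the ingredients named above, assuming a FedDyn-style dynamic regularizer for the local updates and exponentiated-gradient (mirror ascent) updates of the worst-case client weights. All names, step sizes, and the toy quadratic client losses are illustrative assumptions, not DRDM's actual implementation.

```python
# Sketch of a DRDM-style round, under the stated assumptions:
# - FedDyn-style dynamic regularizer (DRDM's regularizer may differ),
# - mirror ascent on the simplex for worst-case client weights,
# - toy quadratic losses standing in for real client objectives.
import numpy as np

rng = np.random.default_rng(0)
K, dim = 4, 5                      # number of clients, model dimension
alpha, lr, eta = 0.1, 0.05, 0.5    # reg. strength, local LR, ascent step

# Heterogeneous toy losses: F_k(theta) = 0.5 * ||theta - c_k||^2
centers = rng.normal(size=(K, dim))
def loss(k, theta): return 0.5 * np.sum((theta - centers[k]) ** 2)
def grad(k, theta): return theta - centers[k]

theta = np.zeros(dim)              # global model
lam = np.full(K, 1.0 / K)          # worst-case weights on the simplex
h = np.zeros((K, dim))             # per-client dynamic-regularizer states

for rnd in range(50):
    local_models = []
    for k in range(K):
        w = theta.copy()
        for _ in range(10):        # several local steps per round
            # Gradient of F_k(w) - <h_k, w> + (alpha/2)||w - theta||^2
            g = grad(k, w) - h[k] + alpha * (w - theta)
            w -= lr * g
        h[k] -= alpha * (w - theta)  # update dynamic-regularizer state
        local_models.append(w)
    # Mirror ascent: up-weight clients with larger loss (the worst cases)
    lam *= np.exp(eta * np.array([loss(k, theta) for k in range(K)]))
    lam /= lam.sum()
    # Lambda-weighted aggregation emphasizes worst-case clients
    theta = np.average(local_models, axis=0, weights=lam)

print("worst-case client loss:", max(loss(k, theta) for k in range(K)))
```

The dynamic-regularization term anchors each client's local iterates to the current global model, which is what lets clients take many local steps without drifting apart and thereby reduces the number of communication rounds.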
Extensive experiments on benchmark datasets show that DRDM achieves higher worst-case test accuracy while requiring fewer communication rounds than existing state-of-the-art approaches.