Decentralized learning enables training deep learning models without centralizing datasets, improving data privacy and operational efficiency.
Data imbalance across nodes poses a challenge for distributed learning, particularly in medicine, where patient populations and data collection practices differ between sites.
The paper proposes two algorithms, DSWM and ASWM, for setting the weight of each node's contribution to the global model.
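As a minimal sketch of what such weighted aggregation looks like, the snippet below combines per-node parameter vectors into a global model using normalized weights. The function name `aggregate` and the dataset-size-proportional weighting shown in the example are assumptions for illustration, not the paper's DSWM or ASWM definitions, which specify their own weighting rules.

```python
import numpy as np

def aggregate(node_params, node_weights):
    """Combine per-node parameter vectors into a global model.

    node_params: list of 1-D numpy arrays, one per node.
    node_weights: non-negative weights, one per node (e.g. dataset sizes).
    """
    w = np.asarray(node_weights, dtype=float)
    w = w / w.sum()  # normalize so contributions sum to 1
    return sum(wi * p for wi, p in zip(w, node_params))

# Example: three nodes with (assumed) dataset sizes 100, 1000, and 50.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 1000, 50]
global_params = aggregate(params, sizes)
```

With size-proportional weights, the node holding 1000 samples dominates the global model; schemes like ASWM aim to rebalance such contributions so underrepresented nodes are not drowned out.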
The ASWM algorithm improves the AUC of underrepresented nodes by 2.713%, while nodes with larger datasets see only a modest decrease of 0.441%.