Federated learning (FL) is a widely used approach for training deep learning models on decentralized datasets held by different clients.
Personalized FL (pFL) methods address the statistical heterogeneity of these distributed datasets by combining global learning with local modeling specific to each client.
This study examines the effectiveness of adaptive Maximum Mean Discrepancy (MMD) measures within the Ditto framework, a leading pFL technique, and finds that they improve model performance, particularly in tasks with diverse feature characteristics.
The findings suggest that these adaptive MMD measures can benefit a range of pFL scenarios, and they argue for constraints tailored to the types of heterogeneity present in FL systems.
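To make the MMD measure concrete, the sketch below computes a squared MMD between two sets of feature representations using a Gaussian (RBF) kernel. This is an illustrative NumPy implementation, not the study's code: the function names, the fixed kernel bandwidth `gamma`, and the synthetic "local" and "global" feature arrays are all assumptions introduced here to show how such a discrepancy could act as a drift penalty between a client's local model and the global model.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # Biased estimator of squared MMD with an RBF kernel:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
    return (
        rbf_kernel(X, X, gamma).mean()
        + rbf_kernel(Y, Y, gamma).mean()
        - 2.0 * rbf_kernel(X, Y, gamma).mean()
    )

# Synthetic stand-ins for local and global feature representations.
rng = np.random.default_rng(0)
local_feats = rng.normal(0.0, 1.0, size=(200, 8))
global_feats = rng.normal(0.0, 1.0, size=(200, 8))   # same distribution
shifted_feats = rng.normal(2.0, 1.0, size=(200, 8))  # shifted distribution

print(mmd2(local_feats, global_feats))   # small: distributions match
print(mmd2(local_feats, shifted_feats))  # larger: distributions differ
```

In a Ditto-style objective, a term like `mmd2(local_feats, global_feats)` could be added (with a weighting coefficient) to the local loss so that each client's representations are pulled toward the global model's only as much as their data distributions warrant; an "adaptive" variant would adjust that weighting or the kernel during training.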