Federated learning trains a global model by aggregating locally trained models, but heterogeneous (non-IID) data distributions across clients and data-privacy constraints pose challenges.
This study focuses on improving the adaptability of local models to enhance the performance of the global model.
It introduces the concept of local-model adaptability and proposes a method to optimize it without direct access to other clients' data distributions.
Experimental results show that this approach boosts the adaptability of local models, leading to better overall performance compared to baseline methods.
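The abstract does not specify the aggregation scheme, so as context for the setting it describes, the following is a minimal sketch of standard federated averaging (FedAvg) over clients with heterogeneous feature distributions. The least-squares local objective, the client shifts, and all function names here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    # One client's local step: gradient descent on a least-squares loss.
    # The quadratic loss is a stand-in for each client's private objective.
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(global_w, client_data, rounds=100):
    # Federated averaging: clients train locally, the server averages the
    # resulting models; no client sees another client's data distribution.
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_data]
        # Weight each client's model by its dataset size.
        sizes = np.array([len(d[1]) for d in client_data], dtype=float)
        global_w = sum(s * w for s, w in zip(sizes, local_ws)) / sizes.sum()
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Heterogeneous (non-IID) clients: each draws features from a shifted
# distribution, so local optima differ even though the labels share true_w.
client_data = []
for shift in (-1.0, 0.0, 2.0):
    X = rng.normal(loc=shift, size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    client_data.append((X, y))

w = fedavg(np.zeros(2), client_data)
print(np.round(w, 2))  # recovered weights, close to true_w
```

With identical labels and only feature-distribution shift, plain averaging still recovers the shared weights; the adaptability problem the study targets arises when local objectives diverge more strongly, making each client's contribution to the global model a design question rather than a simple average.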