Federated Learning (FL) is affected by two forms of heterogeneity: data heterogeneity, arising from clients' different local data distributions, and device heterogeneity, arising from clients' different latencies in uploading model updates.
Traditional FL schemes treat these two heterogeneities as separate aspects, but in practical scenarios they are intertwined: updates from slow devices are not only stale but are also computed on those devices' particular local data distributions.
This paper presents a new FL framework that tackles these intertwined heterogeneities by converting stale model updates into unstale ones.
To do so, the approach estimates the distribution of each client's local training data from its stale model updates and uses these estimates to produce the corresponding unstale updates, improving model accuracy by up to 25%.
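As a concrete illustration of this conversion, the sketch below works through a toy linear-regression setting: a surrogate dataset is fit by gradient matching against the stale update, and the client's gradient is then re-evaluated at the current global model. The gradient-matching step and all function names (`estimate_surrogate_data`, `unstale_update`) are illustrative assumptions, not the paper's actual algorithm.

```python
# A minimal sketch of the stale-to-unstale conversion in a toy linear-
# regression setting. Gradient matching and the helper names below are
# illustrative assumptions, not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

def gradient(w, X, y):
    """Mean-squared-error gradient of a linear model on dataset (X, y)."""
    return X.T @ (X @ w - y) / len(y)

def estimate_surrogate_data(w_stale, g_stale, n=32, steps=4000, lr=0.5):
    """Hypothetical step: fit surrogate data (X_hat, y_hat) whose gradient
    at the stale model reproduces the observed stale update."""
    d = w_stale.size
    X_hat = rng.normal(size=(n, d))
    y_hat = rng.normal(size=n)
    for _ in range(steps):
        e = (X_hat @ w_stale - y_hat) / n   # per-sample residuals
        r = X_hat.T @ e - g_stale           # gradient-matching error
        # Analytic gradients of 0.5 * ||r||^2 w.r.t. X_hat and y_hat
        dX = np.outer(e, r) + np.outer(X_hat @ r, w_stale) / n
        dy = -(X_hat @ r) / n
        X_hat -= lr * dX
        y_hat -= lr * dy
    return X_hat, y_hat

def unstale_update(w_current, w_stale, g_stale):
    """Convert a stale update into an estimate of the update the client
    would have computed on the current global model."""
    X_hat, y_hat = estimate_surrogate_data(w_stale, g_stale)
    return gradient(w_current, X_hat, y_hat)

# Toy demo: a straggler trained against an old global model (w_stale),
# while the server has since moved on to w_current.
d = 5
X_client = rng.normal(size=(64, d))
y_client = X_client @ rng.normal(size=d) + 0.1 * rng.normal(size=64)
w_stale, w_current = rng.normal(size=d), rng.normal(size=d)

g_stale = gradient(w_stale, X_client, y_client)      # what the server received
g_true = gradient(w_current, X_client, y_client)     # what it ideally wants
g_est = unstale_update(w_current, w_stale, g_stale)  # converted update

print("error of using the stale update:", np.linalg.norm(g_stale - g_true))
print("error of the converted update:  ", np.linalg.norm(g_est - g_true))
```

In this linear setting, the residual error of the converted update depends on how closely the surrogate data's statistics match the client's true data, which suggests why estimating the client's data distribution, rather than reusing the stale gradient directly, is central to the conversion.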