With the emerging application of Federated Learning (FL) in finance, hiring, and healthcare, fairness is crucial to prevent disparities across groups defined by legally protected attributes such as race and gender.
Global fairness addresses the disparity across the entire population, while local fairness focuses on the disparity within each client's local data.
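As one illustrative instantiation (the choice of statistical parity as the fairness metric, and the symbols $A$, $\hat{Y}$, and $D_k$, are assumptions for exposition rather than this paper's notation), with protected attribute $A$ and model prediction $\hat{Y}$, the global and local disparities for a class $y$ could be written as
\[
\Delta_{\text{glob}} = \max_{a,a'} \bigl| \Pr(\hat{Y}=y \mid A=a) - \Pr(\hat{Y}=y \mid A=a') \bigr|,
\qquad
\Delta_{\text{loc}}^{(k)} = \max_{a,a'} \bigl| \Pr(\hat{Y}=y \mid A=a,\, D_k) - \Pr(\hat{Y}=y \mid A=a',\, D_k) \bigr|,
\]
where $D_k$ denotes client $k$'s local data. Under this reading, requiring $\Delta_{\text{loc}}^{(k)} \le \epsilon$ for every client is generally a stricter constraint than requiring $\Delta_{\text{glob}} \le \epsilon$ alone.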
This paper introduces a framework that characterizes the minimum accuracy that must be sacrificed to enforce specified levels of global and local fairness in multi-class FL settings.
Experimental results show that the proposed algorithm outperforms the current state of the art in terms of accuracy-fairness tradeoffs, computational costs, and communication costs.