The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) could allow models to be trained on distributed data and their inner workings to be explained, all while preserving privacy.
This scoping review examines publications that explore the interplay between FL and XAI, particularly focusing on model interpretability or post-hoc explanations.
Out of the 37 studies analyzed, only one quantitatively examined the impact of FL on model explanations, highlighting a significant research gap.
There is a need for more quantitative research and transparent reporting practices to understand how FL and XAI affect each other and under what conditions they can be integrated effectively.