LoRA (Low-Rank Adaptation) is a promising fine-tuning technique for federated learning (FL), reducing communication and computation costs at resource-constrained clients.
Data heterogeneity, however, poses a challenge for LoRA-based FL: conventional aggregation strategies such as FedAvg suffer from slow convergence and reduced final accuracy.
FedRPCA addresses this by decomposing the stacked client LoRA updates via Robust Principal Component Analysis (Robust-PCA) into a low-rank component capturing knowledge common across clients and a sparse component capturing client-specific knowledge, then recombining the two with scaled averaging.
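To make the decomposition concrete, below is a minimal NumPy sketch of this idea, not the paper's implementation: Robust-PCA via the standard Principal Component Pursuit ADMM iterations, applied to a stack of flattened client updates. The function names (`rpca_pcp`, `aggregate_lora_updates`), the scaling factor `alpha`, and the choice of which component the scaling is applied to are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rpca_pcp(M, max_iter=500, tol=1e-7):
    """Robust-PCA via Principal Component Pursuit (ADMM).

    Decomposes M into low-rank L plus sparse S:
        min ||L||_* + lam * ||S||_1   s.t.  M = L + S
    """
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard PCP weight
    mu = m * n / (4.0 * np.abs(M).sum())     # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    norm_M = np.linalg.norm(M, "fro")
    for _ in range(max_iter):
        # L-step: singular value thresholding of (M - S + Y/mu)
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and convergence check
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S, "fro") <= tol * norm_M:
            break
    return L, S

def aggregate_lora_updates(client_updates, alpha=2.0):
    """Hypothetical FedRPCA-style aggregation (alpha and the
    recombination rule are assumptions of this sketch).

    client_updates: list of flattened per-client LoRA update vectors.
    """
    M = np.stack(client_updates)             # (num_clients, d)
    L, S = rpca_pcp(M)                       # L: common, S: client-specific
    # Scaled averaging: amplify the sparse client-specific signal,
    # which plain averaging would otherwise dilute under heterogeneity.
    return L.mean(axis=0) + alpha * S.mean(axis=0)

# Toy usage: 8 clients whose updates share one common direction
rng = np.random.default_rng(0)
common = rng.normal(size=64)
updates = [common + 0.1 * rng.normal(size=64) for _ in range(8)]
print(aggregate_lora_updates(updates).shape)  # (64,)
```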
Evaluations across vision and language tasks show that FedRPCA achieves higher final accuracy and faster convergence than competing aggregation baselines.