A new method called CE-LoRA (communication-efficient federated LoRA adaptation) has been introduced to address the communication and data-heterogeneity challenges of fine-tuning pre-trained foundation models in federated learning.
CE-LoRA utilizes a tri-factorization low-rank adaptation approach with personalized model parameter aggregation.
By introducing a small dense matrix into the low-rank factorization and weighting aggregation by client similarity, CE-LoRA reduces communication cost while achieving empirical performance comparable to existing federated LoRA approaches.
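The idea can be illustrated with a minimal sketch. The shapes, parameter names, and the similarity-weighted aggregation rule below are illustrative assumptions rather than the exact CE-LoRA formulation: the low-rank update is factored into three matrices, the two outer factors stay local and frozen, and only the small dense core matrix is trained and exchanged with the server, which is where the communication savings would come from.

```python
import torch
import torch.nn as nn


class TriFactorLoRALinear(nn.Module):
    """Hypothetical tri-factorized LoRA adapter: W_eff = W + scale * (B @ C @ A).

    Assumption: A (r x in) and B (out x r) are frozen after initialization,
    and only the small dense r x r core matrix C is trained and communicated,
    so the per-round payload shrinks from O((in + out) * r) to O(r * r).
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weight frozen

        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.02, requires_grad=False)
        self.B = nn.Parameter(torch.randn(out_f, rank) * 0.02, requires_grad=False)
        self.C = nn.Parameter(torch.zeros(rank, rank))  # small dense core, trainable
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # route the low-rank update through the small core matrix C
        delta = (x @ self.A.T) @ self.C.T @ self.B.T
        return self.base(x) + self.scale * delta


def similarity_weighted_aggregate(cores: list[torch.Tensor], target: int) -> torch.Tensor:
    """Toy personalized aggregation: weight each client's core matrix by its
    cosine similarity to the target client's core. This is an illustrative
    stand-in for a client-similarity-based aggregation rule, not the paper's."""
    flat = torch.stack([c.flatten() for c in cores])
    sims = torch.nn.functional.cosine_similarity(flat, flat[target].unsqueeze(0), dim=1)
    weights = torch.softmax(sims, dim=0)
    return sum(w * c for w, c in zip(weights, cores))
```

Under these assumptions, a round would consist of each client training its core matrix C locally, uploading only C, and receiving back a personalized aggregate built from the cores of similar clients.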
Experiments show that CE-LoRA significantly reduces communication overhead, improves performance under non-IID data conditions, and enhances data privacy protection.