Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios.
LoRA with Reduced Interference (LoRI) is a simple yet effective approach that reduces the number of trainable parameters while maintaining strong task performance.
LoRI leverages orthogonality between adapter subspaces to minimize cross-task interference when merging adapters, and uses sparsity to mitigate catastrophic forgetting in continual learning.
Experiments across various tasks show that LoRI outperforms full fine-tuning and existing PEFT methods, while using significantly fewer trainable parameters.
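To make the interference-reduction idea concrete, the following is a minimal NumPy sketch of merging two LoRA-style low-rank updates whose trainable matrices are sparsified with disjoint masks. All names, dimensions, and the disjoint-mask construction are illustrative assumptions for exposition, not the paper's actual implementation; the point is only that updates with non-overlapping parameter support can be summed without one task's parameters perturbing the other's.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 16, 4  # illustrative layer dims and LoRA rank

# Shared projection A; in this sketch only B is treated as trainable.
A = rng.standard_normal((r, k)) / np.sqrt(k)

def sparse_update(B, mask):
    """Zero out entries of the trainable matrix B outside the mask."""
    return B * mask

# Two task-specific adapters with disjoint sparsity masks on B,
# so their trainable parameters do not overlap.
B1 = rng.standard_normal((d, r))
B2 = rng.standard_normal((d, r))
mask1 = np.zeros((d, r))
mask1[: d // 2] = 1.0
mask2 = 1.0 - mask1  # complementary support

delta1 = sparse_update(B1, mask1) @ A  # task 1 low-rank update B1 A
delta2 = sparse_update(B2, mask2) @ A  # task 2 low-rank update B2 A

# Merge by summation: with disjoint masks, the rows of the merged
# update coming from task 1 are exactly task 1's update, and vice versa.
merged = delta1 + delta2
```

Because `mask2` zeroes the rows that `mask1` keeps, the merged update restricted to each mask's support equals the corresponding single-task update, which is one way to see why parameter-level sparsity can limit cross-task interference during merging.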