LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

  • Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios.
  • LoRA with Reduced Interference (LoRI) is a simple yet effective approach that freezes the projection matrices A as random projections and sparsifies the matrices B with task-specific masks, reducing the number of trainable parameters while maintaining strong task performance.
  • LoRI leverages the orthogonality between adapter subspaces to minimize cross-task interference when merging adapters, and uses sparsity to mitigate catastrophic forgetting in continual learning (see the sketch after this list).
  • Experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks show that LoRI outperforms full fine-tuning and existing PEFT methods, while using significantly fewer trainable parameters.
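
A minimal PyTorch sketch of the idea, not the paper's implementation: the class `LoRILinear` and helper `merge_adapters` below are hypothetical names, the `rank` and `density` values are illustrative, and a random binary mask stands in for the paper's calibrated task-specific masks. The module keeps the base weight and the random projection A frozen and trains only a masked, sparse B; the merge helper sums the sparse low-rank deltas from several task adapters.

```python
import torch
import torch.nn as nn


class LoRILinear(nn.Module):
    """Hypothetical LoRI-style adapter around a frozen nn.Linear:
    A is a fixed random projection, B is trainable but restricted
    to a fixed sparse mask."""

    def __init__(self, base: nn.Linear, rank: int = 8, density: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen, as in LoRA

        d_out, d_in = base.weight.shape
        # Frozen random projection A (not trained under the LoRI recipe).
        self.A = nn.Parameter(torch.randn(rank, d_in) / rank ** 0.5,
                              requires_grad=False)
        # Trainable B, zero-initialized so the adapter starts as a no-op.
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Fixed sparsity mask; random here, calibrated per task in the paper.
        self.register_buffer("mask", (torch.rand(d_out, rank) < density).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (self.B * self.mask) @ self.A  # sparse low-rank update
        return self.base(x) + x @ delta.T


def merge_adapters(base: nn.Linear, adapters: list["LoRILinear"]) -> nn.Linear:
    """Merge several task adapters by summing their sparse low-rank deltas.
    Independent frozen random A projections keep the task subspaces nearly
    orthogonal, which limits cross-task interference when merging."""
    merged = nn.Linear(base.in_features, base.out_features,
                       bias=base.bias is not None)
    with torch.no_grad():
        merged.weight.copy_(base.weight)
        if base.bias is not None:
            merged.bias.copy_(base.bias)
        for ad in adapters:
            merged.weight += (ad.B * ad.mask) @ ad.A
    return merged
```

Because the mask is applied inside the forward pass, entries of B outside the mask receive zero gradient, so the effective trainable parameter count per adapted layer is roughly density × d_out × rank.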

Source: Arxiv