Large-scale foundation models have demonstrated versatility across a wide range of tasks, but fully fine-tuning them is computationally expensive.
Low-Rank Adaptation (LoRA), a Parameter-Efficient Fine-Tuning (PEFT) method, reduces these costs by learning low-rank updates to the pre-trained weights while keeping the original parameters frozen.
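As a rough illustration (not the paper's implementation), a LoRA-style update to a frozen linear layer can be sketched as follows; the rank `r`, scaling factor `alpha`, and initialization choices below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # pre-trained weights stay frozen
        self.r, self.alpha = r, alpha
        # A starts small and B starts at zero, so the low-rank update is
        # initially zero and the module behaves exactly like the base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (self.alpha / self.r) * (x @ self.A.T @ self.B.T)
```

Only `A` and `B` (of rank `r`) are trained, which is what makes the method parameter-efficient.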
Analyzing the fine-tuned weights with singular value decomposition (SVD) reveals that fine-tuning amplifies the top singular values while leaving the rest of the spectrum largely unchanged, suggesting that task-specific knowledge is injected into a low-dimensional subspace.
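A minimal sketch of this kind of spectral analysis, assuming access to a weight matrix before and after fine-tuning (both hypothetical tensors here) and an illustrative cutoff `top_k`:

```python
import torch

def compare_spectra(w_pretrained: torch.Tensor, w_finetuned: torch.Tensor, top_k: int = 8):
    """Compare singular values of a weight matrix before and after fine-tuning."""
    s_pre = torch.linalg.svdvals(w_pretrained)
    s_ft = torch.linalg.svdvals(w_finetuned)
    top_ratio = s_ft[:top_k] / s_pre[:top_k]            # amplification of the leading directions
    tail_ratio = (s_ft[top_k:] / s_pre[top_k:]).mean()  # average change over the remaining spectrum
    return top_ratio, tail_ratio
```

Under the observation above, `top_ratio` would be noticeably above one while `tail_ratio` stays close to one.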
Building on this observation, a novel method is proposed that attaches learnable rescaling factors to the top singular directions, enabling precise modulation of the most influential components and yielding consistent improvements across multiple tasks.
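The abstract does not give implementation details, but one plausible reading of "learnable rescaling of top singular directions" is a module like the sketch below, where the rank `r`, the frozen residual term, and the scale initialization are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class TopSingularRescale(nn.Module):
    """Rescale the top-r singular directions of a frozen weight with learnable factors.

    One possible interpretation: W is decomposed once via SVD, the factors are
    kept frozen, and only per-direction scales on the top-r components are trained.
    """

    def __init__(self, weight: torch.Tensor, r: int = 8):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U[:, :r])      # top-r left singular vectors (frozen)
        self.register_buffer("S", S[:r])         # top-r singular values (frozen)
        self.register_buffer("Vh", Vh[:r, :])    # top-r right singular vectors (frozen)
        # Remainder of the weight outside the top-r subspace, kept fixed.
        self.register_buffer("residual", weight - self.U @ torch.diag(self.S) @ self.Vh)
        self.scale = nn.Parameter(torch.ones(r))  # learnable per-direction rescaling

    def effective_weight(self) -> torch.Tensor:
        # Reassemble the weight with the top-r singular values rescaled.
        return self.U @ torch.diag(self.scale * self.S) @ self.Vh + self.residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.effective_weight().T
```

With `scale` initialized to ones, the module reproduces the original weight exactly; training then adjusts only how strongly each top singular direction contributes.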