Low-Rank Adaptation (LoRA) is a parameter-efficient method for fine-tuning large models. It addresses shortcomings in standard initialization schemes such as 'Noise & Zeros', in which one low-rank factor starts as Gaussian noise and the other as zeros.
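To make the 'Noise & Zeros' baseline concrete, here is a minimal NumPy sketch of the standard LoRA setup. The dimensions, seed, and scaling value are illustrative assumptions, not taken from the paper; the key property shown is that with B initialized to zeros, the initial low-rank update is exactly zero, so fine-tuning starts from the pretrained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: output dim, input dim, LoRA rank, scaling factor.
d_out, d_in, r = 64, 32, 4
alpha = 8.0

# Frozen pretrained weight (stand-in values for illustration).
W = rng.standard_normal((d_out, d_in))

# 'Noise & Zeros' initialization: A gets small Gaussian noise, B starts
# at zero, so the update Delta_W = B @ A vanishes at initialization.
A = rng.standard_normal((r, d_in)) * 0.01  # noise
B = np.zeros((d_out, r))                   # zeros

# Effective low-rank update, using the standard (alpha / r) scaling.
delta_W = (alpha / r) * (B @ A)

# The adapted weight initially equals the pretrained weight.
starts_at_base = np.allclose(W + delta_W, W)
```

Because `B @ A` is zero at initialization, early training dynamics are driven entirely by how the noise in `A` and the gradients shape the update, which is where initialization choice matters.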
Because update magnitude plays a crucial role in LoRA performance, the authors propose a new 'Basis & Basis' initialization scheme, LoRAM, which matches the effectiveness of spectral methods without their computational overhead.
The research highlights the significance of update magnitudes in low-rank structures and examines mechanisms for regulating them, including learning-rate tuning, scaling-factor adjustment, and initialization, to achieve better convergence.
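One of the regulation mechanisms mentioned, scaling-factor adjustment, can be illustrated directly: under the standard LoRA convention, the effective update is (alpha / r) * B @ A, so its magnitude scales linearly with alpha. This sketch uses made-up factor values to show that relationship; the specific numbers are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

d_out, d_in, r = 64, 32, 4
A = rng.standard_normal((r, d_in))
B = rng.standard_normal((d_out, r))

def update_norm(alpha: float, r: int, B: np.ndarray, A: np.ndarray) -> float:
    """Frobenius norm of the effective low-rank update (alpha / r) * B @ A."""
    return float(np.linalg.norm((alpha / r) * (B @ A)))

# Doubling the scaling factor doubles the update magnitude, one of the
# knobs the text describes for regulating magnitudes.
n1 = update_norm(8.0, r, B, A)
n2 = update_norm(16.0, r, B, A)
scales_linearly = np.isclose(n2, 2 * n1)
```

Learning-rate tuning acts analogously on the per-step change to `B` and `A`, while initialization fixes the magnitude at step zero; the paper's point is that these knobs jointly determine how large the low-rank update can grow.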
Extensive experiments across various benchmarks support LoRAM as an efficient and competitive alternative to spectral initialization.