The rapid development of large language models has created a need for fine-tuning methods efficient enough to fit within practical computational limits.
In response, a framework called MAP has been proposed to improve both the efficiency and the interpretability of weight adaptation in pre-trained models.
MAP rigorously decouples weight adaptation into a direction component and a magnitude component, allowing the two to be learned and interpreted independently and making the adaptation process more flexible.
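To make the idea concrete, the sketch below shows one way such a direction-magnitude decoupling can be realized in PyTorch: a weight update is split into a unit-norm direction and a scalar magnitude that can then be rescaled independently. The function names (`map_decompose`, `map_recompose`) and the Frobenius-norm normalization are illustrative assumptions, not MAP's reference implementation.

```python
import torch

# Illustrative sketch of magnitude-direction decoupling for a weight update.
# The naming and normalization scheme here are assumptions for exposition,
# not the MAP paper's actual algorithm.

def map_decompose(delta_w: torch.Tensor):
    """Split a weight update into a unit-norm direction and a scalar magnitude."""
    magnitude = delta_w.norm()                # scalar magnitude (Frobenius norm)
    direction = delta_w / (magnitude + 1e-8)  # unit-norm direction component
    return direction, magnitude

def map_recompose(direction: torch.Tensor, magnitude: torch.Tensor):
    """Rebuild the update; magnitude can be tuned independently of direction."""
    return magnitude * direction

# Usage: decompose a small update, then rescale only its magnitude while
# leaving the learned direction untouched.
delta_w = torch.randn(768, 768) * 0.01
direction, magnitude = map_decompose(delta_w)
rescaled = map_recompose(direction, magnitude * 0.5)  # halve magnitude, keep direction
```

Separating the two components in this way is what makes the adaptation interpretable: the magnitude answers "how much did the weights move?" while the direction answers "in what way did they move?".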
Experiments show that MAP significantly enhances the performance of existing parameter-efficient fine-tuning methods, making it a valuable addition to the field.