Large language models (LLMs) trained continually on a sequence of tasks are prone to catastrophic forgetting, where learning new tasks degrades performance on previously acquired ones.
This work proposes a continual full fine-tuning approach that leverages adaptive singular value decomposition (SVD).
The method identifies task-specific low-rank parameter subspaces and constrains updates so that new learning minimally interferes with earlier tasks, without adding parameters or storing gradients from previous tasks; a rough code sketch of this idea follows below.
Empirically, the approach achieves state-of-the-art results, maintaining model capabilities and reducing forgetting to near-negligible levels.
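The sketch below is a minimal illustration of the general idea, not the paper's implementation: after a task is learned, the dominant singular directions of each weight matrix stand in for the subspace encoding prior-task behavior, and subsequent updates are projected onto its orthogonal complement. The helper names (`build_projector`, `constrained_step`), the energy threshold `energy_keep`, and the energy-based rank choice are illustrative assumptions and may differ from the paper's adaptive rank-selection scheme.

```python
# Minimal sketch, assuming PyTorch and 2-D weight matrices; the helpers, the
# energy-based rank choice, and the projection rule are illustrative
# assumptions, not the paper's exact adaptive-SVD procedure.
import torch


def build_projector(weight: torch.Tensor, energy_keep: float = 0.99) -> torch.Tensor:
    """Projector onto the complement of the dominant right singular directions.

    The top singular directions of the current weights stand in for the
    low-rank subspace that encodes previously learned behavior.
    """
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    # Adaptive rank: keep enough directions to cover `energy_keep` of the
    # squared singular-value mass (one plausible criterion, assumed here).
    energy = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
    k = int((energy < energy_keep).sum().item()) + 1
    V_k = Vh[:k].T  # (in_features, k) basis of the protected subspace
    eye = torch.eye(weight.shape[1], device=weight.device, dtype=weight.dtype)
    return eye - V_k @ V_k.T  # maps updates into the complement subspace


@torch.no_grad()
def constrained_step(weight: torch.Tensor, grad: torch.Tensor,
                     projector: torch.Tensor, lr: float = 1e-4) -> None:
    """Full fine-tuning step whose update avoids the protected directions.

    Since projector @ v = 0 for any protected direction v, the updated
    weights act identically on that subspace, limiting interference.
    """
    weight -= lr * (grad @ projector)
```

Under these assumptions, one plausible usage pattern is to rebuild the projectors from the current weights after finishing each task and apply the constrained step to every weight matrix while fine-tuning on the next one.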