Large language models (LLMs) suffer from catastrophic forgetting in continual learning scenarios, where previously acquired knowledge degrades as the model is trained on new tasks.
A new approach called OA-Adapter is proposed to address two limitations of existing parameter-efficient continual learning methods: fixed parameter budget allocation, and the decoupling of budget allocation from the optimization process.
OA-Adapter unifies dynamic budget adaptation with orthogonal subspace learning in a single end-to-end training stage for continual learning in LLMs.
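To make the idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: the class name OAAdapterLayer, the sigmoid budget gate, and the orthogonality penalty are all assumptions introduced here. It shows a low-rank adapter whose effective rank is controlled by a trainable gate (dynamic budget adaptation) and whose update directions are penalized for overlapping with subspaces retained from earlier tasks (orthogonal subspace learning), with both terms optimized jointly in a single training step.

```python
# Illustrative sketch only; names and design details are assumptions,
# not the OA-Adapter paper's actual implementation.
import torch
import torch.nn as nn

class OAAdapterLayer(nn.Module):
    def __init__(self, dim: int, max_rank: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, max_rank, bias=False)  # A: dim -> r_max
        self.up = nn.Linear(max_rank, dim, bias=False)    # B: r_max -> dim
        nn.init.zeros_(self.up.weight)                    # adapter starts as a no-op
        # One trainable logit per rank component; a sigmoid gate softly switches
        # components on or off, adapting the effective parameter budget.
        self.budget_logits = nn.Parameter(torch.zeros(max_rank))
        # Frozen bases of subspaces occupied by earlier tasks (each of shape dim x k).
        self.past_bases: list[torch.Tensor] = []

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.budget_logits)          # soft rank selection
        return x + self.up(self.down(x) * gate)

    def orthogonality_penalty(self) -> torch.Tensor:
        # Penalize overlap between current update directions (rows of A)
        # and the subspaces consolidated from previous tasks.
        penalty = torch.zeros((), device=self.down.weight.device)
        for basis in self.past_bases:                      # basis: (dim, k)
            proj = self.down.weight @ basis                # (r_max, k)
            penalty = penalty + proj.pow(2).sum()
        return penalty

# Single end-to-end objective: task loss plus orthogonality penalty, with the
# budget gate trained jointly rather than fixed in advance.
layer = OAAdapterLayer(dim=64, max_rank=8)
optimizer = torch.optim.AdamW(layer.parameters(), lr=1e-3)
x = torch.randn(4, 64)
loss = layer(x).pow(2).mean() + 0.1 * layer.orthogonality_penalty()
loss.backward()
optimizer.step()
```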
Experimental results on standard continual learning benchmarks show that OA-Adapter outperforms existing methods, achieving higher accuracy while using fewer trainable parameters.