Large Language Models (LLMs) often suffer from catastrophic forgetting when learning multiple tasks sequentially, making continual learning (CL) essential for their dynamic deployment.
Existing state-of-the-art (SOTA) methods focus on constructing orthogonal task subspaces to decouple parameter interdependence across different domains.
However, this paper suggests that building non-collision parameters is a more critical factor in addressing CL challenges.
The proposed approach, Non-collision Low-Rank Adaptation (N-LoRA), leverages low collision rates to enhance CL in LLMs, achieving superior performance, higher task orthogonality, and lower parameter collision than SOTA methods.
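To make the notion of "parameter collision" concrete, below is a minimal sketch (not the authors' released code) of one plausible way to measure it: the fraction of weight positions where the LoRA updates of two tasks are both non-zero. The function names, matrix shapes, and the threshold `eps` are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch: collision rate between two task-specific LoRA updates.
# Assumption: "collision" is read as overlap of the non-zero supports of the
# per-task low-rank updates Delta_W = B @ A. Names and shapes are hypothetical.
import numpy as np


def lora_delta(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Low-rank update Delta_W = B @ A produced by one task's adapter."""
    return B @ A


def collision_rate(dw1: np.ndarray, dw2: np.ndarray, eps: float = 1e-6) -> float:
    """Fraction of positions where both task updates are numerically non-zero."""
    active1 = np.abs(dw1) > eps
    active2 = np.abs(dw2) > eps
    return float(np.mean(active1 & active2))


# Toy example: two rank-4 adapters for a 64x64 weight matrix.
rng = np.random.default_rng(0)
d, r = 64, 4
dw_task1 = lora_delta(rng.normal(size=(r, d)), rng.normal(size=(d, r)))
dw_task2 = lora_delta(rng.normal(size=(r, d)), rng.normal(size=(d, r)))
print(f"collision rate: {collision_rate(dw_task1, dw_task2):.3f}")
```

Under this reading, denser per-task updates overlap almost everywhere (the toy example above yields a rate near 1), while sparser updates occupy largely disjoint positions, which is the low-collision regime the abstract credits for N-LoRA's gains.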