Source: arXiv

Is Parameter Collision Hindering Continual Learning in LLMs?

  • Large Language Models (LLMs) often suffer from catastrophic forgetting when learning multiple tasks sequentially, making continual learning (CL) essential for their dynamic deployment.
  • Existing state-of-the-art (SOTA) methods focus on constructing orthogonal task subspaces to decouple parameter interdependence across domains.
  • This paper argues, however, that building non-collision parameters is the more critical factor in addressing CL challenges.
  • The proposed approach, Non-collision Low-Rank Adaptation (N-LoRA), leverages low collision rates to enhance CL in LLMs, achieving superior performance, higher task orthogonality, and lower parameter collision than SOTA methods (the collision idea is sketched below).
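
To make "parameter collision" concrete, here is a minimal NumPy sketch, not code from the paper: it assumes collision is measured as the fraction of weight positions that two task-specific updates both touch, and the threshold (eps), sparsity level (keep), and helper names (collision_rate, sparsify) are illustrative choices for this example only.

```python
import numpy as np

def collision_rate(delta_a: np.ndarray, delta_b: np.ndarray, eps: float = 1e-6) -> float:
    """Fraction of parameter positions that BOTH task updates modify.

    Illustrative definition: a "collision" is two tasks writing
    non-negligible values to the same entry of a shared weight matrix.
    """
    active_a = np.abs(delta_a) > eps
    active_b = np.abs(delta_b) > eps
    return float(np.logical_and(active_a, active_b).sum()) / delta_a.size

rng = np.random.default_rng(0)
shape = (64, 64)

def sparsify(delta: np.ndarray, keep: float = 0.05) -> np.ndarray:
    """Zero out all but a `keep` fraction of entries (stand-in for a sparsity constraint)."""
    return delta * (rng.random(size=delta.shape) < keep)

# Dense task updates: nearly every position is touched by both tasks.
dense_a, dense_b = rng.normal(size=shape), rng.normal(size=shape)

# Sparse task updates: tasks rarely touch the same positions.
sparse_a, sparse_b = sparsify(rng.normal(size=shape)), sparsify(rng.normal(size=shape))

print(f"dense  collision rate: {collision_rate(dense_a, dense_b):.4f}")   # ~1.0
print(f"sparse collision rate: {collision_rate(sparse_a, sparse_b):.4f}") # ~0.0025
```

Two independent 5%-sparse updates overlap on roughly 0.05 × 0.05 = 0.25% of positions, which is the intuition behind enforcing sparsity to obtain (near) non-collision task parameters.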
