Image Credit: arXiv

Budget-Adaptive Adapter Tuning in Orthogonal Subspaces for Continual Learning in LLMs

  • Large language models (LLMs) suffer from catastrophic forgetting in continual learning: performance on previously learned tasks degrades as the model is trained on new ones.
  • A new approach called OA-Adapter is proposed to address two limitations of existing methods: fixed parameter-budget allocation, and the decoupling of budget allocation from task optimization.
  • OA-Adapter unifies dynamic budget adaptation with orthogonal subspace learning in a single end-to-end training stage for continual learning in LLMs.
  • Experimental results show that OA-Adapter outperforms existing methods on standard continual learning benchmarks, achieving higher accuracy while using fewer trainable parameters.
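The orthogonal-subspace idea behind approaches like OA-Adapter can be illustrated with a minimal sketch: constrain each new task's adapter update to lie orthogonal to the directions already used by earlier tasks, so new learning does not overwrite old knowledge. This is an illustrative toy (the function name, shapes, and projection step are assumptions for exposition, not the paper's actual algorithm):

```python
import numpy as np

def project_orthogonal(delta_w, old_basis):
    """Project a candidate adapter update onto the subspace orthogonal
    to the column space of old_basis (directions used by past tasks).

    Core intuition of orthogonal-subspace continual learning: updates
    for a new task should have zero overlap with old-task directions,
    which prevents interference (catastrophic forgetting).
    """
    # Orthonormalize the old-task basis via QR decomposition
    q, _ = np.linalg.qr(old_basis)
    # Subtract the component of delta_w that lies in span(old_basis)
    return delta_w - q @ (q.T @ delta_w)

rng = np.random.default_rng(0)
d = 8
old_basis = rng.standard_normal((d, 2))  # directions claimed by past tasks
delta_w = rng.standard_normal((d, 1))    # raw update proposed for a new task

dw_orth = project_orthogonal(delta_w, old_basis)
# The projected update now has (numerically) zero overlap with old directions
print(np.abs(old_basis.T @ dw_orth).max())
```

OA-Adapter additionally makes the adapter budget (e.g., the effective rank of these updates) dynamic and learns it jointly with the task in one end-to-end stage, rather than fixing the budget up front as in the sketch above.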
