Source: Arxiv
Little By Little: Continual Learning via Self-Activated Sparse Mixture-of-Rank Adaptive Learning

  • Continual learning with large pre-trained models is challenging due to catastrophic forgetting and task interference.
  • A new approach called MoRA is proposed to address challenges like interference, redundancy, and ambiguity in existing Mixture-of-Experts (MoE) methods.
  • MoRA applies Mixture-of-Rank Adaptive learning with self-activated, sparse rank activation to improve continual learning with pre-trained models such as CLIP and large language models (LLMs); an illustrative sketch of such a layer follows this list.
  • The proposed approach is shown to enhance continual learning with pre-trained models, improving generalization and mitigating forgetting.
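Below is a minimal, hypothetical sketch of a mixture-of-rank adapter layer, assuming that each rank of a LoRA-style low-rank update acts as an expert and that a sparse, input-dependent gate selects which ranks to activate. The class name, the top-k gating rule, and the idea of scoring each rank by its own down-projection ("self-activation") are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfRankAdapter(nn.Module):
    """Hypothetical mixture-of-rank adapter: each rank is a rank-1 'expert'."""

    def __init__(self, d_in: int, d_out: int, num_ranks: int = 8, top_k: int = 2):
        super().__init__()
        # One rank-1 adapter per expert: A projects the input down to one
        # scalar per rank, B projects each rank back up to the output size.
        self.A = nn.Parameter(torch.randn(num_ranks, d_in) * 0.01)  # (r, d_in)
        self.B = nn.Parameter(torch.zeros(num_ranks, d_out))        # (r, d_out)
        self.top_k = top_k

    def forward(self, x: torch.Tensor, frozen_out: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in); frozen_out: output of the frozen pre-trained layer.
        scores = x @ self.A.t()                          # (batch, r) per-rank responses
        # "Self-activated" gating (assumption): rank importance is read off the
        # magnitude of its own response, rather than from a separate router.
        gate_logits = scores.abs()
        topk_vals, topk_idx = gate_logits.topk(self.top_k, dim=-1)
        gates = torch.zeros_like(gate_logits)
        gates.scatter_(-1, topk_idx, F.softmax(topk_vals, dim=-1))  # sparse weights
        # Weighted sum of rank-1 updates added onto the frozen output.
        delta = (gates * scores) @ self.B                # (batch, d_out)
        return frozen_out + delta


# Example usage on random data (shapes only):
layer = MixtureOfRankAdapter(d_in=512, d_out=512, num_ranks=8, top_k=2)
x = torch.randn(4, 512)
frozen_out = torch.randn(4, 512)       # stand-in for the frozen layer's output
y = layer(x, frozen_out)               # (4, 512)
```

In a continual-learning setting, the pre-trained backbone would stay frozen while only the rank parameters are trained, which is the usual parameter-efficient recipe this family of methods builds on; sparse activation is what limits interference between tasks.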
