A new method called Dual Prototype network for Task-wise Adaption (DPTA) is proposed for Class-Incremental Learning (CIL) with pre-trained models (PTMs).
DPTA addresses the catastrophic forgetting that arises when fine-tuning PTMs on downstream incremental tasks by building a separate adapter module for each task, improving model adaptation.
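As a rough illustration of the per-task adapter idea, here is a minimal sketch in PyTorch. The module name, bottleneck design, and dimensions are assumptions for illustration, not DPTA's exact architecture.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Residual bottleneck adapter placed on top of a frozen PTM encoder.

    The bottleneck width (64) and residual form are illustrative choices,
    not necessarily those used in DPTA.
    """

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual refinement of the frozen backbone's feature: x + up(act(down(x)))
        return x + self.up(self.act(self.down(x)))


# One adapter per incremental task; the backbone itself stays frozen.
embed_dim = 768  # e.g., a ViT-B feature dimension (assumption)
adapters = nn.ModuleList()


def add_task_adapter() -> Adapter:
    """Create and register a fresh adapter when a new task arrives."""
    adapter = Adapter(embed_dim)
    adapters.append(adapter)
    return adapter
```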
DPTA further maintains dual prototypes to enhance prediction: one prototype set enables test-time adapter selection, while augmented prototypes improve class separability.
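A hedged sketch of prototype-based adapter selection follows. The storage layout, the class-mean prototypes, and the nearest-prototype distance rule are illustrative assumptions; DPTA's actual selection rule and its construction of augmented prototypes are detailed in the paper.

```python
import torch

# One (num_classes_in_task, dim) prototype tensor per task (assumed layout).
task_prototypes: list[torch.Tensor] = []


@torch.no_grad()
def add_task_prototypes(features: torch.Tensor, labels: torch.Tensor) -> None:
    """Store class-mean prototypes once training on a task has finished."""
    protos = torch.stack(
        [features[labels == c].mean(dim=0) for c in labels.unique()]
    )
    task_prototypes.append(protos)


@torch.no_grad()
def select_adapter(feature: torch.Tensor) -> int:
    """Pick the task whose closest class prototype best matches a test feature.

    Euclidean distance is used here for simplicity; the paper's metric may differ.
    """
    dists = [torch.cdist(feature[None], protos).min() for protos in task_prototypes]
    return int(torch.stack(dists).argmin())
```

At inference, the index returned by `select_adapter` would route the test sample through the corresponding task adapter before the final prototype-based prediction.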
Experiments on benchmark datasets show that DPTA outperforms existing CIL methods, and the code for DPTA is available on GitHub for further exploration.