Existing continual learning (CL) methods rely on fine-tuning or adapting large language models (LLMs) but suffer from catastrophic forgetting (CF).
In-context learning (ICL) can leverage the extensive knowledge within LLMs for CL without updating any parameters.
However, ICL is difficult to scale: as more tasks accumulate, the prompt grows longer and eventually exceeds the LLM's input token limit.
To address this, the InCA approach integrates an external continual learner (ECL) with ICL, enabling scalable CL without CF and yielding significant performance gains.
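To make the idea concrete, below is a minimal, illustrative sketch, not the authors' implementation, of how an external continual learner could keep ICL prompts within the token limit: it incrementally stores lightweight per-class statistics (no gradient updates, hence no CF) and preselects only the most likely classes to include in the prompt. The `embed` function, class names, and prompt template here are toy stand-ins introduced purely for illustration.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashing-based bag-of-words embedding (stand-in for a real encoder)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class ExternalContinualLearner:
    """Keeps an incrementally updated mean embedding per class.
    New classes add new entries rather than overwriting old ones,
    so earlier knowledge is not forgotten."""
    def __init__(self):
        self.sums = {}    # class -> running sum of example embeddings
        self.counts = {}  # class -> number of examples seen

    def update(self, cls: str, texts: list[str]) -> None:
        for t in texts:
            e = embed(t)
            self.sums[cls] = self.sums.get(cls, np.zeros_like(e)) + e
            self.counts[cls] = self.counts.get(cls, 0) + 1

    def top_k_classes(self, query: str, k: int = 3) -> list[str]:
        """Rank stored classes by similarity to the query embedding."""
        q = embed(query)
        scores = {c: float(q @ (s / self.counts[c])) for c, s in self.sums.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

def build_icl_prompt(query: str, ecl: ExternalContinualLearner,
                     class_summaries: dict[str, str], k: int = 3) -> str:
    """Include only the ECL-selected candidate classes, keeping the prompt short."""
    candidates = ecl.top_k_classes(query, k)
    lines = [f"- {c}: {class_summaries[c]}" for c in candidates]
    return ("Choose the best class for the input.\n"
            "Candidate classes:\n" + "\n".join(lines) +
            f"\nInput: {query}\nAnswer:")
```

Because the ECL only narrows the candidate set, the LLM itself is never fine-tuned; the prompt stays bounded by `k` regardless of how many classes have been learned so far.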