Continual fine-tuning of large language models (LLMs) suffers from catastrophic forgetting. The Sequential Ensemble of Experts (SEE) framework is introduced to address this challenge. SEE removes the need for a separate router: each expert independently decides whether it should handle a given query. Experiments show that SEE outperforms prior approaches in continual fine-tuning and exhibits remarkable generalization ability.
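The router-free dispatch idea can be sketched as follows: each expert applies its own in-distribution check, and the first expert that claims a query answers it. This is a minimal illustration under assumed interfaces; the names (`Expert`, `accepts`, `dispatch`) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Expert:
    name: str
    accepts: Callable[[str], bool]   # expert's own "should I handle this?" check
    answer: Callable[[str], str]     # expert's response function

def dispatch(query: str, experts: List[Expert]) -> Optional[str]:
    """Ask each expert, in sequence, whether it claims the query;
    the first expert that accepts handles it (no external router)."""
    for expert in experts:
        if expert.accepts(query):
            return expert.answer(query)
    return None  # no expert claimed the query

# Toy usage: two "experts" specialized by a keyword heuristic.
math_expert = Expert("math", lambda q: "sum" in q, lambda q: "math answer")
code_expert = Expert("code", lambda q: "python" in q, lambda q: "code answer")
result = dispatch("what is the sum of 2 and 3?", [math_expert, code_expert])
```

In practice, the per-expert acceptance decision would be learned during each expert's fine-tuning rather than hard-coded as above.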