Recent advances in artificial intelligence have led to the development of Multimodal Large Language Models (MLLMs).
Fine-tuning MLLMs on specific downstream tasks often degrades their performance on previously acquired knowledge, a phenomenon known as catastrophic forgetting.
This review surveys and analyzes 440 research papers on continual learning in MLLMs.
It concludes by discussing open challenges and future directions for continual learning in MLLMs, aiming to inspire further research and development in the field.