Continual Learning (CL) aims to enable neural networks to acquire new knowledge while retaining previously learned knowledge. Adapting pre-trained models (PTMs) before the core CL process (ACL) is a novel framework proposed to enhance plasticity in neural networks. ACL refines the PTM backbone through a plug-and-play adaptation phase before each new task is learned, thereby balancing stability and plasticity. Extensive experiments show that ACL improves CL performance across benchmarks and integrated methods, providing a versatile solution for PTM-based CL.