Continual Learning (CL) is a learning paradigm that enables agents to learn a sequence of tasks while reusing knowledge from earlier tasks for future learning.
Existing CL methods, however, assume that an agent's capabilities remain static even in dynamic environments, which does not reflect real-world scenarios where the actions available to an agent can change over time.
A new problem setting, Continual Learning with Dynamic Capabilities (CL-DC), therefore requires agents to generalize their policies across tasks with different action spaces.
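To make the setting concrete, the following toy sketch (an assumed setup for illustration, not code from the paper) shows a task sequence in which the size of the discrete action space changes from task to task, which is exactly what breaks a policy with a fixed-size output head; TaskSpec and the task names are hypothetical.

```python
# Toy illustration of CL-DC (assumed setup, not from the paper): tasks arrive
# sequentially, and each task may expose a different action space, so a policy
# whose output layer is sized to one action set cannot be reused directly.
from dataclasses import dataclass

@dataclass
class TaskSpec:
    name: str          # hypothetical task identifier
    num_actions: int   # the agent's capability (action-space size) for this task

# Hypothetical task sequence with changing action-space sizes.
sequence = [TaskSpec("task-A", 4), TaskSpec("task-B", 7), TaskSpec("task-C", 5)]
for spec in sequence:
    print(f"{spec.name}: action space of size {spec.num_actions}")
```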
The proposed Action-Adaptive Continual Learning (AACL) framework addresses this challenge by decoupling the agent's policy from any specific action space and adapting learned action representations whenever a new action space appears, outperforming existing methods.
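As a minimal sketch of the decoupling idea, assuming a discrete action space and a similarity-based action selector (this illustrates the general technique, not the authors' implementation; EmbeddingPolicy, act, and all dimensions are hypothetical), the policy below outputs a vector in a shared action-representation space, and each task's actions are scored by their embeddings' similarity to that vector, so only the embedding table changes when the action space does.

```python
# Sketch of a policy decoupled from any fixed action space (illustrative only).
import torch
import torch.nn as nn

class EmbeddingPolicy(nn.Module):
    """Maps a state to a point in a shared action-representation space."""
    def __init__(self, state_dim: int, repr_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, repr_dim)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def act(policy: EmbeddingPolicy, state: torch.Tensor,
        action_embeddings: torch.Tensor) -> int:
    """Pick the action whose embedding best matches the policy's output."""
    query = policy(state)                # (repr_dim,)
    scores = action_embeddings @ query   # (num_actions,) similarity scores
    return int(scores.argmax())

# The same policy network is reused across tasks; when the action space
# changes, only the embedding table is replaced or adapted.
state_dim, repr_dim = 8, 16
policy = EmbeddingPolicy(state_dim, repr_dim)
task1_actions = torch.randn(4, repr_dim)   # task with 4 available actions
task2_actions = torch.randn(7, repr_dim)   # later task with 7 available actions
state = torch.randn(state_dim)
a1 = act(policy, state, task1_actions)
a2 = act(policy, state, task2_actions)     # same policy, new action space
```

In this decomposition, continual adaptation reduces to updating the action embeddings for each new action space while the state-to-representation mapping is preserved, which is one plausible way to realize the decoupling described above.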