Continual Learning (CL) aims to enable neural networks to learn and adapt incrementally.
A central challenge in CL is the stability-plasticity dilemma: balancing the preservation of prior knowledge against the acquisition of new knowledge.
To address this conflict at the architectural level, a new framework called Dual-Arch is introduced.
Dual-Arch employs two independent networks, one specialized for plasticity and the other for stability, and improves existing CL methods while using fewer parameters.
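To make the dual-network idea concrete, the following is a minimal sketch in PyTorch of pairing a plasticity-oriented learner with a stability-oriented one. The `DualArch` class, the two-phase `learn_task` routine, and the distillation-based consolidation step are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a dual-network continual learner (assumed PyTorch-style
# design; class names, network choices, and the distillation-based
# consolidation are hypothetical illustrations, not the paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualArch(nn.Module):
    """Pairs a plasticity-oriented learner with a stability-oriented learner."""

    def __init__(self, make_plastic_net, make_stable_net, num_classes):
        super().__init__()
        self.plastic = make_plastic_net(num_classes)  # adapts to each new task
        self.stable = make_stable_net(num_classes)    # accumulates knowledge

    def forward(self, x):
        # Inference uses the stable learner, which holds consolidated knowledge.
        return self.stable(x)

    def learn_task(self, loader, epochs=1, lr=1e-3, temperature=2.0):
        # Phase 1: the plastic learner adapts freely to the new task.
        opt = torch.optim.SGD(self.plastic.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(self.plastic(x), y).backward()
                opt.step()

        # Phase 2: consolidate into the stable learner via distillation from
        # the plastic learner, so new knowledge is absorbed without retraining
        # the stable learner from scratch.
        opt = torch.optim.SGD(self.stable.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                with torch.no_grad():
                    teacher = self.plastic(x) / temperature
                student_logits = self.stable(x)
                kd = F.kl_div(
                    F.log_softmax(student_logits / temperature, dim=1),
                    F.softmax(teacher, dim=1),
                    reduction="batchmean",
                )
                ce = F.cross_entropy(student_logits, y)
                (ce + kd).backward()
                opt.step()
```

In this sketch, `make_plastic_net` could build a compact backbone that is cheap to retrain per task, while `make_stable_net` builds the network used at inference time; keeping the two networks independent is what separates the plasticity and stability roles architecturally.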