Flashback Learning (FL) is a new method designed to balance stability and plasticity in Continual Learning (CL).
FL differs from previous approaches in that it regularizes model updates bidirectionally, incorporating new knowledge while retaining old knowledge.
It operates through a two-phase training process and can be integrated into various CL methods, where it yields consistent improvements over the corresponding baselines.
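The bidirectional idea can be illustrated as a loss with two opposing regularization terms: one pulling the current parameters toward the old model (stability) and one toward parameters adapted to new data (plasticity). This is only an illustrative sketch, not the paper's actual formulation; the function name, the quadratic penalties, and the coefficients `lam_stab` and `lam_plast` are all hypothetical.

```python
def bidirectional_loss(task_loss, theta, theta_old, theta_new,
                       lam_stab=0.5, lam_plast=0.5):
    """Sketch of a bidirectional regularizer (illustrative, not FL's exact loss).

    task_loss : scalar loss on the current task's data
    theta     : current parameters (list of floats)
    theta_old : parameters of the previous-task model (stability anchor)
    theta_new : parameters adapted to the new task (plasticity anchor)
    """
    # Stability term: penalize drift away from the old model's parameters.
    stab = sum((t - o) ** 2 for t, o in zip(theta, theta_old))
    # Plasticity term: penalize distance from the new-task parameters.
    plast = sum((t - n) ** 2 for t, n in zip(theta, theta_new))
    return task_loss + lam_stab * stab + lam_plast * plast


# Example: parameters sit at the old anchor, one unit from the new anchor.
loss = bidirectional_loss(1.0, [0.0], [0.0], [1.0])  # 1.0 + 0.5*0 + 0.5*1
```

Tuning the two coefficients trades stability against plasticity: a larger `lam_stab` preserves old knowledge, while a larger `lam_plast` favors adapting to the new task.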
Empirical results demonstrate accuracy improvements of up to 4.91% in the Class-Incremental setting and 3.51% in the Task-Incremental setting, surpassing state-of-the-art methods on datasets such as ImageNet.