Continual learning in deep neural networks often suffers from catastrophic forgetting, where representations for previous tasks are overwritten during subsequent training.
A novel sample retrieval strategy is proposed that leverages both gradient-conflicting and gradient-aligned samples to retain knowledge about past tasks.
Gradient-conflicting samples, those whose stored gradients most oppose the current update, are precisely the samples at greatest risk of being overwritten; retrieving and replaying them reduces interference and re-aligns the parameter update, preserving past-task knowledge.
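The section does not spell out the retrieval rule, so the following PyTorch sketch is a minimal, hypothetical instantiation: it scores each buffered sample by the cosine similarity between its gradient and the current mini-batch gradient, then retrieves the most conflicting and the most aligned samples for replay. The names `flat_grad` and `retrieve_by_gradient`, and the per-sample gradient loop, are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def flat_grad(model, loss):
    """Return the gradient of `loss` w.r.t. the model as one flat vector."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def retrieve_by_gradient(model, criterion, new_x, new_y,
                         buffer_x, buffer_y, k=32):
    """Score each buffered sample by the cosine similarity between its
    gradient and the current-task gradient, then return the indices of
    the k most conflicting (most negative similarity) and the k most
    aligned (most positive similarity) samples.
    """
    # Gradient induced by the incoming mini-batch of the new task.
    g_new = flat_grad(model, criterion(model(new_x), new_y))

    # Per-sample gradients over the replay buffer. A loop is used here
    # for clarity; in practice these would be batched for efficiency.
    sims = []
    for x, y in zip(buffer_x, buffer_y):
        g_old = flat_grad(model, criterion(model(x.unsqueeze(0)),
                                           y.unsqueeze(0)))
        sims.append(F.cosine_similarity(g_new, g_old, dim=0))
    sims = torch.stack(sims)

    conflict_idx = sims.argsort()[:k]                 # most interfering
    aligned_idx = sims.argsort(descending=True)[:k]   # most reinforcing
    return conflict_idx, aligned_idx
```

Under this reading, the retrieved conflicting samples would be mixed into the current update to counteract interference, while the aligned samples reinforce shared structure; the exact mixing rule is left to the method itself.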
Experiments show that the method achieves state-of-the-art mitigation of forgetting while maintaining competitive accuracy on new tasks.