Continual learning techniques typically employ simple processes for selecting replay samples, which are then reused during subsequent tasks.
This paper proposes a label-free replay buffer and introduces a cluster preservation loss that maintains essential information from previously encountered tasks while adapting to new ones.
The method employs 'push-away' and 'pull-toward' mechanisms that retain previously learned information while facilitating adaptation to new classes or domain shifts.
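One plausible reading of such a mechanism is a centroid-based objective over label-free clusters: each embedding is pulled toward the centroid of its assigned cluster and pushed at least a margin away from the other centroids. The sketch below is a minimal illustration under that assumption; the function name, the hinge-style push term, and the margin parameter are all hypothetical and not taken from the paper.

```python
import numpy as np

def cluster_preservation_loss(embeddings, assignments, centroids, margin=1.0):
    """Hypothetical sketch of a pull-toward / push-away objective.

    embeddings:  (n, d) array of feature vectors
    assignments: length-n array of cluster indices (obtained label-free,
                 e.g. by k-means on buffered features)
    centroids:   (k, d) array of cluster centers from previous tasks
    """
    pull, push = 0.0, 0.0
    for z, k in zip(embeddings, assignments):
        # Distance from this embedding to every stored centroid.
        d = np.linalg.norm(centroids - z, axis=1)
        # Pull-toward: squared distance to the assigned centroid.
        pull += d[k] ** 2
        # Push-away: hinge penalty if any other centroid is closer than margin.
        others = np.delete(d, k)
        push += np.sum(np.maximum(0.0, margin - others) ** 2)
    return (pull + push) / len(embeddings)
```

With well-separated clusters and embeddings sitting on their own centroids, both terms vanish, so the loss is zero; drifting embeddings raise the pull term, and clusters collapsing into each other raise the push term.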
Experimental results on various benchmarks show that the proposed label-free replay technique outperforms state-of-the-art continual learning methods and, in some cases, even surpasses offline learning.