Recent work in continual learning has shown the benefits of resampling the weights of a neural network's last layer, a technique known as 'zapping' (see the sketch below).
The researchers investigated learning and forgetting patterns within convolutional neural networks trained under challenging scenarios such as continual learning and few-shot transfer learning.
Experiments demonstrated that models trained with 'zapping' recover faster when transitioning to new domains.
The study also highlighted how the choice of optimizer affects the dynamics of learning and forgetting, leading to complex patterns of synergy or interference between tasks during sequential learning.
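For concreteness, the following is a minimal sketch of what 'zapping' could look like in practice: resampling the weights of a network's final (classifier) layer before or between training phases. The use of PyTorch, the toy CNN, and the helper name `zap_last_layer` are illustrative assumptions, not the original authors' implementation.

```python
# Minimal sketch of 'zapping': resample (reinitialize) the last layer's weights.
import torch.nn as nn


def zap_last_layer(model: nn.Module) -> None:
    """Resample the parameters of the model's final nn.Linear layer."""
    # Find the final nn.Linear module; assumes the classifier head comes last.
    last_linear = None
    for module in model.modules():
        if isinstance(module, nn.Linear):
            last_linear = module
    if last_linear is None:
        raise ValueError("Model has no nn.Linear layer to zap.")
    # Redraw weights and biases from the layer's default init distribution.
    last_linear.reset_parameters()


# Example: a small CNN whose classifier head is zapped before transfer.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),  # classifier head to be resampled
)
zap_last_layer(model)
```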