Deep neural networks are prone to catastrophic forgetting during sequential task learning: knowledge of previously learned tasks is lost as new tasks are acquired.
Existing continual learning (CL) methods typically protect old tasks by isolating their parameters, which becomes impractical as the parameter overhead grows linearly with the number of tasks.
Inspired by neuroscience and physics, a novel CL framework called Learning without Isolation (LwI) is introduced to protect pathways through the whole network, rather than individual parameters.
LwI formulates model fusion as a graph-matching problem and adaptively allocates pathways to new tasks, addressing catastrophic forgetting in a parameter-efficient manner, as demonstrated by experiments on benchmark datasets.
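To make the graph-matching fusion idea concrete, the following minimal sketch aligns the output neurons of two corresponding layers by solving a linear assignment problem and then blends the aligned weights. This is an illustrative assumption of how such a fusion step could look, not the paper's actual algorithm; the function name `match_and_fuse`, the dot-product similarity, and the blending coefficient `alpha` are all hypothetical choices.

```python
# Illustrative sketch (not the authors' implementation): fuse two fully
# connected layers by matching their neurons via linear assignment
# (a simple graph-matching step) before averaging the weights.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_and_fuse(w_a: np.ndarray, w_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Align the output neurons of layer B to those of layer A, then blend.

    w_a, w_b: weight matrices of shape (out_features, in_features) from two
    models trained on different tasks. The matching cost between neuron i of A
    and neuron j of B is the negative similarity of their incoming weights.
    """
    # Pairwise similarity between output neurons (rows of the weight matrices).
    similarity = w_a @ w_b.T                      # shape: (out, out)
    # Solve the assignment problem: maximize total similarity.
    row_ind, col_ind = linear_sum_assignment(-similarity)
    # Permute B's neurons so they line up with A's.
    w_b_aligned = w_b[col_ind]
    # Blend the aligned layers; alpha controls how much of model A is kept.
    return alpha * w_a + (1.0 - alpha) * w_b_aligned


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_task_a = rng.normal(size=(8, 16))
    # Model B is a permuted, lightly perturbed copy of A, mimicking two
    # converged solutions that differ only by a neuron reordering.
    perm = rng.permutation(8)
    w_task_b = w_task_a[perm] + 0.01 * rng.normal(size=(8, 16))
    fused = match_and_fuse(w_task_a, w_task_b)
    print("fused layer shape:", fused.shape)
```

In a full model, a step like this would be applied layer by layer, with the permutation of one layer's outputs propagated to the next layer's inputs so the fused network stays consistent.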