Continual learning aims to learn a sequence of tasks incrementally while retaining knowledge of previously learned tasks.
Pareto Continual Learning (ParetoCL) is a novel framework for balancing the resulting trade-off between stability (retaining old knowledge) and plasticity (acquiring new knowledge). ParetoCL formulates this trade-off as a multi-objective optimization problem and introduces a preference-conditioned model that can dynamically adapt the stability-plasticity balance at inference time.
Extensive experiments show that ParetoCL outperforms state-of-the-art methods in diverse continual learning scenarios.
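As a rough illustration of the preference-conditioned, multi-objective idea, the sketch below trains a single model conditioned on a randomly sampled preference vector and minimizes the preference-weighted (linearly scalarized) sum of a new-task loss and a replay loss on past data. The network architecture, the FiLM-style conditioning, the replay batch, and all names here are assumptions made for this example, not the actual ParetoCL implementation.

```python
# Minimal sketch of preference-conditioned multi-objective training
# (illustrative only; not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PreferenceConditionedNet(nn.Module):
    """Classifier whose features are modulated by a 2-d preference vector
    (relative weight on plasticity vs. stability)."""

    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # FiLM-style conditioning: the preference produces a scale and shift.
        self.film = nn.Linear(2, 2 * hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, pref):
        h = self.backbone(x)
        gamma, beta = self.film(pref).chunk(2, dim=-1)
        return self.head(gamma * h + beta)


def train_step(model, opt, new_batch, replay_batch):
    """One update: sample a preference and minimize the preference-weighted
    sum of the new-task loss (plasticity) and the replay loss (stability)."""
    x_new, y_new = new_batch
    x_old, y_old = replay_batch

    # Sample a preference from the simplex; training over many preferences
    # teaches one model to cover different points of the Pareto front.
    pref = torch.distributions.Dirichlet(torch.ones(2)).sample()

    loss_new = F.cross_entropy(model(x_new, pref), y_new)  # plasticity objective
    loss_old = F.cross_entropy(model(x_old, pref), y_old)  # stability objective
    loss = pref[0] * loss_new + pref[1] * loss_old          # linear scalarization

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Under these assumptions, the same trained model can be steered at inference by the choice of preference, e.g. `model(x, torch.tensor([0.2, 0.8]))` to favour stability, which is one simple way to realize the dynamic adaptation described above.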