Continual learning (CL) is a technique for keeping the forgetting loss on previously learned tasks small in an online learning setup.
Existing work focuses on reducing the forgetting loss under a given task sequence, but fails to address the large forgetting loss incurred on prior distinct tasks when similar tasks appear consecutively.
In IoT networks, where an autonomous vehicle travels to sample data and learn different tasks, the order of task patterns can be altered, albeit at an increased travelling cost.
Researchers have formulated a new optimization problem that studies how to opportunistically route the testing object and alter the task sequence in CL, achieving close-to-optimum performance.
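The intuition that task order affects total forgetting can be illustrated with a toy scalar example (a minimal sketch under illustrative assumptions: scalar quadratic losses, made-up targets, and plain gradient descent; this is not the paper's model or algorithm):

```python
# Toy sketch (hypothetical setup, not from the paper): each "task" is a scalar
# quadratic loss (w - target)**2, and similar tasks share the same target.
# Training the tasks in different orders leaves a different amount of total
# forgetting, which is the leverage a task-reordering scheme exploits.

def train(order, w=0.0, lr=0.2, steps=20):
    """Sequentially fit w to each task's target with gradient descent."""
    for target in order:
        for _ in range(steps):
            w -= lr * 2.0 * (w - target)  # gradient of (w - target)**2
    return w

def total_loss(order):
    """Sum of per-task losses after the whole sequence (a forgetting proxy)."""
    w = train(order)
    return sum((w - t) ** 2 for t in order)

# A run of similar tasks (target 5) after one distinct task (target 0) leaves
# only the distinct task forgotten; the reverse order forgets two tasks.
print(total_loss([0.0, 5.0, 5.0]))  # ~25: only the target-0 task is forgotten
print(total_loss([5.0, 5.0, 0.0]))  # ~50: both target-5 tasks are forgotten
```

Because the final parameter tracks whichever tasks were trained last, placing a long run of similar tasks at the end sacrifices only the distinct earlier task, whereas the reverse order sacrifices the whole run; choosing the order is therefore a genuine optimization lever.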