Source: Arxiv

A Unified Gradient-based Framework for Task-agnostic Continual Learning-Unlearning

  • Recent advances in deep models have highlighted the need to combine continual learning (CL) for knowledge acquisition with machine unlearning (MU) for data removal, giving rise to the Continual Learning-Unlearning (CLU) paradigm.
  • The paper introduces a unified optimization framework, based on Kullback-Leibler (KL) divergence minimization, that connects the CL and MU processes. It decomposes the gradient update for approximate CLU into four components: learning new knowledge, unlearning targeted data, preserving existing knowledge, and modulation via weight saliency (see the first sketch after this list).
  • To balance knowledge updating against retention across sequential learning-unlearning cycles, a remain-preserved manifold constraint is introduced, which induces remaining-Hessian compensation in the CLU iterations (illustrated in the second sketch below).
  • Experiments show that the proposed UG-CLU framework delivers effective incremental learning, precise unlearning, and stable retained knowledge across a range of datasets and model architectures, supporting task-agnostic CLU scenarios with fine-grained unlearning at different levels.
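
To make the decomposition concrete, here is a minimal PyTorch sketch of what a four-component update of this kind could look like. It is an illustration under assumptions, not the paper's exact method: the cross-entropy losses, the forget_coef/retain_coef weights, and the saliency_masks argument are hypothetical stand-ins for the KL-derived terms.

```python
import torch
import torch.nn.functional as F

def clu_gradient_step(model, optimizer, learn_batch, forget_batch, retain_batch,
                      saliency_masks, forget_coef=1.0, retain_coef=1.0):
    """One approximate CLU update combining four gradient components:
    learning new data, unlearning forget data (gradient ascent),
    preserving retained data, and saliency-based modulation."""
    x_new, y_new = learn_batch
    x_fgt, y_fgt = forget_batch
    x_ret, y_ret = retain_batch

    optimizer.zero_grad()
    loss = (
        F.cross_entropy(model(x_new), y_new)                  # learn new knowledge
        - forget_coef * F.cross_entropy(model(x_fgt), y_fgt)  # unlearn targeted data
        + retain_coef * F.cross_entropy(model(x_ret), y_ret)  # preserve existing knowledge
    )
    loss.backward()

    # Fourth component: modulate the raw update with a per-parameter
    # saliency mask so only weights judged relevant are moved.
    with torch.no_grad():
        for p, mask in zip(model.parameters(), saliency_masks):
            if p.grad is not None:
                p.grad.mul_(mask)
    optimizer.step()
    return loss.item()
```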
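
The remaining-Hessian compensation can likewise be sketched with a standard approximation: precondition updates by a damped diagonal Fisher information matrix computed on the retained data, so that parameters the retain set depends on strongly move less. The diagonal Fisher is an assumed stand-in for the paper's remaining Hessian, and all names and constants here are illustrative.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, retain_loader, n_batches=8):
    """Diagonal Fisher approximation of the retain-set Hessian,
    a common tractable stand-in for the full Hessian."""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    seen = 0
    for x, y in retain_loader:
        if seen == n_batches:
            break
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            if p.grad is not None:
                f.add_(p.grad.detach() ** 2)
        seen += 1
    return [f / max(seen, 1) for f in fisher]

def compensated_update(model, fisher, lr=1e-3, damping=1e-3):
    """Apply already-computed gradients scaled by the inverse diagonal
    Fisher: parameters the retained data is sensitive to move less,
    keeping the iterate near the remain-preserved manifold."""
    with torch.no_grad():
        for p, f in zip(model.parameters(), fisher):
            if p.grad is not None:
                p.add_(p.grad / (f + damping), alpha=-lr)
```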
