techminis

A naukri.com initiative

Image Credit: Arxiv

Gradient-free Continual Learning

  • Continual learning (CL) is the problem of training neural networks on a sequence of tasks without catastrophic forgetting of earlier ones.
  • Traditional CL approaches rely on gradient-based optimization, using stochastic gradient descent (SGD) or its variants.
  • Gradient-based CL breaks down when data from previous tasks is unavailable: parameter updates driven only by the current task are uncontrolled with respect to earlier tasks, causing significant forgetting.
  • This work explores gradient-free optimization methods as a robust alternative for mitigating forgetting in CL.
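
The summary does not detail the paper's actual method, so the sketch below is only a minimal illustration of the general idea: training a model on sequential tasks with a gradient-free optimizer (simple random-search hill climbing) instead of SGD. The toy linear tasks, the function names, and the hyperparameters are all assumptions for illustration, not the paper's setup; it also shows that, without extra constraints, sequential training still forgets.

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error of a linear model y ~ X @ w.
    return float(np.mean((X @ w - y) ** 2))

def random_search(w, X, y, steps=500, sigma=0.1, rng=None):
    # Gradient-free optimization: propose a Gaussian perturbation of the
    # weights and keep it only if it lowers the loss (hill climbing).
    # No gradients of the loss are ever computed.
    if rng is None:
        rng = np.random.default_rng(0)
    best = loss(w, X, y)
    for _ in range(steps):
        cand = w + sigma * rng.normal(size=w.shape)
        c = loss(cand, X, y)
        if c < best:
            w, best = cand, c
    return w

rng = np.random.default_rng(42)
# Toy sequential tasks (hypothetical, for illustration only):
# Task 1 targets y = 2*x0; Task 2 targets y = 2*x0 + 1*x1.
X1 = rng.normal(size=(64, 2)); y1 = 2.0 * X1[:, 0]
X2 = rng.normal(size=(64, 2)); y2 = 2.0 * X2[:, 0] + 1.0 * X2[:, 1]

w = np.zeros(2)
w = random_search(w, X1, y1, rng=rng)   # train on Task 1
t1_loss_after_t1 = loss(w, X1, y1)
w = random_search(w, X2, y2, rng=rng)   # then Task 2, with no Task 1 data
t2_loss_after_t2 = loss(w, X2, y2)
t1_loss_after_t2 = loss(w, X1, y1)      # Task 1 loss typically rises again

print(t1_loss_after_t1, t2_loss_after_t2, t1_loss_after_t2)
```

A gradient-free optimizer like this only needs loss evaluations, which is what makes it a candidate for controlling parameter changes in CL settings; how the paper actually constrains updates to prevent the forgetting visible in this unconstrained toy run is not specified in the summary above.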
