
Source: Arxiv

VAMO: Efficient Large-Scale Nonconvex Optimization via Adaptive Zeroth Order Variance Reduction

  • VAMO is a newly introduced method for large-scale nonconvex optimization that balances fast convergence with computational efficiency.
  • It combines first-order mini-batch gradients with zeroth-order finite-difference probes in an SVRG-style variance-reduced update (see the sketch after this list).
  • This hybrid design achieves a dimension-agnostic convergence rate, avoiding the dimension-dependent slowdown of purely zeroth-order methods while improving on SGD's rate.
  • In experiments, VAMO outperforms established baselines across a range of settings, offering a faster and more flexible option for efficiency-sensitive optimization tasks.
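
To make the mechanism concrete, here is a minimal, hypothetical Python sketch of an SVRG-style loop that mixes a first-order mini-batch correction with a zeroth-order finite-difference correction. The mixing weight `alpha`, step size `eta`, probe count, and the names `hybrid_svrg_loop` / `zo_grad` are illustrative assumptions, not the paper's exact algorithm or hyperparameters.

```python
import numpy as np

def zo_grad(f, x, dirs, mu=1e-4):
    """Zeroth-order gradient estimate: forward finite differences
    along a fixed set of random probe directions `dirs`."""
    est = np.zeros_like(x)
    for u in dirs:
        est += (f(x + mu * u) - f(x)) / mu * u
    return est / len(dirs)

def hybrid_svrg_loop(f, grad_batch, x0, n, epochs=5, m=50, eta=0.05,
                     batch=8, probes=10, alpha=0.5, seed=0):
    """Hypothetical hybrid FO/ZO SVRG-style loop (illustrative sketch,
    not the paper's exact method). `grad_batch(x, idx)` returns the
    averaged first-order gradient over component indices `idx`;
    `alpha` mixing the FO and ZO corrections is an assumption."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    d = x.shape[0]
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = grad_batch(x_snap, np.arange(n))  # SVRG anchor gradient
        for _ in range(m):
            idx = rng.choice(n, size=batch, replace=False)
            # First-order variance-reduction correction (standard SVRG term)
            fo_corr = grad_batch(x, idx) - grad_batch(x_snap, idx)
            # Zeroth-order correction: shared probe directions at x and x_snap
            dirs = rng.standard_normal((probes, d))
            zo_corr = zo_grad(f, x, dirs) - zo_grad(f, x_snap, dirs)
            # Hybrid variance-reduced gradient estimate
            v = full_grad + alpha * fo_corr + (1 - alpha) * zo_corr
            x -= eta * v
    return x

# Toy usage: least squares, f(x) = mean_i (a_i . x - b_i)^2
rng = np.random.default_rng(1)
n, d = 100, 20
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
f = lambda x: np.mean((A @ x - b) ** 2)
grad_batch = lambda x, idx: 2 * A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_opt = hybrid_svrg_loop(f, grad_batch, np.zeros(d), n)
```

Using common probe directions at `x` and `x_snap` keeps the zeroth-order correction low-variance, mirroring how the first-order SVRG term pairs gradients at the current point and the snapshot.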
