A new method called VAMO has been introduced for large-scale nonconvex optimization, balancing fast convergence with computational efficiency.
VAMO combines first-order mini-batch gradients with zeroth-order finite-difference probes within an SVRG-style variance-reduction framework, as sketched below.
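To make the hybrid design concrete, here is a minimal sketch of one way such an SVRG-style hybrid loop can be organized: a first-order gradient computed at a snapshot point anchors each epoch, and the inner loop applies a zeroth-order finite-difference correction around that anchor. The function names, parameters (`mu`, `num_probes`, learning rate), the specific combination rule, and the quadratic test problem are all illustrative assumptions for this sketch, not VAMO's published algorithm or hyperparameters.

```python
import numpy as np

def zo_grad(f, x, data_batch, mu=1e-3, num_probes=10, rng=None):
    """Zeroth-order gradient estimate from random finite-difference probes.

    Averages (f(x + mu*u) - f(x)) / mu * u over random Gaussian directions u.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    fx = f(x, data_batch)
    g = np.zeros(d)
    for _ in range(num_probes):
        u = rng.standard_normal(d)
        g += (f(x + mu * u, data_batch) - fx) / mu * u
    return g / num_probes

def hybrid_svrg(f, grad_f, x0, data, epochs=10, inner_steps=50,
                batch_size=32, lr=0.05, seed=0):
    """Illustrative SVRG-style loop mixing a first-order snapshot gradient
    with zeroth-order inner-loop corrections (not the authors' exact method)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(data)
    for _ in range(epochs):
        x_snap = x.copy()
        # First-order anchor: gradient over the full data at the snapshot.
        full_grad = grad_f(x_snap, data)
        for _ in range(inner_steps):
            batch = data[rng.choice(n, size=batch_size, replace=False)]
            # Zeroth-order finite-difference probes on the sampled mini-batch.
            g_x = zo_grad(f, x, batch, rng=rng)
            g_snap = zo_grad(f, x_snap, batch, rng=rng)
            # Variance-reduced hybrid direction: ZO correction around the FO anchor.
            v = g_x - g_snap + full_grad
            x -= lr * v
    return x

if __name__ == "__main__":
    # Hypothetical least-squares test problem used only to exercise the sketch.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 20))
    b = rng.standard_normal(200)
    data = np.arange(200)  # sample indices into (A, b)

    def f(x, idx):
        r = A[idx] @ x - b[idx]
        return 0.5 * np.mean(r ** 2)

    def grad_f(x, idx):
        r = A[idx] @ x - b[idx]
        return A[idx].T @ r / len(idx)

    x_final = hybrid_svrg(f, grad_f, np.zeros(20), data)
    print("final objective:", f(x_final, data))
```

The design choice the sketch illustrates is that the expensive, low-variance first-order anchor is computed only once per epoch, while the cheap inner-loop steps rely on function evaluations alone; the snapshot-difference term keeps the variance of the hybrid direction under control.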
This hybrid design gives VAMO a dimension-agnostic convergence rate, avoiding the dimension-dependent slowdown of purely zeroth-order methods while improving on SGD's rate.
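For context, the comparison can be summarized with the standard rates from the nonconvex stochastic-optimization literature (stated here as background assumptions, not figures quoted from this work), where T is the number of iterations and d the problem dimension:

```latex
\min_{t \le T} \mathbb{E}\,\|\nabla f(x_t)\|^2 =
\begin{cases}
  \mathcal{O}\!\left(1/\sqrt{T}\right) & \text{SGD (nonconvex)}\\[2pt]
  \mathcal{O}\!\left(d/T\right)        & \text{purely zeroth-order SVRG-type methods}\\[2pt]
  \mathcal{O}\!\left(1/T\right)        & \text{first-order SVRG-type, dimension-agnostic target}
\end{cases}
```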
Experiments show that VAMO outperforms established baselines across a range of settings, offering a faster and more flexible option for efficient large-scale optimization.