This work examines the robustness of first-order optimization methods to relative inexactness in their gradient computations.
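For concreteness, relative inexactness in the gradient is commonly modeled by a multiplicative oracle bound; the formulation below is the standard one, and the paper's exact oracle model may differ in its details:

\[
\|\tilde g(x) - \nabla f(x)\| \le \varepsilon\, \|\nabla f(x)\|, \qquad 0 \le \varepsilon < 1,
\]

where $\tilde g(x)$ denotes the inexact gradient returned by the oracle and $\varepsilon$ is the relative error level.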
Three major families of methods are analyzed: constant-step gradient descent, long-step methods, and accelerated methods.
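As a minimal illustration of the first family, the sketch below runs constant-step gradient descent against a relatively inexact gradient oracle. The quadratic objective, the noise model, and all parameter values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def inexact_grad(grad, eps, rng):
    """Wrap an exact gradient into a relatively inexact oracle returning g with
    ||g - grad(x)|| <= eps * ||grad(x)||  (illustrative noise model)."""
    def oracle(x):
        g = grad(x)
        noise = rng.standard_normal(g.shape)
        noise *= eps * np.linalg.norm(g) / max(np.linalg.norm(noise), 1e-12)
        return g + noise
    return oracle

def gd_constant_step(oracle, x0, step, n_iters):
    """Constant-step gradient descent driven by the (inexact) oracle."""
    x = x0.copy()
    for _ in range(n_iters):
        x -= step * oracle(x)
    return x

# Example: minimize the quadratic f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2            # smoothness constant: sigma_max(A)^2
oracle = inexact_grad(grad, eps=0.1, rng=rng)
x = gd_constant_step(oracle, np.zeros(10), step=1.0 / L, n_iters=500)
print("final gradient norm:", np.linalg.norm(grad(x)))
```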
It is first shown theoretically that long-step and accelerated methods are not robust to this inexactness.
A semi-heuristic shortening factor is introduced to enhance the theoretical guarantees of long-step and accelerated methods.
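The paper's precise shortening rule is not reproduced here; the sketch below is a hypothetical illustration in which every step of a long-step schedule is rescaled by a single factor tied to the inexactness level. The 1/(1+eps) form, the step pattern, and the function names are all illustrative assumptions.

```python
import numpy as np

def shortened_schedule(steps, eps):
    """Hypothetical shortening: rescale every step in a long-step schedule
    by a factor that shrinks with the inexactness level eps.
    The 1 / (1 + eps) form is an illustrative choice, not the paper's rule."""
    factor = 1.0 / (1.0 + eps)
    return [factor * h for h in steps]

def gd_with_schedule(grad, x0, steps):
    """Gradient descent driven by a per-iteration step-size schedule."""
    x = x0.copy()
    for h in steps:
        x -= h * grad(x)
    return x

# Illustrative long-step pattern on f(x) = 0.5 * ||x||^2 (so L = 1):
grad = lambda x: x
long_steps = [1.5 if k % 3 == 2 else 1.0 for k in range(30)]
x = gd_with_schedule(grad, np.ones(5), shortened_schedule(long_steps, eps=0.1))
print("final gradient norm:", np.linalg.norm(grad(x)))
```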
All methods are then tested numerically on a problem with inexact gradients; the experiments show that accelerated methods are more robust than the theory predicts and that the shortening factor significantly improves long-step methods.
The study concludes that all shortened methods appear promising, even in an inexact setting.