Machine learning algorithms in high-dimensional settings are highly susceptible to the influence of even a small fraction of structured outliers, making robust optimization techniques essential.
Novel algorithms have been developed that achieve minimax-optimal excess risk (up to logarithmic factors) under the epsilon-contamination model.
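The epsilon-contamination (Huber) model assumes each sample is drawn from the clean distribution with probability 1 - epsilon and from an arbitrary adversarial distribution with probability epsilon. A minimal sketch of such a data-generating process, with an illustrative shifted-Gaussian playing the role of the adversarial distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_contaminated(n, eps, d, outlier_shift=10.0):
    """Draw n points from a Huber eps-contamination model:
    with probability 1 - eps, a clean N(0, I_d) sample;
    with probability eps, an outlier from an arbitrary
    distribution (here: a shifted Gaussian, purely for
    illustration -- the adversary may choose any Q)."""
    clean = rng.standard_normal((n, d))
    outliers = clean + outlier_shift
    mask = rng.random(n) < eps          # True marks a contaminated sample
    return np.where(mask[:, None], outliers, clean), mask

X, mask = sample_contaminated(1000, 0.05, 3)
```

In practice the learner observes only `X`, not `mask`; a robust estimator must tolerate the contaminated rows without knowing which they are.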
These algorithms do not require stringent assumptions, such as Lipschitz continuity or smoothness of individual sample functions, improving over existing suboptimal approaches.
The developed algorithms can also handle the case of an unknown covariance parameter and extend to nonsmooth population risks via convolutional smoothing.
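Convolutional smoothing replaces a nonsmooth risk f with its convolution against a localized kernel, F_u(x) = E_Z[f(x + u Z)], which is differentiable even when f is only Lipschitz. A minimal Monte Carlo sketch, using a uniform-on-the-ball kernel (the kernel choice and helper names here are illustrative, not taken from the source):

```python
import numpy as np

rng = np.random.default_rng(1)

def smoothed_value(f, x, radius=0.1, n_samples=2000):
    """Monte Carlo estimate of the smoothed function
    F_u(x) = E_Z[f(x + radius * Z)], with Z uniform on the
    unit ball. F_u inherits smoothness from the kernel even
    when f itself is nonsmooth."""
    d = x.shape[0]
    # Sample uniformly from the unit ball: random direction
    # times a radius with density proportional to r^(d-1).
    z = rng.standard_normal((n_samples, d))
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    r = rng.random(n_samples) ** (1.0 / d)
    pts = x + radius * (z * r[:, None])
    return float(np.mean([f(p) for p in pts]))

f = lambda v: np.abs(v).sum()        # nonsmooth l1 norm
val = smoothed_value(f, np.zeros(2)) # smoothed value at the kink
```

At the origin the l1 norm is nondifferentiable, but the smoothed surrogate is well defined and small (of order `radius`), trading a controlled bias for smoothness.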