This paper investigates the robustness of an f-divergence-based class of objective functions, referred to as f-PML, to label noise in supervised classification.
The study shows that, in the presence of label noise, the f-PML objective functions can be corrected so that training on the noisy dataset yields a neural network matching the one that would be obtained from the clean dataset.
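To make the idea concrete, below is a minimal sketch of a generic backward loss correction under symmetric label noise (in the style of Patrini et al., 2017), assuming a known noise rate `eps`; the function names and this exact form are illustrative assumptions, not the paper's f-PML correction:

```python
import numpy as np

def symmetric_noise_transition(num_classes: int, eps: float) -> np.ndarray:
    # T[i, j] = probability that clean label i is observed as noisy label j:
    # kept with probability 1 - eps, flipped uniformly to another class otherwise.
    T = np.full((num_classes, num_classes), eps / (num_classes - 1))
    np.fill_diagonal(T, 1.0 - eps)
    return T

def backward_corrected_loss(per_class_loss: np.ndarray, noisy_label: int, eps: float) -> float:
    # Backward correction: reweight the vector of per-class losses by the row
    # of T^{-1} indexed by the observed noisy label, making the corrected loss
    # an unbiased estimate of the loss under the clean label distribution.
    T = symmetric_noise_transition(per_class_loss.shape[0], eps)
    return float(np.linalg.inv(T)[noisy_label] @ per_class_loss)

# Example: per-class cross-entropy losses for one sample with 3 classes.
probs = np.array([0.7, 0.2, 0.1])
losses = -np.log(probs)  # loss the model would incur for each candidate label
print(backward_corrected_loss(losses, noisy_label=0, eps=0.2))
```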
An alternative correction approach is proposed that refines the posterior estimate at test time for neural networks already trained on noisy labels.
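As an illustration of such a test-time correction, the sketch below inverts the symmetric-noise transition on a model's softmax output; this is a standard affine inversion under an assumed known noise rate `eps`, not necessarily the paper's exact procedure:

```python
import numpy as np

def correct_posterior(noisy_posterior: np.ndarray, eps: float) -> np.ndarray:
    # Under symmetric noise over K classes,
    #   p_noisy(k|x) = (1 - eps*K/(K-1)) * p_clean(k|x) + eps/(K-1).
    # Invert this affine map, then clip and renormalize to absorb estimation error.
    K = noisy_posterior.shape[-1]
    scale = 1.0 - eps * K / (K - 1)
    p = (noisy_posterior - eps / (K - 1)) / scale
    p = np.clip(p, 0.0, None)
    return p / p.sum(axis=-1, keepdims=True)

# Example: a softmax output from a model trained with 20% symmetric label noise.
print(correct_posterior(np.array([0.5, 0.3, 0.2]), eps=0.2))
```

Note that this map is affine and increasing whenever eps < (K-1)/K, so it leaves the argmax decision unchanged and only recalibrates the probability estimates, which is consistent with the claimed robustness of the decision rule to symmetric noise.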
The paper demonstrates that the f-PML objective functions are robust to symmetric label noise and, when combined with refined training strategies, achieve competitive performance.