Image classification models trained on clean data often suffer significant performance degradation when evaluated on corrupted test data, such as images with impulse noise, Gaussian noise, or environmental noise.
Robust learning algorithms like Sharpness-Aware Minimization (SAM) have shown promise in improving overall model robustness and generalization, but they fall short of addressing unequal performance degradation across demographic subgroups.
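For context, the standard SAM objective (Foret et al., 2021; notation not taken from this abstract) seeks parameters lying in neighborhoods of uniformly low loss by minimizing the worst-case training loss within an $\ell_2$ ball of radius $\rho$ around the weights $w$:
\[
% standard SAM formulation; the inner maximization is usually approximated by one gradient ascent step
\min_{w} \; \max_{\|\epsilon\|_2 \le \rho} \; L(w + \epsilon),
\]
where $L$ denotes the training loss.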
FairSAM introduces a novel metric to assess performance degradation across subgroups under data corruption and integrates fairness-oriented strategies into SAM.
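As an illustrative sketch only (the paper's actual metric may be defined differently), per-subgroup degradation under corruption can be expressed as each demographic subgroup's accuracy drop, with disparity measured as the worst-case gap between subgroups:
\[
% hypothetical illustration, not the paper's definition
\Delta_g = \mathrm{Acc}_g(\text{clean}) - \mathrm{Acc}_g(\text{corrupted}), \qquad
\mathrm{Disparity} = \max_{g} \Delta_g - \min_{g} \Delta_g .
\]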
Experiments demonstrate that FairSAM reconciles robustness and fairness, offering a structured solution for equitable and resilient image classification in the presence of data corruption.