Overfitting is a common issue in machine learning, affecting even state-of-the-art models, and leads to reduced generalization and a significant performance gap between the training and test sets.
To address overfitting, regularization techniques such as dropout, data augmentation, and weight decay are commonly used.
We propose Relevance-driven Input Dropout (RelDrop), a new data augmentation method that selectively occludes the most relevant regions of the input, improving model generalization through informed regularization.
Experiments on benchmark datasets show that RelDrop improves robustness to occlusion, encourages models to rely on a broader set of input features for prediction, and improves generalization performance at inference time.
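As a rough illustration of the underlying idea (not the paper's exact procedure), the following PyTorch sketch occludes the most relevant input pixels before a training step. It uses input-gradient magnitude as a stand-in relevance measure; the function name, the drop_fraction parameter, and the choice of attribution method are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def relevance_driven_input_dropout(model, x, y, drop_fraction=0.1):
    """Illustrative sketch: occlude the most relevant input pixels.

    Assumption: input-gradient magnitude serves as the relevance measure;
    the actual method may use a different attribution technique.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grads, = torch.autograd.grad(loss, x)

    # Per-pixel relevance: aggregate gradient magnitude over channels.
    relevance = grads.abs().sum(dim=1, keepdim=True)          # (B, 1, H, W)
    flat = relevance.flatten(1)                                # (B, H*W)

    # Threshold at the k-th largest relevance value per sample.
    k = max(1, int(drop_fraction * flat.size(1)))
    thresh = flat.topk(k, dim=1).values[:, -1:]                # (B, 1)

    # Keep pixels below the threshold; zero out the top-k most relevant ones.
    mask = (flat < thresh).float().view_as(relevance)          # (B, 1, H, W)
    return x.detach() * mask                                   # occluded inputs for training
```

In a training loop, the occluded batch returned by this function would replace (or augment) the original batch for the forward and backward pass, forcing the model to draw on features beyond the few it currently relies on most.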