Adversarial attacks expose a critical vulnerability of machine learning models: nearly imperceptible perturbations to input images can cause them to misclassify.
Existing defense mechanisms for mitigating adversarial attacks often incur substantial training time and computational cost.
This study presents an improved defense model that incorporates residual blocks to enhance the generalizability and transferability of the defense.
Experimental results demonstrate that the proposed model restores classification accuracy on adversarial examples while remaining competitive with state-of-the-art defenses.
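To make the threat model concrete, the following is a minimal sketch of a sign-based (FGSM-style) perturbation against a toy linear classifier. The attack, the classifier, and all numbers here are illustrative assumptions, not the paper's actual setup: for a linear score w·x, stepping each input feature by a small amount against the sign of the gradient (which is simply w) shifts the score by exactly eps·‖w‖₁, enough to flip the label even though no single feature moves by more than 2%.

```python
import numpy as np

d = 784                          # image-sized toy input (assumption)
w = np.cos(np.arange(d))         # fixed, mixed-sign classifier weights (hypothetical)
x = 0.5 + 0.01 * np.sign(w)     # an input in [0, 1] that the classifier scores positive

def predict(x):
    """Hypothetical linear classifier: label 1 if the score w.x is positive."""
    return 1 if w @ x > 0 else 0

# For a linear model the gradient of the score w.r.t. the input is w itself,
# so the FGSM step against the predicted class is x' = x - eps * sign(w).
eps = 0.02                       # each feature changes by at most 2%
x_adv = x - eps * np.sign(w)

print(predict(x))                          # → 1  (original label)
print(predict(x_adv))                      # → 0  (label flipped by the attack)
print(float(np.max(np.abs(x_adv - x))))   # per-feature change bounded by eps
```

The flip is guaranteed here because the perturbation lowers the score by eps·‖w‖₁ ≈ 10, far exceeding the clean score of about 5, while the per-pixel change stays at 0.02; in high-dimensional inputs this is exactly why tiny perturbations can be so effective.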