Anomaly Detection (AD) and Anomaly Localization (AL) are critical in high-reliability fields like medical imaging and industrial monitoring.
Current AD and AL methods are vulnerable to adversarial attacks, in part because their training data are limited, consisting mainly of unlabeled normal samples.
PatchGuard is introduced as an adversarially robust AD and AL technique that incorporates pseudo anomalies and localization masks within a Vision Transformer (ViT) architecture to address these vulnerabilities.
The study explores the essential features of pseudo anomalies and provides theoretical insights into attention mechanisms required to enhance the adversarial robustness of AD and AL systems.
The approach leverages Foreground-Aware Pseudo-Anomalies, which confine synthetic anomalies to foreground regions, to overcome shortcomings of prior anomaly-aware methods, and integrates them into a ViT-based framework.
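As a rough illustration of what foreground-aware pseudo-anomaly generation could look like, the sketch below corrupts a patch anchored at a coarsely estimated foreground pixel and records the matching localization mask. The mean-intensity foreground heuristic and the pixel-shuffle corruption are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def foreground_pseudo_anomaly(image, patch=32, rng=None):
    """Return a pseudo-anomalous image and its localization mask.

    `image`: HxW float array in [0, 1]. The foreground heuristic and
    the pixel-shuffle corruption are assumptions for this sketch.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape
    # Coarse foreground estimate: pixels brighter than the global mean.
    ys, xs = np.nonzero(image > image.mean())
    i = rng.integers(len(ys))
    # Anchor the patch around a sampled foreground pixel, clipped to bounds.
    y0 = int(np.clip(ys[i] - patch // 2, 0, h - patch))
    x0 = int(np.clip(xs[i] - patch // 2, 0, w - patch))
    out = image.copy()
    sl = (slice(y0, y0 + patch), slice(x0, x0 + patch))
    # Corrupt the patch by randomly permuting its pixels (CutPaste-style stand-in).
    out[sl] = rng.permutation(out[sl].ravel()).reshape(patch, patch)
    # The localization mask marks exactly the altered pixels.
    mask = np.zeros_like(image, dtype=np.float32)
    mask[sl] = 1.0
    return out, mask
```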
Adversarial training is guided by a novel loss function designed to enhance model robustness, and its effectiveness is supported by theoretical analysis.
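The sketch below shows a generic adversarial training step of the kind described above: an L-infinity PGD attack maximizes a per-pixel localization loss against the pseudo-anomaly masks, and the model is then updated on the resulting adversarial inputs. The BCE objective, the model interface, and the PGD hyperparameters are placeholder assumptions; PatchGuard's actual loss is not reproduced here:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD against a per-pixel localization loss.

    Assumes `model(x)` returns per-pixel anomaly logits shaped like
    `mask`; the BCE pairing is an illustrative stand-in for the
    paper's loss, not its exact form.
    """
    # Random start inside the eps-ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(model(x_adv), mask)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                              # keep a valid image
    return x_adv.detach()

def adversarial_step(model, optimizer, images, masks):
    """One training step: craft adversarial pseudo-anomalous inputs,
    then minimize the localization loss on them."""
    x_adv = pgd_attack(model, images, masks)
    optimizer.zero_grad()
    loss = F.binary_cross_entropy_with_logits(model(x_adv), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```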
Experimental results on established industrial and medical datasets show that PatchGuard surpasses previous methods in adversarial scenarios with significant performance gains of 53.2% in AD and 68.5% in AL, while maintaining competitive accuracy in non-adversarial settings.
The code repository for PatchGuard is available at https://github.com/rohban-lab/PatchGuard.