Deep neural networks (DNNs) are susceptible to Universal Adversarial Perturbations (UAPs) that can deceive a target model across a wide range of samples.
In this paper, a novel data-free method called Intrinsic UAP (IntriUAP) is proposed to attack deep models without using any image samples.
IntriUAP exploits the observation that the vulnerability of deep models is dominated by their linear components, and it achieves highly competitive performance in attacking popular image classification models.
The method also demonstrates strong black-box attack performance even when only a portion of the victim model's layers is accessible.
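To build intuition for why the linear components matter, consider a purely linear layer y = Wx: for a fixed perturbation norm, the input direction amplified most is the top right singular vector of W. The sketch below (illustrative only, not the paper's IntriUAP algorithm; the weight matrix and budget are hypothetical) compares the amplification of that worst-case direction against a random one.

```python
import numpy as np

# Illustrative sketch, not the paper's algorithm: for a linear layer
# y = W x, the top right singular vector of W is the unit-norm input
# direction that W amplifies the most, which is the intuition behind
# attacking the linear components of a deep model.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))  # hypothetical layer weight

# Worst-case direction via SVD.
_, s, Vt = np.linalg.svd(W, full_matrices=False)
v = Vt[0]  # unit-norm top right singular vector

gain_worst = np.linalg.norm(W @ v)  # equals the top singular value s[0]

# Compare with a random unit-norm perturbation direction.
r = rng.standard_normal(512)
r /= np.linalg.norm(r)
gain_rand = np.linalg.norm(W @ r)

print(gain_worst > gain_rand)  # → True
```

For a deep model, the perturbation that survives across many samples must align with such high-gain directions of the composed linear maps; a data-free attack can therefore be computed from the weights alone.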