Researchers at North Carolina State University have developed a new method called RisingAttacK, which subtly alters visual input to deceive AI models by targeting specific features within an image.
The perturbations are imperceptible to humans: the altered image looks identical to the original, yet it changes what the AI model "sees," potentially causing it to misidentify objects in safety-critical systems such as self-driving cars.
RisingAttacK was tested against four widely used vision architectures — ResNet-50, DenseNet-121, ViT-B, and DeiT-B — and successfully fooled all of them into misrecognizing common objects such as cars, bicycles, pedestrians, and stop signs.
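The details of RisingAttacK itself (including how it targets specific features in an image) are in the researchers' paper, but the general class of attack is well established. As a rough intuition, the following is a minimal sketch of the classic fast gradient sign method (FGSM) — an earlier gradient-based adversarial attack, not the RisingAttacK algorithm — applied to a pretrained ResNet-50 in PyTorch. The file name, label index, and perturbation budget `epsilon` are illustrative assumptions.

```python
# Illustrative sketch only: FGSM (Goodfellow et al., 2015), a generic
# gradient-based adversarial attack -- NOT the RisingAttacK method.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),  # pixels in [0, 1]; normalization is applied inside fgsm_perturb
])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def fgsm_perturb(image: torch.Tensor, label: int, epsilon: float = 2 / 255) -> torch.Tensor:
    """Return an adversarial copy of `image` whose per-pixel change is at most epsilon."""
    x = image.clone().unsqueeze(0).requires_grad_(True)
    logits = model(normalize(x))
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([label]))
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back into the valid pixel range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.squeeze(0).detach()

# Usage ("stop_sign.jpg" and the label index are hypothetical):
img = preprocess(Image.open("stop_sign.jpg").convert("RGB"))
true_label = 919  # assumed ImageNet class index for "street sign"
adv = fgsm_perturb(img, true_label)
print("max per-pixel change:", (adv - img).abs().max().item())  # bounded by epsilon
```

The key property this sketch illustrates is the one the researchers exploit: a change bounded to a fraction of a percent per pixel is invisible to a human viewer but can be enough to flip the model's prediction.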
While the current work focuses on computer vision, the researchers are exploring whether the attack extends to other kinds of AI systems and aim to develop defensive techniques, as digital safeguards for AI become increasingly important.