Tech Radar
Image Credit: Tech Radar

Hackers could one day use novel visual techniques to manipulate what AI sees - RisingAttacK impacts 'most widely used AI computer vision systems'

  • Researchers at North Carolina State University have developed a new method called RisingAttacK, which subtly alters visual input to deceive AI models by targeting specific features within an image (a generic sketch of this class of attack appears after this list).
  • The perturbation is imperceptible to humans: the image looks unchanged, yet the AI's interpretation shifts, potentially causing it to misidentify objects in critical systems such as self-driving cars.
  • RisingAttacK impacts widely used vision architectures, including ResNet-50, DenseNet-121, ViT-B, and DeiT-B, fooling them into misrecognizing common objects such as cars, bicycles, pedestrians, and stop signs.
  • While the current focus is on computer vision systems, the researchers are exploring broader implications and aim to develop defensive techniques against such attacks as digital safeguards for AI systems become increasingly important.
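The article does not disclose RisingAttacK's actual algorithm, so the snippet below is only a minimal sketch of the broader class of attack it belongs to: a gradient-based (FGSM-style) perturbation that nudges pixels imperceptibly until a pretrained ResNet-50 (one of the architectures named above) changes its prediction. The model weights, step size, and loss shown here are assumptions for illustration, not the researchers' method.

```python
# Hypothetical sketch of an imperceptible adversarial perturbation (FGSM-style)
# against a pretrained ResNet-50. Illustrates the general class of attack the
# article describes; this is NOT the RisingAttacK method, whose details are
# not given in this summary.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

def perturb(image_path: str, epsilon: float = 2 / 255) -> torch.Tensor:
    """Take one tiny signed-gradient step on the image so the model's
    prediction shifts while the change stays invisible to a human viewer.
    epsilon is an assumed, illustrative step size."""
    x = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(normalize(x))
    original = logits.argmax(dim=1)

    # Increase the loss of the current prediction, then step in the direction
    # of the gradient's sign; clamp keeps pixels in the valid [0, 1] range.
    loss = torch.nn.functional.cross_entropy(logits, original)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    adv_pred = model(normalize(x_adv)).argmax(dim=1)
    print(f"class before: {original.item()}  class after: {adv_pred.item()}")
    return x_adv
```

A single small step like this may or may not flip the prediction; iterative variants, and presumably RisingAttacK's feature-targeted optimization, are far more reliable while remaining just as hard for a human to spot.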
