Image Credit: Unite

‘Protected’ Images Are Easier, Not More Difficult, to Steal With AI

  • Research suggests that watermark-style perturbations meant to block AI manipulation can backfire, sometimes making it easier for AI editing tools to make unauthorized changes.
  • These systems aim to protect copyrighted images from being used by AI systems such as Latent Diffusion Models, but some protections may backfire.
  • Adversarial noise can cause image detectors to guess content incorrectly and hinder image-generating systems from exploiting copyrighted data.
  • Protection methods may unintentionally help AI models follow editing prompts more closely, resulting in better edits.
  • Methods like Mist and Glaze aim to prevent unauthorized use of copyrighted styles in AI training but may not provide sufficient protection.
  • New research suggests that adding perturbations to images may paradoxically strengthen the association between the image and the text prompt, leading to unintended, better edits.
  • Tests using protection methods like PhotoGuard, Mist, and Glaze show that protections do not completely block AI editing and may improve exploitability.
  • Protection methods that add noise to images may make it easier for AI to reshape them to match prompts, contrary to their intended purpose of safeguarding against manipulation (see the sketch after this list).
  • The study highlights limitations of adversarial perturbations for image protection and emphasizes the need for more effective techniques.
  • Protection methods may unintentionally bolster AI's responsiveness to prompts, allowing edits to align more closely with the requested changes and raising concerns about unauthorized copying.
  • The search for copyright protection via adversarial perturbation faces challenges, and alternative solutions such as third-party monitoring frameworks may need to be considered.

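The protections discussed above (PhotoGuard, Mist, Glaze) rely on adding small adversarial perturbations to an image so that a latent diffusion model misreads it. As a rough illustration only, the sketch below shows a PhotoGuard-style "encoder attack" under stated assumptions: it uses torch and diffusers, the stabilityai/sd-vae-ft-mse VAE, a zero target latent, and illustrative hyperparameters, none of which are taken from the cited research.

```python
# A minimal sketch of a PhotoGuard-style "encoder attack", assuming torch and
# diffusers are installed. The model name, target latent, and hyperparameters
# below are illustrative assumptions, not the settings from the cited study.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()
vae.requires_grad_(False)  # only the perturbation needs gradients

def add_protective_noise(image: torch.Tensor, eps: float = 0.06,
                         step: float = 0.01, iters: int = 40) -> torch.Tensor:
    """image: float tensor in [-1, 1], shape (1, 3, H, W)."""
    image = image.to(device)
    with torch.no_grad():
        # Push the latent toward an uninformative (zero) target so a latent
        # diffusion editor no longer "sees" the original content.
        target = torch.zeros_like(vae.encode(image).latent_dist.mean)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = vae.encode((image + delta).clamp(-1, 1)).latent_dist.mean
        loss = F.mse_loss(latent, target)
        loss.backward()
        with torch.no_grad():
            # Signed gradient step, keeping the perturbation inside an L-inf
            # ball so it stays (nearly) invisible to humans.
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (image + delta).clamp(-1, 1).detach()
```

The study's finding is that perturbations of this kind do not reliably stop img2img editing and can even make the edited output follow the prompt more closely than it would on the unprotected image.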