Image Credit: Unite

A Forensic Data Method for a New Generation of Deepfakes

  • Deepfake attacks can harm individuals by falsely associating them with certain content, even when they are never explicitly identified.
  • Low-Rank Adaptation (LoRA) has become the standard vehicle for training and disseminating identity-focused models, which most often target female celebrities.
  • Because face recognition systems link faces to identities automatically, recognizable individuals are inevitably associated with AI-generated content that depicts them.
  • A new paper from Denmark proposes a black-box Membership Inference Attack (MIA) for identifying the source images used to fine-tune a model.
  • The approach exposes source data by generating deepfakes from the suspect model and looking for signs of memorization in the AI-generated output (a minimal sketch of such a test follows this list).
  • Forensic analysis can reveal whether an AI model was fine-tuned on specific faces, which can help prove intent and support claims of copyright infringement.
  • Detection of source images is more effective against extensively fine-tuned AI models, particularly LoRAs that target specific individuals.
  • Visible watermarks in training images enhance detection accuracy, while hidden watermarks offer minimal advantage.
  • Membership inference attacks, tested on models fine-tuned on faces as well as on other object classes, prove effective at determining whether a given dataset was used.
  • The method is computationally intensive, which points to the need for further research before it can be applied routinely in deepfake forensic investigations.
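
The core test behind such an attack is straightforward to picture. The sketch below is a minimal, hypothetical illustration, not the paper's actual procedure: it assumes you can sample images from the suspect generator and from a clean baseline generator, and that a face-recognition encoder maps each face to an embedding vector. Synthetic vectors stand in for both steps here, and the membership_score statistic, the embedding dimension, and all noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def membership_score(candidate, suspect_samples, baseline_samples):
    """z-score of the suspect generator's similarity to the candidate
    image, measured against a baseline generator that was never
    fine-tuned on that identity. Large positive values suggest the
    identity was memorized during fine-tuning."""
    sus = np.array([cosine(candidate, e) for e in suspect_samples])
    base = np.array([cosine(candidate, e) for e in baseline_samples])
    return (sus.mean() - base.mean()) / (base.std() + 1e-8)

# Synthetic stand-ins for real face embeddings: in practice these would
# come from a face-recognition encoder applied to (a) the candidate
# source photo and (b) images sampled from each generator.
dim = 128
identity = rng.normal(size=dim)                     # the person's "true face"
candidate = identity + 0.05 * rng.normal(size=dim)  # their source photo

suspect = [identity + 0.3 * rng.normal(size=dim) for _ in range(50)]  # memorizing model
baseline = [rng.normal(size=dim) for _ in range(50)]                  # unrelated model

print(f"membership score: {membership_score(candidate, suspect, baseline):.1f}")
```

In a real investigation the embeddings would come from actual generated images, and the decision threshold would be calibrated on reference models known not to contain the target identity.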
