Watermarking schemes are being considered as a way to distinguish AI-generated content from human-created content. These schemes embed hidden signals in AI-generated content so that it can later be detected reliably.
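As a toy illustration (a hypothetical sketch, not any specific published scheme), a secret key can deterministically partition a vocabulary into "green" and "red" tokens; a watermarked generator favors green tokens, and a detector holding the same key checks whether the green-token fraction of a text is anomalously high:

```python
import hashlib

def is_green(token: str, key: str = "secret") -> bool:
    # Keyed hash deterministically assigns each token to the "green" half
    # of the vocabulary; without the key, the partition looks random.
    h = hashlib.sha256((key + token).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(tokens: list[str], key: str = "secret") -> float:
    # Fraction of tokens that fall in the green list.
    if not tokens:
        return 0.0
    return sum(is_green(t, key) for t in tokens) / len(tokens)

def detect(tokens: list[str], key: str = "secret",
           threshold: float = 0.7) -> bool:
    # Unwatermarked text should hover near 0.5 green; text from a
    # generator biased toward green tokens lands well above that.
    return green_fraction(tokens, key) >= threshold
```

Real schemes of this flavor bias the model's token distribution at generation time and replace the fixed threshold with a statistical test on the green-token count, trading detection confidence against text quality.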
Although not a complete solution, watermarking can contribute significantly to AI safety and trustworthiness by combating misinformation and deception.
This paper provides a comprehensive overview of watermarking techniques for generative AI, beginning with the case for watermarking from historical and regulatory perspectives. It formalizes the definitions and desired properties of watermarking schemes and analyzes key objectives and threat models. It then examines practical evaluation strategies for building watermarking techniques that withstand a range of attacks. Finally, it reviews recent work in this area, outlines open challenges, and discusses potential future directions for watermarking in generative AI.
The paper aims to guide researchers in advancing watermarking methods and applications, and to help policymakers address the broader implications of generative AI.