Policy Puppetry is a prompt injection technique that bypasses safety features in major AI models. By exploiting it, attackers can elicit harmful content that those safety measures are designed to block. Popular AI models affected by Policy Puppetry Prompt Injection (PPPI) include ChatGPT, Gemini, DeepSeek, Copilot, and others. The attack exploits a flaw in how these models handle policy-related data, and it poses significant security concerns.