Prompt injection is an attack where someone manipulates an AI system designed to follow instructions (or prompts).
An attacker can make the system ignore the original instructions, generate incorrect responses, or compromise system security.
Prompt injection can affect any application using generative AI, such as chatbots, productivity tools, and coding assistants.
To protect against prompt injection, implement external validations, separate operational context from user context, and monitor and log manipulation attempts.