A bug in Meta's AI chatbot may have exposed users' conversations to other users, allowing them to see private prompts and AI-generated responses.
The bug, discovered by security researcher Sandeep Hodkasia, has now been fixed after being live for several weeks, and Meta paid Hodkasia a bug bounty reward for reporting it.
The vulnerability stemmed from how Meta AI handled prompt editing: each prompt and its AI-generated response was assigned a unique number, and because those numbers were easily guessable, a request referencing someone else's number could return that user's prompt and response without authorization.
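This is the classic insecure direct object reference (IDOR) pattern. The sketch below is purely illustrative and assumes nothing about Meta's actual code; the data store, function names, and IDs are hypothetical, and it only shows why a guessable identifier is dangerous when the server skips an ownership check.

```python
# Hypothetical illustration of an IDOR flaw and its fix.
# None of these names or IDs reflect Meta's real implementation.

PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft my resume", "response": "..."},
    1002: {"owner": "bob", "prompt": "a private question", "response": "..."},
}

def get_prompt_vulnerable(prompt_id: int) -> dict:
    # Trusts the client-supplied number: anyone who guesses 1002
    # receives Bob's private prompt and the AI's response.
    return PROMPTS[prompt_id]

def get_prompt_fixed(prompt_id: int, requesting_user: str) -> dict:
    # Server-side ownership check: the record is returned only if it
    # belongs to the authenticated user making the request.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("prompt not found or not owned by requester")
    return record

if __name__ == "__main__":
    print(get_prompt_vulnerable(1002))       # leaks Bob's conversation
    print(get_prompt_fixed(1001, "alice"))   # allowed: Alice owns 1001
    try:
        get_prompt_fixed(1002, "alice")      # blocked by the ownership check
    except PermissionError as err:
        print("denied:", err)
```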
Although Meta says it found no evidence of misuse, the incident raises broader concerns about privacy in AI tools. It is also not the first privacy issue for Meta AI: earlier this year, some users accidentally shared chats they believed were private.