A hacker tricked ChatGPT into providing instructions for making homemade bombs, bypassing its safety guidelines. The hacker used social engineering to break the guardrails around ChatGPT's output, disguising the request as part of a fictional game to evade the restrictions. OpenAI was notified, but the issue was not considered to fall within the criteria of its bug bounty program.