LLM Flowbreaking has been identified as a third class of attack against LLMs, after jailbreaking and prompt injection. In the "Second Thoughts" attack, the LLM streams an answer and then retracts it, replacing the offensive content or displaying an error message after the user has already seen it. Because that retraction only arrives once the guardrail catches up with the stream, the companion "Stop and Roll" technique has the user press the Stop button while the LLM is still generating, so the violating response is kept before it can be withdrawn. Rather than targeting the model itself, these attacks exploit the application architecture surrounding the LLM, such as the streaming pipeline and the moderation components that run alongside it.
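To make the race condition concrete, the following is a minimal sketch, assuming an architecture in which output moderation runs concurrently with (and finishes after) token streaming. All names here (fake_llm_tokens, moderate, Client, serve) are hypothetical stand-ins rather than any vendor's real API; only the ordering of events matters.

```python
# Sketch of the streaming/moderation race that Flowbreaking exploits.
# Hypothetical names throughout; not a real LLM or guardrail SDK.
import asyncio


async def fake_llm_tokens():
    """Stand-in for a streaming LLM response."""
    for token in ["Here", " is", " the", " restricted", " answer", "..."]:
        await asyncio.sleep(0.05)   # simulated generation/network latency
        yield token


async def moderate(text: str) -> bool:
    """Stand-in for an output guardrail that completes only after
    tokens have already been shown to the user."""
    await asyncio.sleep(0.4)
    return "restricted" in text     # pretend this is a policy violation


class Client:
    """Simulated chat UI: shows tokens as they arrive, honors a Stop button."""
    def __init__(self):
        self.shown = ""
        self.stopped = False

    def show(self, token: str):
        self.shown += token

    def retract(self):
        # What "Second Thoughts" looks like to the user: the streamed
        # answer is replaced with an error message after the fact.
        self.shown = "[Sorry, I can't help with that.]"


async def serve(client: Client, press_stop_after: float | None):
    if press_stop_after is not None:
        asyncio.get_running_loop().call_later(
            press_stop_after, lambda: setattr(client, "stopped", True))

    streamed = ""
    async for token in fake_llm_tokens():
        if client.stopped:          # "Stop and Roll": user aborts the stream,
            return                  # so the retraction below never runs
        client.show(token)
        streamed += token

    if await moderate(streamed):    # guardrail fires only after streaming ends
        client.retract()


async def main():
    normal = Client()
    await serve(normal, press_stop_after=None)
    print("Without Stop:", normal.shown)    # answer gets retracted

    attacker = Client()
    await serve(attacker, press_stop_after=0.18)
    print("With Stop:   ", attacker.shown)  # partial violating answer survives


asyncio.run(main())
```

The sketch illustrates why the flaw sits in the surrounding components rather than the model: the model's output is unchanged in both runs, and only the ordering of streaming, moderation, and the user's Stop action decides whether the guardrail's retraction ever reaches the client.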