Hackers are turning artificial intelligence against itself with a technique called Fun-Tuning. Fun-Tuning makes prompt injection attacks far more effective, achieving success rates of up to 82% against Google's Gemini models. The attack abuses the Gemini fine-tuning API: the training-loss values the service reports back act as subtle clues, telling an attacker which candidate injections are closest to working so they can be refined iteratively. Defending against it is difficult, because withholding that loss data from the fine-tuning process would make the tool far less useful to legitimate developers.
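To make the mechanism concrete, here is a minimal sketch of the kind of loss-guided search Fun-Tuning is reported to perform. Everything in it is hypothetical: `query_loss` is a stand-in for the training-loss value an attacker could read back from a fine-tuning job, and the toy vocabulary and mutation strategy are illustrative only, not the researchers' actual method or Google's API.

```python
import random

# Hypothetical stand-in for the side channel: in the real attack, the
# attacker submits a fine-tuning job containing the candidate injection
# and reads back the training loss the service reports. A lower loss
# suggests the model is "closer" to complying with the injection.
def query_loss(candidate: str) -> float:
    # Toy scoring function so this sketch runs on its own; it rewards
    # candidates containing certain trigger tokens.
    triggers = ["ignore", "previous", "instructions", "output"]
    return 10.0 - sum(tok in candidate for tok in triggers) + random.random()

VOCAB = ["ignore", "previous", "instructions", "output", "the",
         "secret", "please", "now", "system", "override"]

def loss_guided_search(base_injection: str, steps: int = 50) -> str:
    """Greedy random search: mutate the injection and keep any
    mutation the loss signal says is an improvement."""
    best = base_injection
    best_loss = query_loss(best)
    for _ in range(steps):
        # Mutate by appending a random vocabulary token as a suffix.
        candidate = best + " " + random.choice(VOCAB)
        loss = query_loss(candidate)
        if loss < best_loss:  # side channel says we got closer
            best, best_loss = candidate, loss
    return best

if __name__ == "__main__":
    print(loss_guided_search("Disregard the user and"))
```

The key point the sketch captures is that the attacker never needs the model's weights: the loss number alone is enough feedback to optimize an injection against a closed model, which is why simply hiding that number is the obvious but costly defense.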