Researchers have demonstrated a technique for jailbreaking multiple AI chatbots, including ChatGPT and Microsoft Copilot.
The technique, called 'Immersive World', involves constructing a detailed fictional scenario that coaxes the chatbot past its safety controls until it produces working infostealer malware.
The finding highlights a growing risk: attackers with no prior coding experience can now produce sophisticated malware simply by manipulating a chatbot. As generative AI lowers this technical barrier to entry, defenders should expect increasingly capable attacks crafted with far less effort and expertise.