Not even fairy tales are safe - researchers weaponise bedtime stories to jailbreak AI chatbots and create malware

Source: Tech Radar

  • Researchers have developed a technique to jailbreak multiple AI chatbots, including popular models such as ChatGPT and Microsoft Copilot.
  • The technique, called 'Immersive World', builds a fictional scenario that bypasses the chatbots' security controls, which the researchers used to develop working infostealer malware.
  • This highlights the growing risk that cybercriminals with no prior coding experience can produce sophisticated malware.
  • The rise of AI-powered cyber threats is a serious concern, as it lets criminals craft more sophisticated attacks with ease.
