- New research highlights that AI often provides incorrect URLs, exposing users to risks such as phishing and malware attacks.
- One in three login links provided by large language models (LLMs) such as GPT-4.1 was incorrect, with some pointing to unsafe domains.
- Attackers are optimizing websites for LLMs, and fake sites recommended by AI tools have already been observed, creating new phishing risks.
- Developers have unknowingly used malicious AI-generated URLs in code, underscoring the need for users to verify web addresses before clicking on links.