AI-assisted developer tools like GitLab's Duo chatbot can be tricked by malicious actors into performing hostile actions against users.Researchers demonstrated an attack that induced Duo to insert malicious code and leak private code and confidential data.The attack can be triggered by instructing the chatbot to interact with merge requests or content from outside sources.The vulnerability lies in prompt injections, which allow malicious actors to control AI assistants and exploit their eagerness to follow instructions.