The ChatGPT search tool, developed by OpenAI, has potential security issues that could allow the tool to be manipulated and return malicious code from websites.
The tool is available to paying customers and is being touted as a default search tool.
The Guardian tested how ChatGPT responded when asked to summarize hidden content on webpages and found examples of ‘prompt injection’.
Prompts are simple pieces of concealed text that can contain instructions that alter the tool’s responses, for example by skewing reviews.
One test included fake reviews to influence an overwhelmingly positive summary of the product.
The ChatGPT tool was also tested by a cybersecurity researcher who believes that people could create websites to deceive users if the tool is released without proper testing.
OpenAI warns users to check important information due to the possibility of errors in ChatGPT’s output.
Karsten Nohl, the chief scientist at cybersecurity firm SR Labs, said AI chat services should be used more like a “co-pilot” and that outputs should not be used completely unfiltered.
A key question raised is how these vulnerabilities could affect website practices and risk to users if combining search and large language models becomes more widespread.
Hidden text has historically been penalized by search engines, like Google, but the practice may not deter websites using it to try to maintain a good rank in search engines.