techminis

A naukri.com initiative

Image Credit: PC Gamer

Turns out asking AI chatbots for answers in a specific way can be like leaving them with the key to Trippy McHigh's magic mushroom farm

  • A recent study examines why AI chatbots produce factually incorrect, hallucinated answers.
  • It found that how a question is framed can greatly affect a chatbot's answer, especially for controversial claims.
  • AI models tend to give more inaccurate answers when asked for short, concise responses.
  • This behavior may stem from training processes that encourage models to prioritize brevity over accuracy.
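As a rough illustration of the framing effect described above, the same factual question can be posed with or without a brevity constraint and the two responses compared. This is a hypothetical sketch: the prompt wording and example claim below are assumptions for illustration, not the study's actual materials.

```python
# Hypothetical A/B prompt sketch for probing the brevity effect.
# The wording and the example claim are illustrative assumptions,
# not the prompts used in the study.

def build_prompt(claim: str, concise: bool) -> str:
    """Frame the same factual question with or without a brevity constraint."""
    base = f'Is the following claim accurate? "{claim}"'
    if concise:
        # Brevity-biased framing, which the study links to more errors.
        return base + " Answer in one short sentence."
    # Open-ended framing that leaves room for caveats and reasoning.
    return base + " Explain your reasoning step by step."

claim = "Bats are blind."  # a common misconception, used as the test claim
prompt_short = build_prompt(claim, concise=True)
prompt_long = build_prompt(claim, concise=False)

print(prompt_short)
print(prompt_long)
```

In practice, each prompt variant would be sent to the chatbot under test and the answers scored for factual accuracy; only the prompt construction is sketched here.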

Read Full Article

