Image Credit: Arxiv

Robust Hallucination Detection in LLMs via Adaptive Token Selection

  • Recent research on hallucination detection in large language models (LLMs) has shown that LLMs' internal representations contain truthfulness signals that can be used to train detectors.
  • However, these detectors rely on representations at predetermined token positions, so their performance fluctuates on free-form generations with varying lengths and sparsely distributed hallucinated entities.
  • To address this, a novel approach called HaMI is proposed, which enables robust hallucination detection by adaptively selecting and learning from the critical tokens that are most indicative of hallucinations (see the sketch after this list).
  • Experimental results on four hallucination benchmarks demonstrate that HaMI outperforms existing state-of-the-art approaches.
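
To make the adaptive-selection idea concrete, here is a minimal, hypothetical sketch (not the authors' code): a small probe scores each token's hidden state from a chosen LLM layer, and the sequence-level hallucination score is the mean of the top-k token scores, so training focuses on whichever tokens are most indicative of hallucination rather than a single predetermined position. All names, dimensions, and the top-k aggregation choice are assumptions for illustration.

```python
# Hypothetical sketch of a token-level hallucination probe with adaptive
# (top-k) token selection over cached LLM hidden states.
import torch
import torch.nn as nn


class TopKTokenProbe(nn.Module):
    def __init__(self, hidden_dim: int = 4096, k: int = 5):
        super().__init__()
        # Small scorer that rates each token's hidden state.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )
        self.k = k

    def forward(self, hidden_states: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from a chosen LLM layer
        # mask: (batch, seq_len), 1 for real tokens, 0 for padding
        token_scores = self.scorer(hidden_states).squeeze(-1)        # (batch, seq_len)
        token_scores = token_scores.masked_fill(mask == 0, float("-inf"))
        k = min(self.k, hidden_states.shape[1])
        top_scores, _ = token_scores.topk(k, dim=-1)                  # adaptive token selection
        top_scores = top_scores.masked_fill(top_scores == float("-inf"), 0.0)
        return top_scores.mean(dim=-1)                                # sequence-level logit


if __name__ == "__main__":
    probe = TopKTokenProbe(hidden_dim=64, k=3)
    states = torch.randn(2, 10, 64)             # stand-in for cached hidden states
    mask = torch.ones(2, 10)
    labels = torch.tensor([1.0, 0.0])           # 1 = hallucinated, 0 = faithful
    loss = nn.functional.binary_cross_entropy_with_logits(probe(states, mask), labels)
    loss.backward()
    print(float(loss))
```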
