Source: Arxiv

Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning

  • In-Context Learning (ICL) lets Large Language Models (LLMs) adapt to new tasks from only a few examples, but it suffers from systematic biases that hurt classification performance.
  • Existing calibration techniques only shift decision boundaries in the logit space without changing their orientation, which limits their effectiveness.
  • Supervised Calibration (SC) is proposed as a loss-minimization framework that optimizes the LLM's predictive probabilities through per-class affine transformations in the logit space, overcoming this limitation (see the sketch after this list).
  • SC integrates purpose-built regularization techniques for stability and control, achieving superior performance across shot settings on multiple datasets.
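
The bullets describe per-class affine calibration of logits fit by loss minimization. The sketch below is a hypothetical illustration of that idea, not the paper's implementation: the function name, the specific L2 regularizer pulling toward the identity map, and all hyperparameters are assumptions. It learns a per-class scale and bias on a handful of labeled examples by minimizing cross-entropy over the calibrated logits.

# Hypothetical sketch of per-class affine calibration of ICL logits.
# Assumes we already have raw class logits from the LLM for a few labeled examples.
import torch

def calibrate_logits(raw_logits, labels, num_classes, steps=200, lr=0.05, reg=0.1):
    """Fit a per-class scale w and bias b so that the calibrated logits
    raw_logits * w + b minimize cross-entropy on the labeled examples,
    with an L2 pull toward the identity map (w=1, b=0) for stability."""
    w = torch.ones(num_classes, requires_grad=True)   # per-class scale
    b = torch.zeros(num_classes, requires_grad=True)  # per-class shift
    opt = torch.optim.Adam([w, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        calibrated = raw_logits * w + b               # per-class affine map in logit space
        loss = torch.nn.functional.cross_entropy(calibrated, labels)
        loss = loss + reg * ((w - 1).pow(2).sum() + b.pow(2).sum())  # illustrative regularizer
        loss.backward()
        opt.step()
    return w.detach(), b.detach()

# Usage (shapes assumed): raw_logits is a float tensor of shape (n_examples, num_classes),
# labels is a long tensor of shape (n_examples,). At test time, apply
# test_logits * w + b before taking the argmax.

Because the scale w can change per class, this affine map can reorient decision boundaries rather than only shifting them, which is the limitation of prior calibration methods noted above.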
