In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks from only a few examples, but it suffers from systematic biases that degrade classification performance.
Existing calibration techniques shift decision boundaries in logit space but cannot change their orientation, which limits their effectiveness.
Supervised Calibration (SC) is proposed as a loss-minimization framework that optimizes the LLM's predictive probabilities through per-class affine transformations in logit space, overcoming this limitation.
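A per-class affine transformation of the logits can be sketched as below: each class logit is rescaled and shifted, with the scale and bias learned by minimizing cross-entropy on labeled examples. The function name, plain gradient-descent loop, and hyperparameters are illustrative assumptions, not the paper's actual implementation (which also includes regularization).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_affine_calibration(logits, labels, lr=0.1, steps=500):
    """Learn per-class scale a_c and bias b_c so that softmax(a * z + b)
    minimizes cross-entropy on the labeled calibration set.

    Unlike a shared bias shift, per-class scales a_c can also change the
    orientation of decision boundaries in logit space.
    """
    n, c = logits.shape
    a = np.ones(c)   # per-class scale
    b = np.zeros(c)  # per-class bias
    onehot = np.eye(c)[labels]
    for _ in range(steps):
        p = softmax(a * logits + b)
        g = (p - onehot) / n                  # dCE/d(transformed logits)
        a -= lr * (g * logits).sum(axis=0)    # chain rule through a * z + b
        b -= lr * g.sum(axis=0)
    return a, b
```

For example, on synthetic logits with a systematic additive bias toward one class, fitting the transform recovers a corrective bias and improves accuracy over the raw argmax prediction.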
SC integrates purpose-built regularization techniques for stability and control, achieving superior performance across different shot settings and multiple datasets.