Selective classification enhances the reliability of predictive models by allowing them to abstain on uncertain predictions.
The study characterizes optimal selection functions via the Neyman--Pearson lemma, which shows that the optimal rejection rule takes the form of a likelihood ratio test.
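A likelihood-ratio-test rejection rule of this kind can be sketched in a few lines. The example below is a minimal illustration, not the paper's method: it assumes the class-conditional densities of a scalar confidence score (under correct vs. incorrect predictions) are approximated by 1-D Gaussians, and accepts an input when the fitted likelihood ratio exceeds a threshold. All names and data are hypothetical.

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_lr_selector(scores, correct, tau=1.0):
    """Neyman--Pearson-style selector (illustrative): fit Gaussians to
    confidence scores under correct vs. incorrect predictions, then
    accept x when p_correct(x) / p_incorrect(x) >= tau."""
    pos = [s for s, c in zip(scores, correct) if c]
    neg = [s for s, c in zip(scores, correct) if not c]

    def mean_std(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        return mu, math.sqrt(var) + 1e-8  # avoid zero std

    mu1, s1 = mean_std(pos)  # correct-prediction density
    mu0, s0 = mean_std(neg)  # incorrect-prediction density

    def accept(x):
        # Likelihood ratio test: accept iff p_correct(x) >= tau * p_incorrect(x)
        return gaussian_pdf(x, mu1, s1) >= tau * gaussian_pdf(x, mu0, s0)

    return accept

# Toy data: correct predictions cluster at high confidence scores
random.seed(0)
scores = [random.gauss(0.9, 0.05) for _ in range(200)] + \
         [random.gauss(0.6, 0.15) for _ in range(200)]
correct = [True] * 200 + [False] * 200
accept = fit_lr_selector(scores, correct)
```

Varying `tau` traces out the coverage/risk trade-off: larger values reject more inputs but retain only the most reliable predictions.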
New approaches to selective classification are proposed based on the Neyman--Pearson lemma, unifying the behavior of existing post-hoc selection baselines.
The study evaluates the proposed methods under covariate shift across various vision and language tasks, showing that they consistently outperform existing baselines.