Conformal prediction is a framework for uncertainty quantification that constructs prediction sets with coverage guarantees; many of its methods additionally require a holdout set for parameter tuning.
Empirical findings suggest that the tuning bias, i.e., the coverage loss incurred by using the same dataset for both tuning and calibration, is negligible for simple parameter tuning in many conformal prediction methods.
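To make this setup concrete, the following minimal sketch (assuming a toy regression task, an absolute-residual score with a hypothetical scale parameter `gamma`, and NumPy/scikit-learn) tunes the parameter on the same calibration set that is then used to compute the conformal quantile; it is illustrative only, not the study's experimental protocol.

```python
# Illustrative sketch: split conformal prediction where a hyperparameter is
# tuned on the SAME calibration set later used for calibration -- the setup
# that induces tuning bias. All names here are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic data: y = x + heteroscedastic noise.
    x = rng.uniform(-3, 3, size=(n, 1))
    y = x[:, 0] + rng.normal(scale=0.5 + 0.5 * np.abs(x[:, 0]), size=n)
    return x, y

x_train, y_train = make_data(2000)
x_cal, y_cal = make_data(500)      # holdout (calibration) set
x_test, y_test = make_data(5000)

model = LinearRegression().fit(x_train, y_train)
alpha = 0.1  # target miscoverage level

def scores(x, y, gamma):
    # Nonconformity score with a tunable scale parameter gamma (hypothetical):
    # absolute residuals normalized by a crude width proxy 1 + gamma * |x|.
    return np.abs(y - model.predict(x)) / (1.0 + gamma * np.abs(x[:, 0]))

def calibrate(gamma):
    s = scores(x_cal, y_cal, gamma)
    n = len(s)
    # Finite-sample-corrected quantile used in split conformal prediction.
    return np.quantile(s, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def interval_width(gamma, q, x):
    return 2 * q * (1.0 + gamma * np.abs(x[:, 0]))

# Tune gamma by minimizing average interval width on the calibration set
# itself -- the "same data for tuning and calibration" scenario.
grid = np.linspace(0.0, 2.0, 21)
widths = [interval_width(g, calibrate(g), x_cal).mean() for g in grid]
gamma_hat = grid[int(np.argmin(widths))]
q_hat = calibrate(gamma_hat)

# Empirical coverage on fresh test data; with double use of the calibration
# set, coverage can drop slightly below 1 - alpha (the tuning bias).
lo = model.predict(x_test) - q_hat * (1.0 + gamma_hat * np.abs(x_test[:, 0]))
hi = model.predict(x_test) + q_hat * (1.0 + gamma_hat * np.abs(x_test[:, 0]))
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"gamma={gamma_hat:.2f}, test coverage={coverage:.3f} (target {1 - alpha:.2f})")
```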
A scaling law for the tuning bias is observed: the bias grows with the complexity of the tuned parameter space and shrinks as the calibration set size increases.
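Purely to make the qualitative statement concrete, the display below shows one illustrative form such a law can take for a finite candidate grid, obtained from a standard uniform-convergence (DKW plus union-bound) argument; the symbols $n$, $\Lambda$, and $\hat\lambda$ are introduced here for illustration, and the study's exact bound may differ.

```latex
% Illustrative only: one form consistent with the qualitative scaling law,
% where $n$ is the calibration set size, $\Lambda$ a finite grid of candidate
% parameters, and $\hat\lambda$ the parameter selected on the same calibration
% data used to build the prediction set $\widehat{C}_{\hat\lambda}$.
\[
  \underbrace{\bigl|\, \mathbb{P}\{\, Y \in \widehat{C}_{\hat\lambda}(X) \,\} - (1-\alpha) \,\bigr|}_{\text{tuning bias}}
  \;\lesssim\;
  \sqrt{\frac{\log |\Lambda|}{n}} .
\]
```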
The study establishes a theoretical framework for quantifying tuning bias, proves the scaling law, and discusses strategies to mitigate tuning bias informed by these findings.
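One generic mitigation consistent with this discussion, though not necessarily among the strategies the study proposes, is to split the holdout data so that tuning and calibration never touch the same points; the sketch below uses hypothetical names, a hypothetical selection criterion, and synthetic scores.

```python
# Hypothetical mitigation sketch: split the holdout data so that parameter
# tuning and conformal calibration use disjoint subsets, avoiding the double
# use of the same points. Names, criterion, and split ratio are illustrative.
import numpy as np

def split_tune_and_calibrate(scores_by_param, tune_frac=0.5, alpha=0.1, seed=0):
    """scores_by_param: dict mapping candidate parameter -> 1-D array of
    nonconformity scores computed on the full holdout set."""
    rng = np.random.default_rng(seed)
    n = len(next(iter(scores_by_param.values())))
    idx = rng.permutation(n)
    cut = int(n * tune_frac)
    tune_idx, cal_idx = idx[:cut], idx[cut:]

    def conformal_quantile(s):
        m = len(s)
        level = min(1.0, np.ceil((m + 1) * (1 - alpha)) / m)
        return np.quantile(s, level, method="higher")

    # Select the parameter on the tuning split only (smallest threshold used
    # here as a stand-in for any efficiency criterion, e.g. interval width).
    best = min(scores_by_param,
               key=lambda p: conformal_quantile(scores_by_param[p][tune_idx]))

    # Calibrate on the untouched split, so the usual split-conformal coverage
    # guarantee holds for the selected parameter without tuning bias.
    q_hat = conformal_quantile(scores_by_param[best][cal_idx])
    return best, q_hat

# Usage with synthetic scores for three candidate parameters.
rng = np.random.default_rng(1)
scores = {g: np.abs(rng.normal(scale=1.0 + g, size=1000)) for g in (0.0, 0.5, 1.0)}
best_param, q_hat = split_tune_and_calibrate(scores)
print(best_param, round(float(q_hat), 3))
```

The trade-off is the usual one for data splitting: the calibration split is smaller, so prediction sets become somewhat more conservative, in exchange for removing the tuning bias entirely.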