Reliable uncertainty estimation is crucial for using neural networks in real-world applications.
CLUE (Calibration via Learning Uncertainty-Error Alignment) is a newly introduced approach that aligns predicted uncertainty with observed error during training.
CLUE introduces a loss function that jointly optimizes predictive performance and calibration; because the objective is fully differentiable, the method is domain-agnostic and compatible with standard training pipelines.
Extensive experiments demonstrate that CLUE achieves superior calibration quality while remaining competitive in predictive performance across a range of tasks and settings.