Recent supervised learning methods aimed at capturing aleatoric and epistemic uncertainty may overlook model bias.
A finer-grained categorization of the sources of epistemic uncertainty reveals that current methods capture only a subset of them.
Simulation-based assessments show that existing methods systematically underestimate the epistemic uncertainty arising from model bias, leading to inaccurate uncertainty estimates.
Properly representing all sources of epistemic uncertainty is therefore critical for obtaining accurate estimates of aleatoric uncertainty in machine learning models.
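As a toy illustration of the underestimation effect (not the experimental setup of this work), the following Python sketch fits a bootstrap ensemble of deliberately misspecified linear models to data generated from a quadratic function. The ensemble spread, a common disagreement-based proxy for epistemic uncertainty, stays small because all members agree, while the systematic error caused by model bias remains large; the specific model class, ensemble size, and data-generating function are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the true regression function is quadratic, with Gaussian noise.
def true_fn(x):
    return x ** 2

n = 200
x_train = rng.uniform(-1.0, 1.0, size=n)
y_train = true_fn(x_train) + rng.normal(0.0, 0.1, size=n)

# Misspecified (biased) model class: linear in x.
# A bootstrap ensemble reports epistemic uncertainty as member disagreement.
def fit_linear(x, y):
    X = np.stack([np.ones_like(x), x], axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

members = []
for _ in range(50):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    members.append(fit_linear(x_train[idx], y_train[idx]))
members = np.array(members)                   # shape (50, 2)

x_test = np.linspace(-1.0, 1.0, 9)
X_test = np.stack([np.ones_like(x_test), x_test], axis=1)
preds = X_test @ members.T                    # shape (9, 50)

ensemble_std = preds.std(axis=1)              # disagreement-based "epistemic" estimate
bias = np.abs(preds.mean(axis=1) - true_fn(x_test))  # error due to misspecification

for xi, s, b in zip(x_test, ensemble_std, bias):
    print(f"x={xi:+.2f}  ensemble std={s:.3f}  |bias|={b:.3f}")
# The members agree closely (small std), yet the bias is large near x = ±1
# and x = 0: the disagreement-based estimate misses the epistemic
# uncertainty caused by model bias.
```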