This thesis focuses on the reliability and safety of machine learning models in open-world deployment by addressing distributional uncertainty and unknown classes.
Novel frameworks are introduced that jointly optimize for in-distribution accuracy and reliability on unseen data, centered on an unknown-aware learning framework.
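At a high level, unknown-aware learning of this kind can be framed as jointly minimizing a standard classification loss on in-distribution data and an uncertainty regularizer on (synthesized or collected) unknowns. The formulation below is a generic sketch; the weight $\beta$ and the regularizer $\mathcal{L}_{\text{unc}}$ are illustrative placeholders rather than the thesis's exact notation:

\[
\min_{\theta}\;\; \mathbb{E}_{(x,\,y)\sim \mathcal{D}_{\text{in}}}\big[\mathcal{L}_{\text{CE}}\big(f_{\theta}(x),\, y\big)\big] \;+\; \beta\,\mathbb{E}_{\tilde{x}\sim \mathcal{D}_{\text{unknown}}}\big[\mathcal{L}_{\text{unc}}\big(f_{\theta}(\tilde{x})\big)\big]
\]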
Outlier synthesis methods such as VOS, NPOS, and DREAM-OOD are proposed to generate informative unknowns during training, enhancing out-of-distribution (OOD) detection, with unlabeled data further leveraged to strengthen detection.
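To illustrate the feature-space variant of this idea (in the spirit of VOS), the sketch below fits class-conditional Gaussians to penultimate-layer features, samples candidate points, and keeps the lowest-likelihood candidates as virtual outliers, which can then be penalized with an energy-style regularizer. All names and hyperparameters here (synthesize_virtual_outliers, uncertainty_loss, samples_per_class, keep) are illustrative assumptions, not the released implementations.

```python
import torch
import torch.nn.functional as F

def synthesize_virtual_outliers(features, labels, num_classes,
                                samples_per_class=1000, keep=10):
    """Fit class-conditional Gaussians (shared covariance) in feature space
    and keep the lowest-likelihood samples as virtual outliers."""
    means, centered = [], []
    for c in range(num_classes):
        feats_c = features[labels == c]
        mu_c = feats_c.mean(dim=0)
        means.append(mu_c)
        centered.append(feats_c - mu_c)
    centered = torch.cat(centered, dim=0)
    # Tied covariance across classes, with a small ridge term for stability.
    cov = centered.T @ centered / centered.shape[0]
    cov = cov + 1e-4 * torch.eye(cov.shape[0])
    outliers = []
    for mu_c in means:
        dist = torch.distributions.MultivariateNormal(mu_c, covariance_matrix=cov)
        candidates = dist.sample((samples_per_class,))
        log_prob = dist.log_prob(candidates)
        # Low-likelihood samples lie near the boundary of the class cluster,
        # which is what makes them informative as unknowns.
        idx = torch.topk(-log_prob, keep).indices
        outliers.append(candidates[idx])
    return torch.cat(outliers, dim=0)

def uncertainty_loss(logits_id, logits_outlier):
    """A simplified energy-based regularizer: encourage high free energy
    (low confidence) on virtual outliers and low free energy on ID data."""
    score_id = torch.logsumexp(logits_id, dim=1)
    score_out = torch.logsumexp(logits_outlier, dim=1)
    scores = torch.cat([score_id, score_out])
    targets = torch.cat([torch.ones_like(score_id), torch.zeros_like(score_out)])
    return F.binary_cross_entropy_with_logits(scores, targets)
```

The low-likelihood filter is the key design choice in this sketch: the retained samples sit near, but outside, the in-distribution feature clusters, so the regularizer shapes the decision boundary where unknowns are most likely to appear.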
The thesis further extends reliable learning to foundation models through methods such as HaloScope, MLLMGuard, and data-cleaning techniques, with the goal of improving the safety of large-scale models in deployment.