Researchers propose a novel technique to enhance out-of-distribution (OOD) detection in pre-trained deep neural networks without altering their original parameters.
The approach defines probabilistic trust intervals for each network weight using in-distribution data and, at inference time, samples additional weight values from these intervals. By quantifying the disagreement among the outputs produced by these sampled weights, the method achieves improved OOD detection performance compared to several baseline methods.
The proposed approach also proves robust at identifying corrupted and adversarial inputs, and it requires no OOD samples during training.
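To make the inference-time step concrete, here is a minimal PyTorch sketch under stated assumptions: each weight's trust interval is taken as [w - delta, w + delta], and disagreement is measured as the variance of softmax outputs across sampled weight sets. The interval half-widths `deltas`, the sample count `n_samples`, and the toy classifier are illustrative placeholders, not the paper's exact formulation (the paper fits interval widths from in-distribution data).

```python
# Minimal sketch of OOD scoring via sampled weights; the interval
# parameterization and disagreement measure here are assumptions.
import copy
import torch
import torch.nn as nn

def sample_weights(model: nn.Module, deltas: dict) -> nn.Module:
    """Return a copy of `model` with each weight drawn uniformly
    from its trust interval [w - delta, w + delta]."""
    sampled = copy.deepcopy(model)
    with torch.no_grad():
        for name, param in sampled.named_parameters():
            delta = deltas[name]  # per-parameter interval half-width
            noise = (2 * torch.rand_like(param) - 1) * delta
            param.add_(noise)
    return sampled

def disagreement_score(model: nn.Module, deltas: dict,
                       x: torch.Tensor, n_samples: int = 8) -> torch.Tensor:
    """Quantify disagreement as the variance of softmax outputs across
    weight sets sampled from the trust intervals; higher variance
    suggests the input is out-of-distribution."""
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            sampled = sample_weights(model, deltas)
            probs.append(torch.softmax(sampled(x), dim=-1))
    stacked = torch.stack(probs)            # (n_samples, batch, classes)
    return stacked.var(dim=0).sum(dim=-1)   # one score per input

# Toy usage: a small classifier standing in for a pre-trained network,
# with half-widths set to a fixed fraction of each weight's magnitude
# (a placeholder for the paper's data-driven interval fitting).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
deltas = {name: 0.05 * p.detach().abs() + 1e-4
          for name, p in model.named_parameters()}
scores = disagreement_score(model, deltas, torch.randn(4, 20))
print(scores)
```

In practice, one would calibrate a detection threshold on disagreement scores from held-out in-distribution data and flag inputs whose score exceeds it.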