Uncertainty quantification is crucial for safe and reliable decision-making in deep learning. This work proposes a new method for uncertainty quantification in large networks based on variation due to regularization. The method adjusts the training loss during fine-tuning, and the resulting uncertainty estimate reflects confidence in the output based on all layers of the network. Experiments show that the method provides rigorous uncertainty estimates and improves the quality of uncertainty quantification.
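To make the idea concrete, below is a minimal sketch (not the authors' implementation) of uncertainty from variation due to regularization: the network is briefly fine-tuned several times with a regularization term of varying strength added to the training loss, and the spread of the resulting predictions serves as the uncertainty estimate. The penalty spans all layers' parameters, matching the abstract's claim that confidence reflects the whole network. The function name `variation_uncertainty`, the choice of an L2 penalty, and parameters such as `reg_strengths` and `n_finetune_steps` are illustrative assumptions, not details from the paper.

```python
# Sketch: uncertainty as the variation of predictions under different
# regularization strengths applied during a short fine-tuning phase.
# Assumes PyTorch; the L2 penalty and all hyperparameters are illustrative.
import copy
import torch

def variation_uncertainty(model, loss_fn, train_loader, x_test,
                          reg_strengths=(0.0, 1e-4, 1e-3),
                          n_finetune_steps=10, lr=1e-3):
    """Return per-input uncertainty as the variance of predictions across
    copies of `model`, each fine-tuned with a different L2 penalty weight."""
    predictions = []
    for lam in reg_strengths:
        m = copy.deepcopy(model)          # fresh copy per regularization strength
        opt = torch.optim.SGD(m.parameters(), lr=lr)
        data_iter = iter(train_loader)
        for _ in range(n_finetune_steps):
            try:
                xb, yb = next(data_iter)
            except StopIteration:         # restart the loader if it runs out
                data_iter = iter(train_loader)
                xb, yb = next(data_iter)
            opt.zero_grad()
            # Adjusted training loss: task loss plus lam * L2 over ALL layers,
            # so every layer contributes to the measured variation.
            reg = sum(p.pow(2).sum() for p in m.parameters())
            (loss_fn(m(xb), yb) + lam * reg).backward()
            opt.step()
        with torch.no_grad():
            predictions.append(m(x_test))
    preds = torch.stack(predictions)      # (n_strengths, batch, n_outputs)
    # High variation across regularization strengths -> low confidence.
    return preds.var(dim=0).sum(dim=-1)   # one uncertainty score per input
```

In this sketch, outputs that barely move when the regularization strength changes are treated as confident, while outputs that shift substantially are flagged as uncertain; how the variation is computed and aggregated in the actual method may differ.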