Deep neural networks deployed in safety-critical applications are often poorly calibrated and overconfident. Sharpness-aware minimization (SAM) can improve calibration by implicitly maximizing the entropy of the predictive distribution. A variant of SAM, termed CSAM, is proposed to further enhance model calibration. Experiments on datasets including ImageNet-1K demonstrate SAM's effectiveness in reducing calibration error, with CSAM outperforming both SAM and other calibration methods.
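For context, the sketch below illustrates a single generic SAM training step (perturb the weights in the worst-case gradient direction within an L2 ball of radius rho, then descend using gradients taken at the perturbed point). This is only the standard SAM update, not the proposed CSAM variant; the function name `sam_step`, the `rho` default, and the PyTorch setup are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of one SAM step for a PyTorch classifier.
# NOTE: illustrative only -- implements plain SAM, not the paper's CSAM variant.
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # 1) Gradients at the current weights w.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) Ascent step: perturb w by eps = rho * g / ||g||, the (approximate)
    #    worst-case direction within an L2 ball of radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    scale = rho / (grad_norm + 1e-12)
    eps_list = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps_list.append(None)
                continue
            eps = p.grad * scale
            p.add_(eps)            # w <- w + eps (perturbed weights)
            eps_list.append(eps)
    model.zero_grad()

    # 3) Gradients at the perturbed weights, then restore w and take the
    #    actual descent step with these sharpness-aware gradients.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, eps in zip(model.parameters(), eps_list):
            if eps is not None:
                p.sub_(eps)        # w <- w - eps (undo perturbation)
    base_optimizer.step()          # uses gradients from the perturbed point
    base_optimizer.zero_grad()
    return loss.item()
```

Here `base_optimizer` can be any standard optimizer (e.g. `torch.optim.SGD`), and `rho` controls the size of the perturbation neighborhood over which sharpness is penalized.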