Generative classifiers are typically learned with closed-form procedures that maximize data-fitting scores, which are not directly linked to supervised classification performance metrics.
To address this limitation, a learning procedure called risk-based calibration (RC) is proposed that adjusts the classifier's joint probability distribution according to the 0-1 loss on training samples.
RC reinforces the data statistics associated with the true classes and weakens those of the incorrectly predicted classes, progressively reducing the classifier's training error.
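The following is a minimal sketch of this calibration idea, assuming a categorical naive Bayes classifier learned from sufficient statistics (class and feature-value counts). The hard update shown here, which reinforces the counts of the true class and discounts those of the wrongly predicted class for misclassified examples, is an illustrative simplification, not the paper's exact procedure; the class and function names (CategoricalNB, risk_calibrate, lr) are hypothetical.

```python
import numpy as np


class CategoricalNB:
    """Naive Bayes over categorical features, parameterized by counts."""

    def __init__(self, n_classes, n_values, alpha=1.0):
        self.alpha = alpha                         # Laplace smoothing
        self.n_values = n_values
        self.class_counts = np.zeros(n_classes)
        self.feat_counts = None                    # (n_classes, n_features, n_values)

    def fit_counts(self, X, y):
        n_feats = X.shape[1]
        self.feat_counts = np.zeros((len(self.class_counts), n_feats, self.n_values))
        for xi, yi in zip(X, y):
            self.update(xi, yi, +1.0)
        return self

    def update(self, x, c, w):
        # Add (w > 0) or remove (w < 0) one example's statistics for class c,
        # clamping at zero so the smoothed probabilities stay valid.
        self.class_counts[c] = max(self.class_counts[c] + w, 0.0)
        for j, v in enumerate(x):
            self.feat_counts[c, j, v] = max(self.feat_counts[c, j, v] + w, 0.0)

    def predict(self, x):
        logp = np.log(self.class_counts + self.alpha)
        for j, v in enumerate(x):
            num = self.feat_counts[:, j, v] + self.alpha
            den = self.feat_counts[:, j, :].sum(axis=1) + self.alpha * self.n_values
            logp += np.log(num) - np.log(den)
        return int(np.argmax(logp))


def risk_calibrate(clf, X, y, n_iters=20, lr=1.0):
    """Iteratively adjust the classifier's statistics using the 0-1 loss:
    each misclassified example reinforces its true class and weakens the
    incorrectly predicted one."""
    for _ in range(n_iters):
        errors = 0
        for xi, yi in zip(X, y):
            pred = clf.predict(xi)
            if pred != yi:                  # 0-1 loss is 1 for this example
                clf.update(xi, yi, +lr)     # reinforce true-class statistics
                clf.update(xi, pred, -lr)   # weaken wrong-class statistics
                errors += 1
        if errors == 0:                     # training error reached zero
            break
    return clf
```

In this sketch the step size lr and the hard misclassification-triggered update are placeholders; a soft update weighted by the classifier's posterior probabilities is a natural alternative within the same scheme.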
Experimental results on 20 datasets show that RC outperforms closed-form learning procedures in terms of training error and generalization error, bridging the gap between traditional generative approaches and performance-guided learning procedures.