Researchers have introduced a new method, RSS-MLP, that aims to enhance the generalization ability of Multilayer Perceptron (MLP) neural networks by reducing the variance of the empirical loss.
The method uses Ranked Set Sampling (RSS) to impose an ordered structure on the training data, which reduces variance relative to the Simple Random Sampling (SRS) traditionally used in bagging.
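To make the sampling scheme concrete, here is a minimal sketch of balanced Ranked Set Sampling, not the authors' implementation: for each rank r, a set of units is drawn at random, ranked, and only the unit holding rank r is kept. The function name and parameters are illustrative.

```python
import random

def ranked_set_sample(population, set_size, cycles, key=None):
    """Balanced RSS sketch: for each rank r in 0..set_size-1, draw
    `set_size` units at random, rank them (by `key`), and retain only
    the r-th ranked unit.  Repeating for `cycles` cycles yields
    set_size * cycles observations with an imposed ordering structure,
    in contrast to SRS, which draws all units directly at random."""
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            units = sorted(random.sample(population, set_size), key=key)
            sample.append(units[r])  # keep only the unit holding rank r
    return sample

# Draw 20 observations (set_size=4, cycles=5) from a toy population.
rss = ranked_set_sample(list(range(1000)), set_size=4, cycles=5)
print(len(rss))
```

Note that each retained observation costs `set_size` ranked draws, so RSS trades extra (cheap) ranking effort for lower-variance samples.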
Theoretical results indicate that the variances of the empirical exponential loss and logistic loss estimated by RSS-MLP are smaller than those estimated under SRS.
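The variance-reduction claim can be illustrated with a small Monte Carlo simulation, assuming estimation of a population mean rather than the paper's specific loss functionals; the setup (standard normal population, sample size 9) is an assumption for illustration only.

```python
import random
import statistics

rng = random.Random(0)
population = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

def srs_estimate(pop, n):
    # Simple Random Sampling: average of n units drawn uniformly.
    return statistics.fmean(rng.sample(pop, n))

def rss_estimate(pop, set_size, cycles):
    # Balanced Ranked Set Sampling: for each rank r, draw set_size
    # units, sort them, keep the r-th; average all kept units.
    kept = []
    for _ in range(cycles):
        for r in range(set_size):
            kept.append(sorted(rng.sample(pop, set_size))[r])
    return statistics.fmean(kept)

# Compare estimator variances at equal total sample size n = 9.
reps = 1000
srs_var = statistics.variance(srs_estimate(population, 9) for _ in range(reps))
rss_var = statistics.variance(rss_estimate(population, 3, 3) for _ in range(reps))
print(srs_var, rss_var)
```

For this kind of setup, the RSS estimator's variance comes out noticeably below the SRS estimator's, mirroring the direction of the paper's theoretical result.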
Comparison experiments on twelve benchmark data sets show that RSS-MLP improves performance under two fusion methods for the two convex loss functions.