Learning paradigms such as active learning, semi-supervised learning, bandits, and boosting select parts of their own training data based on previously learned parameters.
Reciprocal learning unifies these paradigms; this article studies the generalization ability of methods that learn from such self-selected samples.
The article presents universal generalization bounds for reciprocal learning, derived via covering numbers and Wasserstein ambiguity sets, without making assumptions on the data distribution.
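For orientation, a standard covering-number bound in the classical i.i.d. setting (a schematic only, not the paper's statement, which additionally handles self-selected data through Wasserstein ambiguity sets) reads: with probability at least $1-\delta$, for every hypothesis $h \in \mathcal{H}$ and any $\varepsilon > 0$,
$$
R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\varepsilon \;+\; \sqrt{\frac{\log \mathcal{N}(\mathcal{H},\varepsilon) + \log(1/\delta)}{2n}},
$$
where $R$ is the population risk, $\widehat{R}_n$ the empirical risk over $n$ samples with a loss bounded in $[0,1]$, and $\mathcal{N}(\mathcal{H},\varepsilon)$ the $\varepsilon$-covering number of $\mathcal{H}$ under a metric that dominates loss deviations.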
Bounds are provided for both convergent and finite-iteration solutions, yielding anytime-valid stopping rules with which practitioners can guarantee out-of-sample performance, illustrated in the semi-supervised learning case.
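The semi-supervised case can be pictured as a self-training loop that stops once further self-selection would risk out-of-sample performance. The sketch below is a minimal illustration under stated assumptions: the function names, the confidence threshold, and the `gap_estimate` proxy are hypothetical, not the paper's algorithm or its actual bound, which is derived from the covering-number and Wasserstein arguments above.

```python
# Minimal self-training sketch for semi-supervised reciprocal learning.
# The stopping criterion below is an illustrative stand-in for a
# bound-based, anytime-valid stopping rule; it is NOT the paper's rule.
import numpy as np
from sklearn.linear_model import LogisticRegression


def self_train(X_lab, y_lab, X_unlab, max_iter=10,
               conf_threshold=0.9, gen_gap_budget=0.05):
    """Iteratively pseudo-label confident unlabelled points and retrain,
    stopping early once a (hypothetical) generalization-gap proxy
    exceeds the allowed budget."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    model = LogisticRegression(max_iter=1000)

    for _ in range(max_iter):
        model.fit(X_train, y_train)
        if len(pool) == 0:
            break

        # Self-selection step: pick unlabelled points the current
        # model is confident about (data depends on learned parameters).
        proba = model.predict_proba(pool)
        conf = proba.max(axis=1)
        selected = conf >= conf_threshold
        if not selected.any():
            break

        # Hypothetical gap proxy: grows with the share of self-selected
        # (non-i.i.d.) samples, shrinks with total sample size.
        n_new = len(y_train) + selected.sum()
        pseudo_frac = (len(y_train) - len(y_lab) + selected.sum()) / n_new
        gap_estimate = pseudo_frac / np.sqrt(n_new)
        if gap_estimate > gen_gap_budget:
            break  # stopping rule: adding more self-selected data is too risky

        # Augment the training set with pseudo-labelled points and retrain.
        X_train = np.vstack([X_train, pool[selected]])
        y_train = np.concatenate(
            [y_train, model.classes_[proba[selected].argmax(axis=1)]]
        )
        pool = pool[~selected]

    return model
```

Here `gen_gap_budget` plays the role of the tolerated out-of-sample degradation; in the article this tolerance would instead be certified by the distribution-free bounds rather than by the ad hoc proxy used above.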