Pretext Invariant Representation Learning (PIRL) followed by Supervised Fine-Tuning (SFT) has become a standard paradigm for learning with limited labels.
The Positive Unlabeled (PU) setting involves a small set of labeled positives and a large unlabeled pool containing both positives and negatives.
The Positive Unlabeled Contrastive Learning (puCL) objective integrates weak supervision from the labeled positives into the contrastive loss, without requiring access to the class prior.
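A minimal PyTorch sketch of how a puCL-style objective could look, assuming a SimCLR-style batch of two augmented views per sample with L2-normalized embeddings; the function name `pucl_loss`, the mask conventions, and the SupCon-style averaging are illustrative assumptions rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

def pucl_loss(z, is_labeled_pos, temperature=0.5):
    """z: (2N, d) embeddings, where views i and i + N are augmentations of the
    same sample; is_labeled_pos: (2N,) bool mask marking labeled positives."""
    z = F.normalize(z, dim=1)
    n2 = z.shape[0]
    sim = z @ z.t() / temperature
    self_mask = torch.eye(n2, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))       # never contrast with self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    n = n2 // 2
    idx = torch.arange(n, device=z.device)
    aug_mask = torch.zeros(n2, n2, dtype=torch.bool, device=z.device)
    aug_mask[idx, idx + n] = True                          # each view's augmented twin
    aug_mask[idx + n, idx] = True

    # Labeled positives additionally attract every other labeled positive,
    # injecting the weak supervision without needing the class prior.
    sup_mask = is_labeled_pos.unsqueeze(0) & is_labeled_pos.unsqueeze(1) & ~self_mask
    pos_mask = aug_mask | sup_mask

    # SupCon-style average log-probability over each anchor's positive set.
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_mask.sum(1)
    return per_anchor.mean()
```

Here `is_labeled_pos` marks both views of every labeled sample; unlabeled anchors fall back to the standard self-supervised pull toward their own augmented view.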
When the class prior is known, Positive Unlabeled InfoNCE (puNCE) re-weights each unlabeled sample as a soft mixture of positive and negative, which further improves representation learning.
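Under the same assumptions as the previous sketch, a hedged sketch of the puNCE-style re-weighting: each unlabeled anchor contributes with weight equal to the class prior when scored as a positive and with the complementary weight when scored as a self-supervised negative. The names `punce_loss` and `class_prior` are illustrative, and the exact weighting in the original formulation may differ.

```python
import torch
import torch.nn.functional as F

def punce_loss(z, is_labeled_pos, class_prior, temperature=0.5):
    """Same batch layout as the puCL sketch; class_prior is the assumed
    fraction of positives in the unlabeled pool."""
    z = F.normalize(z, dim=1)
    n2 = z.shape[0]
    sim = z @ z.t() / temperature
    self_mask = torch.eye(n2, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    n = n2 // 2
    idx = torch.arange(n, device=z.device)
    aug_mask = torch.zeros(n2, n2, dtype=torch.bool, device=z.device)
    aug_mask[idx, idx + n] = True
    aug_mask[idx + n, idx] = True

    # Anchor treated as positive: attract it to every labeled positive
    # and to its augmented twin.
    as_pos_mask = (is_labeled_pos.unsqueeze(0) & ~self_mask) | aug_mask
    loss_as_pos = -(log_prob.masked_fill(~as_pos_mask, 0.0)).sum(1) / as_pos_mask.sum(1)
    # Anchor treated as negative: only its augmented twin is a positive
    # (plain self-supervised InfoNCE term).
    loss_as_neg = -(log_prob.masked_fill(~aug_mask, 0.0)).sum(1) / aug_mask.sum(1)

    # Labeled anchors take the supervised term with weight 1; unlabeled anchors
    # are soft mixtures weighted by the class prior and its complement.
    w = torch.where(is_labeled_pos,
                    torch.ones(n2, device=z.device),
                    torch.full((n2,), class_prior, device=z.device))
    return (w * loss_as_pos + (1.0 - w) * loss_as_neg).mean()
```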