Source: Arxiv
Understanding Contrastive Representation Learning from Positive Unlabeled (PU) Data

  • Pretext Invariant Representation Learning (PIRL) followed by Supervised Fine-Tuning (SFT) has become a standard paradigm for learning with limited labels.
  • The Positive Unlabeled (PU) setting involves a small set of labeled positives and a large unlabeled pool containing both positives and negatives (formalized below).
  • The Positive Unlabeled Contrastive Learning (puCL) objective injects weak supervision from the labeled positives into the contrastive loss without requiring access to the class prior; see the first sketch after this list.
  • When the class prior is known, Positive Unlabeled InfoNCE (puNCE) re-weights each unlabeled sample as a soft mixture of positive and negative for better representation learning; see the second sketch below.
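
For context, these bullets rest on the standard PU mixture model (usual notation from the PU literature, not spelled out in this summary): with class prior π = P(y = +1), the unlabeled marginal decomposes as

    p(x) = π · p(x | y = +1) + (1 − π) · p(x | y = −1)

Labeled positives are drawn from p(x | y = +1), while the unlabeled pool is drawn from the marginal p(x). puCL never touches π; puNCE assumes it is known.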
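Below is a minimal PyTorch sketch of a puCL-style objective, reconstructed from the description above rather than from the paper's code: labeled positives are pulled together SupCon-style, while unlabeled samples fall back to purely self-supervised (augmentation-only) positives. The function name `pucl_loss`, the two-views-per-sample batch layout, and the masking details are all assumptions.

```python
import torch
import torch.nn.functional as F

def pucl_loss(z, is_labeled_pos, temperature=0.1):
    # Assumed layout: z is (2N, d) with two augmented views per sample,
    # arranged so z[i] and z[i + N] are views of sample i;
    # is_labeled_pos is an (N,) bool mask over the underlying samples.
    z = F.normalize(z, dim=1)
    n = is_labeled_pos.shape[0]
    labels = torch.cat([is_labeled_pos, is_labeled_pos])        # (2N,)

    sim = z @ z.t() / temperature                               # (2N, 2N)
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))                   # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # row log-softmax

    # Positive set: the other augmented view of the same sample, plus,
    # for labeled-positive anchors, every other labeled positive in the batch.
    idx = torch.arange(2 * n, device=z.device)
    pos_mask = torch.zeros_like(eye)
    pos_mask[idx, (idx + n) % (2 * n)] = True
    pos_mask |= (labels.unsqueeze(0) & labels.unsqueeze(1)) & ~eye

    # SupCon-style average log-likelihood over each anchor's positive set.
    mean_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1)
    return -mean_log_prob.mean()
```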
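And a matching sketch of a puNCE-style objective under the same assumed batch layout: each unlabeled anchor is treated as a soft mixture, positive with weight π and negative (i.e., self-supervised only) with weight 1 − π, while labeled positives keep full weight on the supervised term. Again a reconstruction from the bullet, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def punce_loss(z, is_labeled_pos, class_prior, temperature=0.1):
    # class_prior is pi = P(y = +1), assumed known (see the bullet above).
    z = F.normalize(z, dim=1)
    n = is_labeled_pos.shape[0]
    labels = torch.cat([is_labeled_pos, is_labeled_pos])

    sim = z @ z.t() / temperature
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    idx = torch.arange(2 * n, device=z.device)
    view_mask = torch.zeros_like(eye)
    view_mask[idx, (idx + n) % (2 * n)] = True   # own other view only

    # Treated as positive: all labeled positives plus the own other view.
    as_pos = (labels.unsqueeze(0).repeat(2 * n, 1) | view_mask) & ~eye

    def nll(mask):  # negative mean log-likelihood over a positive set
        return -(log_prob.masked_fill(~mask, 0.0).sum(1) / mask.sum(1))

    # Labeled anchors: weight 1 on the supervised term. Unlabeled anchors:
    # weight pi on "as positive", weight (1 - pi) on "as negative" (self only).
    w = labels.float() + (~labels).float() * class_prior
    loss = w * nll(as_pos) + (1.0 - w) * nll(view_mask)
    return loss.mean()
```

Under this sketch, setting class_prior = 0 makes every unlabeled anchor purely self-supervised and recovers the puCL behavior above, which is one way to read puNCE as the prior-aware refinement of puCL.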
