Contrastive learning has been effective in shaping representation spaces by pulling similar samples together and pushing dissimilar ones apart.
Variational Supervised Contrastive Learning (VarCon) addresses the limitations of conventional supervised contrastive objectives by reformulating supervised contrastive learning as variational inference over latent class variables.
VarCon maximizes a posterior-weighted evidence lower bound (ELBO), which enables efficient class-aware matching and fine-grained control over intra-class dispersion in the embedding space.
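To make the shape of such an objective concrete, the following is a minimal PyTorch sketch of one plausible posterior-weighted ELBO over latent class variables; it is not the paper's actual formulation. The prototype-based posterior q(y|z), the uniform class prior, the temperature `tau`, and all names (`posterior_weighted_elbo_loss`, `class_protos`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def posterior_weighted_elbo_loss(z, labels, class_protos, tau=0.1):
    """Sketch of a posterior-weighted ELBO over latent class variables.

    z            -- (N, D) L2-normalized embeddings
    labels       -- (N,)   ground-truth class indices
    class_protos -- (C, D) L2-normalized class prototypes (one per latent class)
    tau          -- temperature (assumed hyperparameter)
    """
    # Approximate posterior q(y | z) from embedding-prototype similarities.
    logits = z @ class_protos.t() / tau              # (N, C)
    log_q = F.log_softmax(logits, dim=1)
    q = log_q.exp()

    # Likelihood term for the true class, weighted by the posterior mass the
    # model already places on it, so confidently matched samples dominate
    # (a simple stand-in for class-aware matching).
    idx = labels.unsqueeze(1)
    weight = q.gather(1, idx).squeeze(1).detach()    # posterior weight, no grad
    nll = -log_q.gather(1, idx).squeeze(1)           # cross-entropy term
    recon = (weight * nll).mean()

    # KL(q(y|z) || p(y)) against a uniform prior; its strength regulates how
    # sharply samples commit to a class, i.e. intra-class dispersion.
    num_classes = class_protos.size(0)
    log_prior = -torch.log(torch.tensor(float(num_classes), device=z.device))
    kl = (q * (log_q - log_prior)).sum(dim=1).mean()

    return recon + kl

# Illustrative usage on random inputs.
z = F.normalize(torch.randn(32, 128, requires_grad=True), dim=1)
protos = F.normalize(torch.randn(10, 128), dim=1)
loss = posterior_weighted_elbo_loss(z, torch.randint(0, 10, (32,)), protos)
loss.backward()
```

Detaching the posterior weight treats it as a fixed per-sample coefficient rather than a quantity to optimize through; whether the actual method does this is an assumption of this sketch.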
Experiments show that VarCon achieves state-of-the-art performance among contrastive learning methods, produces clearer decision boundaries and more coherent semantic organization in the embedding space, and delivers strong few-shot performance and robustness across diverse augmentation strategies.