A new method, Variational Contrastive Learning (VCL), has been proposed to address the lack of uncertainty quantification in contrastive learning methods such as SimCLR and SupCon.
VCL is a decoder-free framework that utilizes the evidence lower bound (ELBO), treating the InfoNCE loss as a reconstruction term and introducing a KL divergence regularizer with a uniform prior.
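Schematically, with $p(z)$ denoting the uniform prior and $\beta$ a weighting coefficient on the regularizer (the notation here is a sketch; the exact weighting is not specified in this summary), the minimized objective takes the form

$$\mathcal{L}_{\mathrm{VCL}} = \mathcal{L}_{\mathrm{InfoNCE}} + \beta\,\mathrm{KL}\big(q_\theta(z \mid x)\,\|\,p(z)\big),$$

where the InfoNCE term plays the role of the reconstruction term of the ELBO and the KL term regularizes the posterior toward the uniform prior.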
The approximate posterior $q_\theta(z|x)$ is modeled as a projected normal distribution, allowing for sampling of probabilistic embeddings.
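A projected normal distribution places a Gaussian in the ambient embedding space and projects its samples onto the unit hypersphere. The following minimal PyTorch sketch (function and variable names such as `sample_projected_normal`, `mu`, and `log_var` are illustrative, not taken from the paper) shows how such probabilistic embeddings could be drawn with the reparameterization trick:

```python
import torch
import torch.nn.functional as F

def sample_projected_normal(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Draw a probabilistic embedding from a projected normal distribution.

    A sample is taken from N(mu, diag(exp(log_var))) via the reparameterization
    trick and then projected onto the unit hypersphere, so the embedding lives
    on the same manifold as standard contrastive embeddings.
    """
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)        # reparameterization: z = mu + std * eps
    z = mu + std * eps
    return F.normalize(z, dim=-1)      # projection onto the unit sphere

# Example: per-sample means and log-variances from (hypothetical) projection heads
mu = torch.randn(32, 128)
log_var = torch.zeros(32, 128)
z = sample_projected_normal(mu, log_var)  # shape (32, 128), unit-norm rows
```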
Two instantiations of VCL, VSimCLR and VSupCon, replace deterministic embeddings with samples from $q_\theta(z|x)$ and add a normalized KL term to the loss, as sketched below.
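Continuing the sketch above (reusing its imports and `sample_projected_normal`), a VSimCLR-style loss might combine the standard NT-Xent form of InfoNCE over sampled embeddings with a KL regularizer. The `kl_to_standard_normal` function below is a simple stand-in for the paper's normalized KL toward a uniform prior on the sphere, whose exact form is not reproduced here; all names are illustrative.

```python
def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Standard NT-Xent / InfoNCE loss over two batches of unit-norm embeddings."""
    n = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)                    # (2N, d)
    sim = z @ z.t() / temperature                     # cosine similarities (inputs are unit-norm)
    sim.fill_diagonal_(float("-inf"))                 # exclude self-similarity
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # index of each row's positive
    return F.cross_entropy(sim, targets)

def kl_to_standard_normal(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Stand-in regularizer: KL(N(mu, sigma^2 I) || N(0, I)) of the underlying Gaussian,
    normalized by embedding dimension. The paper's KL to a uniform prior on the
    hypersphere is not reproduced here."""
    kl = 0.5 * torch.sum(mu.pow(2) + log_var.exp() - 1.0 - log_var, dim=-1)
    return kl.mean() / mu.shape[-1]

def vsimclr_loss(mu1, log_var1, mu2, log_var2, kl_weight: float = 1.0) -> torch.Tensor:
    """Sketch of a VSimCLR-style objective: InfoNCE on sampled embeddings plus a KL term."""
    z1 = sample_projected_normal(mu1, log_var1)       # probabilistic embedding, view 1
    z2 = sample_projected_normal(mu2, log_var2)       # probabilistic embedding, view 2
    recon = info_nce(z1, z2)                          # InfoNCE plays the reconstruction role
    kl = kl_to_standard_normal(mu1, log_var1) + kl_to_standard_normal(mu2, log_var2)
    return recon + kl_weight * kl
```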
Experiments on several benchmarks show that VCL mitigates dimensional collapse, increases mutual information between embeddings and class labels, and matches or surpasses its deterministic counterparts in classification accuracy.
VCL also provides valuable uncertainty estimates through the posterior model, enhancing the probabilistic foundation of contrastive learning.
Overall, VCL introduces a probabilistic perspective to contrastive learning, providing a variational counterpart to existing deterministic methods.