Variational Autoencoders (VAEs) are a popular framework for unsupervised learning and data generation.
A study focused on Soft-IntroVAE (S-IntroVAE) and investigated the implications of incorporating a multimodal, learnable prior into this framework.
The study formulated the prior as a third player in the introspective game and showed that this is an effective route to prior learning, since the resulting game shares its Nash equilibrium with vanilla S-IntroVAE.
Experiments demonstrated the benefit of prior learning in S-IntroVAE for both generation and representation learning on benchmark datasets.
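A multimodal learnable prior of this kind is commonly parameterized as a mixture of Gaussians whose means, scales, and weights are trained alongside the encoder and decoder. The study's exact parameterization is not reproduced here; the snippet below is a generic sketch of such a prior's log-density under that assumption (the function name `gmm_log_prob` and its parameters are illustrative, not taken from the paper).

```python
import numpy as np

def _logsumexp(x):
    """Numerically stable log(sum(exp(x))) for a 1-D array."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def gmm_log_prob(z, means, log_stds, logits):
    """Log-density of latent z under a mixture of diagonal Gaussians.

    z:        (d,)   latent vector
    means:    (K, d) learnable component means
    log_stds: (K, d) learnable per-component log standard deviations
    logits:   (K,)   learnable unnormalized mixture weights
    """
    log_w = logits - _logsumexp(logits)  # normalized log mixture weights
    stds = np.exp(log_stds)
    # Per-component diagonal-Gaussian log-likelihoods of z
    comp = -0.5 * np.sum(
        ((z - means) / stds) ** 2 + 2 * log_stds + np.log(2 * np.pi), axis=1
    )
    # log sum_k w_k * N(z; mu_k, diag(sigma_k^2))
    return _logsumexp(log_w + comp)
```

With a single zero-mean, unit-variance component, this reduces to the standard-normal prior of a vanilla VAE, which makes the learnable mixture a strict generalization of the usual fixed prior.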