Existing deep generative models (DGMs) such as VAEs and GANs face challenges in handling discrete observations and discrete latent codes; in particular, the reparameterization trick that underlies VAE training does not extend to discrete latent variables.
Joint-stochastic-approximation (JSA) autoencoders have been introduced to address these issues: they directly maximize the data log-likelihood while simultaneously minimizing the inclusive KL divergence between the posterior and the inference model, and they extend naturally to semi-supervised learning.
In experiments, the JSA algorithm robustly handles mismatch between the generative and inference model structures as well as both discrete and continuous latent variables, and it achieves results comparable to state-of-the-art DGMs on semi-supervised tasks over MNIST and SVHN.
This marks the first successful application of discrete latent variable models to such challenging semi-supervised tasks.
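To make the training signal concrete, the following is a minimal sketch of one JSA update in PyTorch. The model objects `p` and `q`, their method names, and the single-chain bookkeeping are illustrative assumptions, not the authors' reference code. The key idea it illustrates: a Metropolis independence sampler, using the inference model as its proposal, supplies posterior samples that then drive ordinary gradient ascent on both models, so no reparameterization of the (possibly discrete) latents is needed.

```python
# A minimal sketch of one JSA training step for a single observation x,
# assuming hypothetical model objects `p` (generative) and `q` (inference):
#   p.log_joint(x, h) -> log p_theta(x, h)
#   q.log_prob(h, x)  -> log q_phi(h | x)
#   q.sample(x)       -> h ~ q_phi(. | x)  (may be discrete; no gradient needed)
import torch

def jsa_step(x, h_prev, p, q, opt_p, opt_q):
    # --- Metropolis independence sampler targeting p_theta(h | x),
    # --- with the inference model q_phi(h | x) as the proposal.
    with torch.no_grad():
        h_prop = q.sample(x)
        log_w_prop = p.log_joint(x, h_prop) - q.log_prob(h_prop, x)
        log_w_prev = p.log_joint(x, h_prev) - q.log_prob(h_prev, x)
        # Accept with probability min(1, w_prop / w_prev).
        accept = torch.rand(()) < torch.exp(
            torch.clamp(log_w_prop - log_w_prev, max=0.0))
        h = h_prop if accept else h_prev

    # --- Stochastic-approximation updates with the accepted sample:
    # --- ascending log p_theta(x, h) performs maximum likelihood on the joint,
    # --- ascending log q_phi(h | x) minimizes the inclusive KL(p || q).
    opt_p.zero_grad()
    opt_q.zero_grad()
    loss = -(p.log_joint(x, h) + q.log_prob(h, x))
    loss.backward()
    opt_p.step()
    opt_q.step()
    return h  # persist the chain state for the next pass over x
```

Because the accepted sample `h` is treated as a constant during the gradient step, the same update applies unchanged whether the latent code is discrete or continuous, which is what lets JSA sidestep the gradient-estimation difficulties noted above.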