A novel framework has been proposed for aligning learnable latent spaces with arbitrary target distributions by using flow-based generative models as priors.
The method first pretrains a flow model on target features to capture their distribution; this pretrained model then regularizes the latent space through an alignment loss.
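To make the mechanism concrete, here is a minimal sketch of what such an alignment loss could look like under a flow-matching parameterization; the interface `flow_prior(x_t, t)`, the linear interpolation path, and the uniform time sampling are illustrative assumptions, not details confirmed by the source.

```python
import torch

def alignment_loss(z: torch.Tensor, flow_prior: torch.nn.Module) -> torch.Tensor:
    """Hypothetical alignment loss against a frozen flow-matching prior.

    Assumes `flow_prior(x_t, t)` was pretrained on target features to
    predict the velocity z - eps of the linear path
    x_t = (1 - t) * eps + t * z. Scoring a batch of latents z takes a
    single forward pass: no likelihood evaluation, no ODE solve.
    """
    eps = torch.randn_like(z)                      # noise endpoint of the path
    t = torch.rand(z.size(0), device=z.device)     # one time per sample
    t_b = t.view(-1, *([1] * (z.dim() - 1)))       # broadcastable shape
    x_t = (1.0 - t_b) * eps + t_b * z              # point on the path
    v = flow_prior(x_t, t)                         # prior's predicted velocity
    # Regress the prior's velocity against the path's true velocity; with
    # the prior's parameters frozen, gradients flow only into z (through
    # both the network input and the regression target).
    return ((v - (z - eps)) ** 2).mean()
```

Because the loss is a plain regression against the prior's predicted velocity, it can be dropped into any training loop as an extra differentiable term.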
Minimizing this alignment loss yields a computationally tractable surrogate objective for maximizing a variational lower bound on the log-likelihood of the latents under the target distribution.
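Schematically, the kind of bound being invoked can be written as follows; the time weighting $w(t)$, the constant $C$ (absorbing terms independent of $z$), and the linear interpolation path are generic flow-matching assumptions rather than the paper's exact statement:

$$
\log p(z) \;\ge\; C \;-\; \mathbb{E}_{t \sim \mathcal{U}[0,1],\; \epsilon \sim \mathcal{N}(0, I)}\!\left[ w(t)\, \big\| v_\theta(x_t, t) - (z - \epsilon) \big\|^2 \right],
\qquad x_t = (1 - t)\,\epsilon + t\,z .
$$

Under a bound of this shape, driving the expected regression error down pushes the lower bound on $\log p(z)$ up, so the alignment loss can stand in for likelihood maximization without ever evaluating $\log p(z)$ itself.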
By eliminating expensive likelihood evaluations and avoiding ODE solving during optimization, the method keeps the alignment step computationally cheap; its effectiveness is demonstrated through image generation experiments on ImageNet.
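To illustrate why no ODE solver is needed, the following hypothetical training step reuses the `alignment_loss` sketch above and charges only one extra forward pass through the frozen prior per update; `TinyVelocityNet`, the layer sizes, and the dummy data are stand-ins, and a real prior would be pretrained on target features beforehand.

```python
import torch

class TinyVelocityNet(torch.nn.Module):
    """Stand-in for a pretrained flow prior conditioned on time t."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Linear(dim + 1, dim)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Concatenate the time step onto the features as a crude conditioning.
        return self.net(torch.cat([x_t, t[:, None]], dim=-1))

encoder = torch.nn.Linear(784, 64)          # learnable latent encoder (stand-in)
flow_prior = TinyVelocityNet(64)
for p in flow_prior.parameters():
    p.requires_grad_(False)                 # the prior stays frozen

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
x = torch.randn(32, 784)                    # dummy feature batch
z = encoder(x)
loss = alignment_loss(z, flow_prior)        # one forward pass, no ODE solve
opt.zero_grad()
loss.backward()                             # gradients reach only the encoder
opt.step()
```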