This tutorial focuses on the architectures of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). Both VAEs and GANs start from a simple base distribution, such as a Gaussian, and rely on the nonlinear transformation capacity of neural networks to approximate complex data distributions. The choice of a simple latent prior, however, introduces limitations.
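
To make the shared idea concrete, the minimal PyTorch sketch below samples latent codes from a standard Gaussian prior and pushes them through a small nonlinear network into data space. The network architecture, layer sizes, and the names `decoder`, `latent_dim`, and `data_dim` are illustrative assumptions, not part of any specific VAE or GAN described later.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumed, not prescribed by the tutorial).
latent_dim, data_dim = 16, 784

# A small nonlinear map from latent space to data space; in a VAE this
# role is played by the decoder, in a GAN by the generator.
decoder = nn.Sequential(
    nn.Linear(latent_dim, 128),
    nn.ReLU(),
    nn.Linear(128, data_dim),
    nn.Sigmoid(),  # squash outputs into [0, 1], e.g. pixel intensities
)

# Sample from the simple latent prior p(z) = N(0, I) ...
z = torch.randn(64, latent_dim)

# ... and transform it into (approximate) samples from the data distribution.
x_generated = decoder(z)
print(x_generated.shape)  # torch.Size([64, 784])
```

The expressive power thus comes entirely from the learned transformation; the prior itself stays deliberately simple, which is both the appeal and the source of the limitations discussed next.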