Unsupervised representation learning is essential for downstream applications such as Neural Architecture Search (NAS).
When sampling from the continuous latent space of a Variational Autoencoder (VAE), however, a high fraction of the decoded architectures are invalid or duplicates.
A Vector Quantized Variational Autoencoder (VQ-VAE) is therefore introduced to learn a discrete latent space for neural architectures.
By restricting latents to a finite codebook, the VQ-VAE substantially increases the proportion of valid and unique architectures generated during NAS.
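To make the discrete-latent idea concrete, the following is a minimal sketch of the nearest-neighbor quantization step at the core of a VQ-VAE: each continuous encoder output is snapped to its closest entry in a learned codebook, so every latent is one of finitely many valid codes. The function and array names here are illustrative, not from the original work, and codebook learning (commitment loss, straight-through gradients) is omitted.

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each continuous encoder output to its nearest codebook entry.

    z_e:      (N, D) array of continuous encoder outputs.
    codebook: (K, D) array of discrete embedding vectors.
    Returns (indices, z_q): the discrete code per input and its embedding.
    """
    # Squared Euclidean distance from every latent to every codebook vector.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete latent code per input
    z_q = codebook[indices]          # quantized representation fed to decoder
    return indices, z_q

# Tiny demo: 3 latents near entries 2, 0, 2 of a 4-entry 2-D codebook.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 2))
z_e = codebook[[2, 0, 2]] + 0.01 * rng.normal(size=(3, 2))
indices, z_q = quantize(z_e, codebook)
print(indices.tolist())  # each perturbed latent snaps back to its code
```

Because the decoder only ever sees one of the K codebook vectors, sampling from this space cannot land "between" architectures, which is one intuition for why fewer invalid decodings occur.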