Deep neural networks (DNNs) trained through end-to-end learning have achieved remarkable success across diverse machine learning tasks, but they are not designed to adhere to the Minimum Description Length (MDL) principle.
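For reference, the classical two-part formulation of the MDL principle (standard textbook notation, not specific to this work) selects the hypothesis $H$ that minimizes the total code length

\[
L(H) + L(D \mid H),
\]

where $L(H)$ is the length of the description of the model itself and $L(D \mid H)$ is the length of the data $D$ encoded with the model's help.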
A novel theoretical framework for designing and evaluating deep Variational Autoencoders (VAEs) based on the MDL principle is introduced.
Within this framework, a specific architecture, the Spectrum VAE, is designed so that its MDL can be rigorously evaluated under given conditions.
This work lays the foundation for future research on designing deep learning systems that explicitly adhere to information-theoretic principles.