Generative networks have shown considerable success in learning complex data distributions, yet their theoretical foundations remain poorly understood.
Previous theory suggested that the latent dimension must be at least the intrinsic dimension of the data manifold in order to approximate distributions supported on it. However, a new study challenges this requirement by demonstrating that generative networks can approximate such distributions from latent inputs of arbitrary dimension, including dimensions smaller than the manifold's intrinsic dimension.
This finding implies a trade-off between approximation error, latent dimensionality, and model complexity: a low-dimensional latent space can still achieve small approximation error, but only at the cost of a larger, more complex network.
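To make this trade-off concrete, here is a minimal numerical sketch, not taken from the study itself: a piecewise-linear "space-filling" map, in the spirit of such constructions, pushes a one-dimensional uniform latent onto n horizontal strips of the unit square, a target of intrinsic dimension two. The strip count n stands in for network complexity, and the function `zigzag` and all parameters below are illustrative assumptions.

```python
import numpy as np

def zigzag(z, n):
    """Illustrative space-filling map [0,1) -> [0,1]^2 (hypothetical, not
    the study's construction). It lays the 1-D latent interval onto n
    horizontal strips of the unit square; the staircase in y could be
    realized by a ReLU network up to steep ramps carrying negligible
    latent mass, with network size growing roughly linearly in n."""
    k = np.floor(n * z).astype(int)       # index of the strip z lands in
    t = n * z - k                         # position within that strip
    x = np.where(k % 2 == 0, t, 1.0 - t)  # reverse direction on odd strips
    y = k / n                             # strip height
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(0)
z = rng.uniform(size=100_000)             # 1-D uniform latent samples
u = rng.uniform(size=(10_000, 2))         # samples from the 2-D target

for n in (4, 16, 64):
    pts = zigzag(z, n)                    # pushforward of the latent
    # Vertical distance from each target sample to the nearest strip:
    # this gap bounds how far the pushforward support sits from the
    # target, and it shrinks like ~1/(4n) as the complexity n grows.
    gap = np.abs(u[:, 1:2] - np.arange(n) / n).min(axis=1)
    print(f"n={n:3d} strips: mean gap {gap.mean():.4f} "
          f"(theory ~ {1 / (4 * n):.4f})")
```

Under these assumptions the latent dimension (one) is below the target's intrinsic dimension (two), yet the approximation error decays as the map, and hence the network implementing it, grows, which is the error-versus-complexity trade-off described above.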