Variational autoencoders (VAEs) often suffer from posterior collapse, which degrades the quality of the learned representations. To mitigate this, an adjustable hyperparameter beta, which weights the KL regularization term, has been introduced into the VAE objective.
This study examines the conditions under which posterior collapse occurs, as determined by beta and the dataset size. The analysis shows that posterior collapse becomes inevitable once beta exceeds a certain threshold, regardless of dataset size, and that the rate-distortion curve of the VAE depends on the dataset size.
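The beta-weighted objective discussed above can be sketched as follows. This is a generic illustration of the standard beta-VAE loss with the closed-form Gaussian KL term, not the specific analysis of this study; the function names and the toy inputs are illustrative.

```python
import numpy as np

def kl_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def beta_vae_loss(recon_error, mu, logvar, beta):
    # beta scales the KL term: larger beta pushes q(z|x) toward the prior,
    # and past a threshold the posterior collapses onto it (KL -> 0).
    return recon_error + beta * kl_gaussian(mu, logvar)

# Toy check: when mu = 0 and logvar = 0, the posterior equals the prior,
# the KL term vanishes, and only the reconstruction error remains.
mu = np.zeros(8)
logvar = np.zeros(8)
print(beta_vae_loss(1.0, mu, logvar, beta=4.0))  # -> 1.0
```

In practice the reconstruction error would be the negative log-likelihood of the decoder; it is passed in as a scalar here to keep the sketch self-contained.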