Continuous improvements in image compression with variational autoencoders have led to learned codecs that are competitive with traditional ones.
Quantization poses a challenge during training because its derivative is zero almost everywhere, so differentiable approximations, such as additive uniform noise or the straight-through estimator, are required for gradient-based optimization.
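As an illustration, the sketch below shows the two standard differentiable stand-ins for rounding used in learned compression; these are common proxies in the literature, not necessarily the exact ones used in this work.

```python
import torch

def quantize_noise(y: torch.Tensor) -> torch.Tensor:
    # Training-time proxy: additive uniform noise in [-0.5, 0.5)
    # keeps the operation differentiable with respect to y.
    return y + torch.empty_like(y).uniform_(-0.5, 0.5)

def quantize_ste(y: torch.Tensor) -> torch.Tensor:
    # Straight-through estimator: hard rounding in the forward pass,
    # identity gradient in the backward pass.
    return y + (torch.round(y) - y).detach()
```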
The proposed method retrains parts of the network on hard-quantized latents after end-to-end training, so that these components model the actual quantization noise more accurately.
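A minimal sketch of such a retraining stage, assuming hypothetical `encoder`, `decoder`, and `entropy_model` modules, a rate-distortion objective `rd_loss`, and a data `loader` standing in for a trained VAE codec: the encoder is frozen, latents are hard-rounded, and only the remaining components are updated.

```python
import torch

def retrain_on_quantized(encoder, decoder, entropy_model,
                         rd_loss, loader, lr=1e-5):
    # Freeze the analysis transform; only decoder and entropy model adapt.
    encoder.requires_grad_(False)
    params = list(decoder.parameters()) + list(entropy_model.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for x in loader:  # one pass over the training data
        with torch.no_grad():
            y_hat = torch.round(encoder(x))  # true (hard) quantization
        x_hat = decoder(y_hat)               # reconstruction from quantized latents
        bits = entropy_model(y_hat)          # rate estimate (assumed interface)
        loss = rd_loss(x, x_hat, bits)       # rate-distortion trade-off
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Since the encoder and the quantizer are untouched, the bitstream format stays the same and no complexity is added at inference time.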
Results show additional coding gain for both uniform scalar and entropy-constrained quantization without increasing complexity, with average bitrate savings of up to 2.2%.