The researchers propose hyper-compression, a novel approach to model compression. Hyper-compression represents the parameters of the target network using dynamical systems as hyperfunctions: instead of storing the parameters directly, it stores compact indices into the trajectory of a fixed dynamical system that approximates them. This approach offers a favorable compression ratio, requires no post-hoc retraining, keeps inference time affordable, and compresses quickly. It achieves performance close to int4 quantization, with less than a 1% performance drop.
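To make the hyperfunction idea concrete, here is a minimal illustrative sketch, not the authors' actual algorithm: each pair of weights is replaced by a single integer index into the trajectory of an irrational winding on the 2-torus, h(n) = frac(n · a). The frequencies `a`, the search range `N`, and the pairwise grouping are all assumptions chosen for illustration.

```python
import numpy as np

# Assumed hyperfunction: an irrational winding on the unit 2-torus.
# The frequencies and trajectory length are illustrative choices,
# not taken from the paper.
A = np.array([np.sqrt(2.0), np.sqrt(3.0)])  # irrational frequencies
N = 4096                                    # trajectory points searched


def trajectory(n_points=N):
    """Points h(n) = frac(n * A) along the winding, n = 1..n_points."""
    n = np.arange(1, n_points + 1)[:, None]
    return np.mod(n * A, 1.0)


def compress(weights):
    """Map weights in [0, 1) to trajectory indices: 2 floats -> 1 int."""
    pairs = weights.reshape(-1, 2)
    traj = trajectory()
    # For each weight pair, pick the nearest point on the trajectory.
    d = np.linalg.norm(pairs[:, None, :] - traj[None, :, :], axis=-1)
    return d.argmin(axis=1)


def decompress(indices):
    """Recover approximate weights from the stored indices."""
    return trajectory()[indices].reshape(-1)


rng = np.random.default_rng(0)
w = rng.random(8)            # toy stand-in for network parameters
idx = compress(w)            # 8 floats compressed to 4 integers
w_hat = decompress(idx)      # approximate reconstruction
err = np.abs(w - w_hat).max()
```

Because the winding is ergodic, its trajectory densely fills the torus, so longer trajectories yield smaller reconstruction error at the cost of wider indices; this accuracy-versus-ratio trade-off is the knob such a scheme tunes.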