Implicit Neural Representations (INRs) provide a versatile way to represent discretized signals, offering benefits such as effectively infinite query resolution and reduced storage requirements.
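To make this concrete, the minimal sketch below shows an INR as a coordinate MLP: the network maps continuous coordinates to signal values, so a fitted signal can be resampled on a grid of any density, which is what infinite query resolution refers to. The architecture, layer sizes, and names are illustrative assumptions, not the model used in the paper.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """A tiny illustrative INR: maps (x, y) coordinates to RGB values."""
    def __init__(self, in_dim=2, hidden=64, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        return self.net(coords)

inr = CoordinateMLP()  # in practice, fitted to a single signal (e.g., one image)

# Query at any resolution: evaluate the network on an arbitrarily dense grid.
h, w = 512, 512
ys, xs = torch.meshgrid(torch.linspace(0, 1, h),
                        torch.linspace(0, 1, w), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
rgb = inr(coords).reshape(h, w, 3)
```

Because the signal lives in the network's weights rather than in a pixel grid, compressing an INR amounts to compressing those weights, which is the problem the rest of this section addresses.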
A new compression algorithm, SINR, is introduced; it compresses the vector spaces formed by the weights of INRs using high-dimensional sparse codes over a dictionary. Crucially, the atoms of the dictionary used to generate the sparse code do not need to be learned or transmitted, which yields substantial reductions in the storage required for INRs.
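As a rough illustration of why the atoms need not be transmitted, the hedged sketch below assumes the dictionary is generated from a fixed random distribution with a seed shared by encoder and decoder, so the decoder can regenerate it exactly and the encoder only sends the sparse code. The greedy matching-pursuit encoder and all names and parameters here are hypothetical stand-ins, not SINR's actual procedure.

```python
import numpy as np

def make_dictionary(seed, n_atoms, dim):
    # Atoms drawn from a fixed random distribution; a shared seed makes the
    # dictionary reproducible at the decoder, so it is never stored or sent.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((dim, n_atoms))
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def sparse_code(w, D, k):
    # Greedy matching-pursuit-style encoder: pick the k atoms most correlated
    # with the residual, refitting coefficients by least squares each step.
    residual, support = w.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], w, rcond=None)
        residual = w - D[:, support] @ coeffs
    return support, coeffs

# Encoder side: flatten an INR weight tensor and sparse-code it.
seed, dim, n_atoms, k = 0, 256, 4096, 16
w = np.random.default_rng(1).standard_normal(dim)  # stand-in for INR weights
D = make_dictionary(seed, n_atoms, dim)
support, coeffs = sparse_code(w, D, k)  # transmit only these, plus the seed

# Decoder side: rebuild the same dictionary from the seed and reconstruct.
D_dec = make_dictionary(seed, n_atoms, dim)
w_hat = D_dec[:, support] @ coeffs
print(np.linalg.norm(w - w_hat))  # reconstruction error of the sparse code
```

In this sketch only the atom indices, their coefficients, and the seed cross the wire; the dictionary itself, which would dominate the storage cost, is regenerated at the decoder rather than transmitted.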
SINR outperforms conventional INR-based compression techniques and maintains high-quality decoding across various data modalities.