Implicit neural representations (INRs) offer compact surrogates for speeding up scientific simulations, but they struggle to capture complex fields containing high-frequency variations.
Feature-Adaptive INR (FA-INR) is proposed as an effective alternative that uses cross-attention to a learnable memory bank, yielding flexible feature representations whose capacity adapts to the data rather than to a rigid grid structure.
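As an illustration of this idea, below is a minimal PyTorch sketch of cross-attention over a learnable memory bank. The class name `MemoryBankCrossAttention` and all hyperparameters are hypothetical choices for this sketch, not the FA-INR implementation.

```python
import torch
import torch.nn as nn

class MemoryBankCrossAttention(nn.Module):
    """Queries a learnable memory bank with an embedded coordinate.

    The bank plays the role of the feature grid in grid-based INRs,
    but its entries are not tied to fixed spatial locations:
    cross-attention lets each query coordinate softly select the
    features it needs.
    """

    def __init__(self, coord_dim=3, num_slots=256, feat_dim=64, num_heads=4):
        super().__init__()
        # Learnable key/value memory: num_slots entries of feat_dim features.
        self.keys = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.values = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.to_query = nn.Linear(coord_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, coords):                    # coords: (B, coord_dim)
        q = self.to_query(coords).unsqueeze(1)    # (B, 1, feat_dim)
        k = self.keys.unsqueeze(0).expand(coords.size(0), -1, -1)
        v = self.values.unsqueeze(0).expand(coords.size(0), -1, -1)
        out, _ = self.attn(q, k, v)               # (B, 1, feat_dim)
        return out.squeeze(1)                     # per-coordinate feature
```

The returned feature would then be decoded by a small MLP into the field value at that coordinate, as in standard feature-conditioned INRs.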
FA-INR achieves high fidelity on large-scale simulation datasets while substantially reducing model size, striking a new balance between accuracy and compactness.
In addition, a coordinate-guided mixture of experts (MoE) routes each coordinate to specialized feature encoders, further improving the specialization and efficiency of FA-INR's feature representations.
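The routing idea can be sketched as top-k gating over several expert encoders, for example instances of the memory-bank module above. The `CoordinateGuidedMoE` class below is an illustrative sketch under that assumption, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateGuidedMoE(nn.Module):
    """Routes each coordinate to a small set of expert feature encoders.

    A gating MLP scores every expert from the input coordinate alone;
    the per-coordinate feature is the gate-weighted sum of the top-k
    experts' outputs, so each expert can specialize on a spatial region.
    """

    def __init__(self, expert_fn, coord_dim=3, num_experts=4, top_k=1):
        super().__init__()
        self.experts = nn.ModuleList(expert_fn() for _ in range(num_experts))
        self.gate = nn.Sequential(
            nn.Linear(coord_dim, 64), nn.ReLU(), nn.Linear(64, num_experts)
        )
        self.top_k = top_k

    def forward(self, coords):                        # coords: (B, coord_dim)
        logits = self.gate(coords)                    # (B, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # renormalize over top-k
        # Dense evaluation of all experts for clarity; a real
        # implementation would dispatch only to the selected experts.
        stacked = torch.stack([e(coords) for e in self.experts], dim=1)
        rows = torch.arange(coords.size(0)).unsqueeze(-1)   # (B, 1)
        picked = stacked[rows, idx]                         # (B, top_k, D)
        return (weights.unsqueeze(-1) * picked).sum(dim=1)  # (B, D)
```

Wiring the two sketches together, `CoordinateGuidedMoE(MemoryBankCrossAttention)` would give each expert its own memory bank, with the coordinate-conditioned gate deciding which bank serves each query point.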