Symbolic regression (SR) aims to discover closed-form mathematical expressions that accurately describe data, offering interpretability and analytical insight beyond black-box models.
We introduce LIES (Logarithm, Identity, Exponential, Sine), a fixed neural network architecture with interpretable primitive activations, optimized to model symbolic expressions.
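As a rough illustration, a minimal sketch of a LIES-style network in PyTorch follows; the activation set (logarithm, identity, exponential, sine) comes from the description above, while the layer widths, depth, and numerical guards are illustrative assumptions rather than the exact design used in the paper.

```python
import torch
import torch.nn as nn

class LIESLayer(nn.Module):
    """One hidden layer whose units apply the LIES primitive activations:
    Logarithm, Identity, Exponential, Sine (split evenly across units)."""

    def __init__(self, in_features: int, units_per_activation: int = 4, eps: float = 1e-6):
        super().__init__()
        self.eps = eps  # guard for log near zero (assumed safeguard, not from the paper)
        # One shared linear map feeding all four activation groups.
        self.linear = nn.Linear(in_features, 4 * units_per_activation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)
        z_log, z_id, z_exp, z_sin = z.chunk(4, dim=-1)
        return torch.cat(
            [
                torch.log(torch.abs(z_log) + self.eps),   # Logarithm (guarded)
                z_id,                                      # Identity
                torch.exp(torch.clamp(z_exp, max=10.0)),   # Exponential (clamped for stability)
                torch.sin(z_sin),                          # Sine
            ],
            dim=-1,
        )

class LIESNetwork(nn.Module):
    """A fixed stack of LIES layers followed by a linear read-out."""

    def __init__(self, in_features: int, depth: int = 2, units_per_activation: int = 4):
        super().__init__()
        layers, width = [], in_features
        for _ in range(depth):
            layers.append(LIESLayer(width, units_per_activation))
            width = 4 * units_per_activation
        self.hidden = nn.Sequential(*layers)
        self.readout = nn.Linear(width, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.readout(self.hidden(x))
```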
The framework extracts compact formulae from trained LIES networks, using an oversampling strategy and a tailored loss function during training to promote sparsity and prevent gradient instability.
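A minimal sketch of such a training objective is given below, assuming the fit term is mean squared error and sparsity is encouraged with an L1 penalty on the network weights; the paper's tailored loss and oversampling scheme may differ in form and coefficients.

```python
import torch
import torch.nn as nn

def sparsity_loss(model: nn.Module,
                  y_pred: torch.Tensor,
                  y_true: torch.Tensor,
                  l1_coeff: float = 1e-3) -> torch.Tensor:
    """MSE fit term plus an L1 penalty on all weights to promote sparsity.

    Illustrative only: the actual tailored loss may add stability terms or
    use a different penalty and weighting schedule.
    """
    mse = torch.mean((y_pred - y_true) ** 2)
    l1 = sum(p.abs().sum() for p in model.parameters())
    return mse + l1_coeff * l1
```

Driving small weights toward zero in this way lets the subsequent extraction step prune inactive units and read off a compact symbolic formula from the remaining connections.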
Experiments on SR benchmarks show that the LIES framework consistently produces sparse and accurate symbolic formulae, outperforming all baselines; ablation studies demonstrate the importance of each design component.