Source: Arxiv
Sparse Interpretable Deep Learning with LIES Networks for Symbolic Regression

  • Symbolic regression (SR) aims to discover closed-form mathematical expressions that accurately describe data, offering interpretability and analytical insight beyond black-box models.
  • Introducing LIES (Logarithm, Identity, Exponential, Sine), a fixed neural network architecture with interpretable primitive activations optimized to model symbolic expressions.
  • The framework extracts compact formulae from trained LIES networks, using an oversampling strategy and a tailored loss function during training to promote sparsity and prevent gradient instability.
  • Experiments on SR benchmarks show that the LIES framework consistently produces sparse and accurate symbolic formulae, outperforming all baselines; ablation studies demonstrate the importance of each design component.
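The summary does not spell out the architecture's internals, but the core idea of a layer whose units apply the four named primitives (Logarithm, Identity, Exponential, Sine) can be sketched in a few lines. The even four-way split of units, the guarded log, and the clipped exp below are illustrative stabilization choices, not details taken from the paper:

```python
import numpy as np

def lies_activations(z):
    """Apply the four LIES primitives element-wise.

    Columns of z are split into four equal groups, one per primitive:
    Logarithm, Identity, Exponential, Sine. The log guard and exp clip
    are hypothetical choices to avoid the gradient instability the
    summary mentions; the paper's exact safeguards may differ.
    """
    log_part, id_part, exp_part, sin_part = np.split(z, 4, axis=-1)
    return np.concatenate([
        np.log(np.abs(log_part) + 1e-8),      # guarded logarithm
        id_part,                               # identity
        np.exp(np.clip(exp_part, -10.0, 10.0)),  # clipped exponential
        np.sin(sin_part),                      # sine
    ], axis=-1)

class LIESLayer:
    """One dense layer followed by the LIES primitives (illustrative only)."""

    def __init__(self, n_in, n_out, seed=0):
        assert n_out % 4 == 0, "units must split evenly across 4 primitives"
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def __call__(self, x):
        return lies_activations(x @ self.W + self.b)

def sparse_loss(y_pred, y_true, weights, lam=1e-3):
    """MSE plus an L1 penalty on the weights -- one common way to
    promote the sparsity the summary describes (assumed, not the
    paper's exact loss)."""
    mse = np.mean((y_pred - y_true) ** 2)
    l1 = sum(np.abs(w).sum() for w in weights)
    return mse + lam * l1

layer = LIESLayer(2, 8)
y = layer(np.array([[1.0, 2.0]]))
print(y.shape)  # (1, 8)
```

Sparse weights make extraction tractable: units whose incoming weights shrink to zero drop out of the network, and the surviving log/exp/sin/identity compositions can be read off as a compact closed-form expression.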
