Neural networks trained on large data sets lack the simplicity of traditional scientific models, leading to a growing emphasis on interpretability in scientific machine learning.
Researchers in the physical sciences seek not only predictive models but also an understanding of the fundamental principles governing the systems they study.
Existing definitions of interpretability in equation discovery and symbolic regression tend to equate sparsity with interpretability. The proposed operational definition challenges this convention, emphasizing understanding of mechanisms over mathematical sparsity.
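To make the sparsity convention concrete, the minimal sketch below performs SINDy-style sparse regression with an L1 penalty over a hand-chosen library of candidate terms; the data, library terms, and coefficient threshold are illustrative assumptions, not any specific method endorsed here.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative data: noisy samples from y = 1.5*x + 0.8*x^3
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 1.5 * x + 0.8 * x**3 + rng.normal(scale=0.1, size=x.shape)

# Assumed library of candidate terms (a hand-chosen basis)
library = np.column_stack([x, x**2, x**3, np.sin(x), np.exp(x)])
term_names = ["x", "x^2", "x^3", "sin(x)", "exp(x)"]

# The L1 penalty drives most coefficients to zero, yielding the
# sparse expression conventionally read as "interpretable"
model = Lasso(alpha=0.1).fit(library, y)
recovered = [
    f"{coef:.2f}*{name}"
    for coef, name in zip(model.coef_, term_names)
    if abs(coef) > 1e-3  # illustrative threshold for "active" terms
]
print(" + ".join(recovered))  # e.g. something close to "1.5*x + 0.8*x^3"
```

The recovered expression is sparse by construction, but that sparsity says nothing by itself about whether the retained terms correspond to physical mechanisms, which is precisely the gap the operational definition targets.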
A precise, philosophically informed definition of interpretability in scientific machine learning can help the field move past these conflicting notions and advance toward a data-driven scientific future.