The paper introduces a method called Mixture of Rule Experts guided by a Large Language Model (MoRE-LLM) to align machine learning models with human domain knowledge.
MoRE-LLM combines a data-driven black-box model with knowledge extracted from a Large Language Model (LLM) to enable domain knowledge-aligned and transparent predictions.
The Mixture of Rule Experts (MoRE) generates local rule-based surrogates during training and uses them for the classification task, while the LLM aligns the rules with domain knowledge and provides context for them.
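The gating idea behind MoRE can be illustrated with a minimal sketch: a per-sample gate blends an interpretable rule expert with a black-box expert. The toy rule, the logistic experts, and all parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rule_expert(x):
    # Hypothetical domain rule: if feature 0 exceeds 0.5, predict class 1.
    return 1.0 if x[0] > 0.5 else 0.0

def black_box_expert(x, w):
    # Stand-in for a trained black-box model: a logistic scorer.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def gate(x, v):
    # Gate decides, per sample, how much weight the rule expert gets.
    return 1.0 / (1.0 + np.exp(-np.dot(v, x)))

def more_predict(x, w, v):
    # Convex combination of the two experts, controlled by the gate.
    g = gate(x, v)
    return g * rule_expert(x) + (1.0 - g) * black_box_expert(x, w)

x = np.array([0.8, -0.2])   # toy input
w = np.array([1.5, 0.7])    # assumed black-box weights
v = np.array([2.0, 0.0])    # assumed gate weights
p = more_predict(x, w, v)
print(p)
```

Because the gate's weight is exposed per sample, one can inspect when the prediction was driven by the rule (and thus explainable by it) versus the black-box expert.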
The proposed method yields interpretable predictions and is evaluated on several datasets against both interpretable and non-interpretable baselines.