A recent paper studies the need for trustworthy explanations of machine learning models, especially in high-risk domains.
Interpretable models, in particular rule-based models such as decision trees, are commonly used in high-risk applications despite their inherent shortcomings.
The paper analyzes two such shortcomings of rule-based models, negative overlap and redundancy, and proposes algorithms to detect and address them.
It concludes that existing tools for learning rule-based ML models often lead to rule sets that exhibit these undesirable characteristics.
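To make the two notions concrete, here is a minimal, self-contained sketch, not the paper's algorithms, of how one might check a small rule set for them. It assumes a hypothetical `Rule` representation (a conjunction of feature-equality tests plus a predicted class), reads negative overlap as two rules that can fire on the same input while predicting different classes, and reads redundancy as a rule subsumed by a more general rule with the same prediction; all of these modeling choices are illustrative assumptions, not definitions taken from the paper.

```python
from dataclasses import dataclass
from itertools import combinations

# Illustrative rule representation (an assumption, not the paper's formalism):
# a conjunction of equality tests on features plus a predicted class.
@dataclass
class Rule:
    conditions: dict  # e.g. {0: 1, 3: 0} means feature 0 == 1 and feature 3 == 0
    prediction: int

def conditions_compatible(r1: Rule, r2: Rule) -> bool:
    """Two conjunctions of equality tests can fire on the same input
    unless they constrain some shared feature to different values."""
    shared = set(r1.conditions) & set(r2.conditions)
    return all(r1.conditions[f] == r2.conditions[f] for f in shared)

def negative_overlaps(rules):
    """Pairs of rules that may cover the same input yet predict different classes."""
    return [
        (r1, r2)
        for r1, r2 in combinations(rules, 2)
        if r1.prediction != r2.prediction and conditions_compatible(r1, r2)
    ]

def redundant_rules(rules):
    """Rules whose conditions are implied by a more general rule with the same prediction."""
    redundant = []
    for r1, r2 in combinations(rules, 2):
        if r1.prediction == r2.prediction:
            if set(r2.conditions.items()) <= set(r1.conditions.items()):
                redundant.append(r1)  # r1 is more specific, hence subsumed by r2
            elif set(r1.conditions.items()) <= set(r2.conditions.items()):
                redundant.append(r2)  # r2 is more specific, hence subsumed by r1
    return redundant

if __name__ == "__main__":
    rule_set = [
        Rule({0: 1}, prediction=1),
        Rule({0: 1, 2: 0}, prediction=0),  # can negatively overlap with the rule above
        Rule({0: 1, 3: 1}, prediction=1),  # redundant: subsumed by the first rule
    ]
    print(negative_overlaps(rule_set))
    print(redundant_rules(rule_set))
```

Even on this toy three-rule set, a brute-force pairwise check surfaces one conflicting pair and one subsumed rule, which illustrates why the paper argues that learned rule sets should be audited for these properties rather than trusted by construction.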