The Rashomon effect in machine learning refers to the observation that many different models can achieve similar predictive performance while explaining the relationships in the data differently.
Even inherently interpretable models such as Generalized Additive Models (GAMs) can have multiple configurations with similar performance, which motivates tailoring the chosen configuration to individual users' interpretability needs.
A study developed an approach using contextual bandits to personalize GAM configurations based on users' interpretability needs.
Results from an online experiment with 108 users showed that the approach produced individualized model configurations without compromising interpretability, offering insight into the potential of personalized interpretable machine learning.
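To make the idea concrete, below is a minimal sketch of how a contextual bandit could select among a small set of GAM configurations from a user-context vector. The study's summary does not specify the bandit algorithm, so LinUCB is used here as one common choice; the arm labels, context features, and reward signal are illustrative assumptions, not the study's actual design.

```python
import numpy as np

class LinUCB:
    """LinUCB sketch: one ridge-regression payoff estimate per arm,
    where each arm stands for a candidate GAM configuration."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha                                # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward vectors

    def select(self, context):
        """Pick the arm with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                             # ridge estimate of payoff
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        """Incorporate observed feedback, e.g. a user's rating of the shown GAM."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Assumed setup: arms are GAM configurations trading off flexibility against
# simplicity; contexts encode user traits or stated interpretability needs.
configs = ["few-knots GAM", "medium GAM", "flexible GAM"]  # hypothetical arms
bandit = LinUCB(n_arms=len(configs), dim=4)

rng = np.random.default_rng(0)
for _ in range(100):
    user_context = rng.normal(size=4)   # assumed user-feature vector
    arm = bandit.select(user_context)
    reward = rng.random()               # stands in for real user feedback
    bandit.update(arm, user_context, reward)
```

In such a setup, the bandit gradually learns which configuration each kind of user prefers, while the confidence term keeps exploring configurations that have been shown to a given user profile only rarely.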