Machine learning explainability aims to make black-box models more transparent by identifying the input features most important to a prediction.
Recent work proposes building explanations from semantic concepts to improve interpretability, but it often overlooks the communication context in which explanations are delivered.
This study introduces listener-adaptive explanations, which account for the listener's understanding in order to maximize communicative utility.
The proposed method, grounded in pragmatic reasoning, improved alignment with listener preferences on image classification tasks and increased communicative utility in user studies.
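To make the pragmatic-reasoning idea concrete, the sketch below implements a Rational Speech Acts (RSA)-style speaker that picks the concept-based explanation under which a simulated listener most reliably recovers the intended class. This is a minimal illustration under assumed names and values (the toy lexicon, the class prior, and the rationality parameter `alpha` are all hypothetical), not the authors' actual implementation.

```python
import numpy as np

# Rows: candidate concept-based explanations; columns: classes the
# listener might infer. lexicon[e, c] = 1 if explanation e is
# literally consistent with class c. (Toy values, assumed.)
lexicon = np.array([
    [1, 1, 0],   # "has stripes"   -> zebra, tiger
    [1, 0, 0],   # "has hooves"    -> zebra
    [0, 1, 1],   # "is a predator" -> tiger, shark
], dtype=float)

prior = np.array([1/3, 1/3, 1/3])  # listener's prior over classes (assumed uniform)
alpha = 3.0                        # speaker rationality parameter (assumed)

# Literal listener: L0(c | e) proportional to lexicon[e, c] * prior[c].
L0 = lexicon * prior
L0 /= L0.sum(axis=1, keepdims=True)

# Pragmatic speaker: S1(e | c) proportional to exp(alpha * log L0(c | e)),
# i.e., it prefers explanations that make the listener's posterior on the
# intended class as high as possible.
with np.errstate(divide="ignore"):
    utility = np.log(L0)
S1 = np.exp(alpha * utility)
S1 /= S1.sum(axis=0, keepdims=True)

target_class = 0  # e.g., "zebra"
best = int(np.argmax(S1[:, target_class]))
print(f"chosen explanation index for class {target_class}: {best}")
# -> index 1 ("has hooves"), the explanation unique to zebra
```

Note the pragmatic effect: although "has stripes" is literally true of a zebra, the speaker prefers "has hooves" because a listener hearing "has stripes" could just as well infer tiger; a listener-adaptive explainer would condition this choice on the specific listener's prior rather than a uniform one.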