Explainable Artificial Intelligence (XAI) is crucial for fostering trust in opaque models and for detecting their potential misbehavior.
LIME (Local Interpretable Model-agnostic Explanations) is a popular model-agnostic approach for generating explanations of black-box models. However, it faces challenges related to fidelity, stability, and applicability to domain-specific problems.
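LIME's core idea is to perturb the input around a single instance, weight the perturbed samples by their proximity to that instance, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The following minimal sketch illustrates this for tabular data; the function name `lime_explain`, the Gaussian perturbation scheme, and the kernel width are simplifying assumptions rather than the reference implementation:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, n_samples=1000, kernel_width=0.75, rng=None):
    """Minimal tabular-LIME sketch (illustrative, not the official library):
    perturb x, weight samples by proximity, fit a weighted linear surrogate."""
    rng = np.random.default_rng(rng)
    # Perturb the instance with Gaussian noise (assumes standardized features).
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    y = predict_fn(Z)
    # Exponential kernel: perturbations closer to x get higher weight.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # Small ridge penalty keeps the surrogate stable without much shrinkage.
    surrogate = Ridge(alpha=0.01)
    surrogate.fit(Z, y, sample_weight=w)
    # Coefficients are the local feature attributions.
    return surrogate.coef_

# Usage: explain a toy black-box model f(x) = 3*x0 - 2*x1 at one point;
# the recovered attributions approximate the true local slopes (3, -2).
f = lambda Z: 3 * Z[:, 0] - 2 * Z[:, 1]
attributions = lime_explain(f, np.array([0.5, -1.0]), rng=0)
```

Because the toy model here is exactly linear, the surrogate recovers its slopes almost perfectly; for genuinely nonlinear black boxes, the quality of this local approximation is precisely the fidelity concern noted above.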
This survey comprehensively collects LIME's foundational concepts and known limitations, categorizes and compares its enhancements, and offers a structured taxonomy to guide future research and practical application.