Explaining machine learning (ML) models using eXplainable AI (XAI) techniques has become essential to make them more transparent and trustworthy.
This work presents a comparative analysis of Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) for explaining human activity recognition (HAR) models.
The study evaluates both methods on real-world HAR datasets and compares their strengths, limitations, and differences.
SHAP and Grad-CAM can complement each other to provide more interpretable and actionable model explanations, enhancing trust and transparency in ML models.
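To make the comparison concrete, the sketch below shows how the two methods could be applied side by side to a HAR-style 1D convolutional network. The model architecture, layer name `last_conv`, window size, channel count, and the random arrays standing in for sensor data are all hypothetical assumptions for illustration; SHAP is queried through its `GradientExplainer`, while Grad-CAM is implemented manually with a gradient tape, since it is a procedure rather than a packaged API.

```python
import numpy as np
import tensorflow as tf
import shap

# Hypothetical 1D-CNN for HAR: input windows of 128 time steps x 6 sensor channels,
# output probabilities over 6 activity classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 6)),
    tf.keras.layers.Conv1D(32, 5, activation="relu", name="last_conv"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(6, activation="softmax"),
])

# --- SHAP: per-input (time step x channel) attributions for each class ---
background = np.random.randn(50, 128, 6).astype("float32")  # stand-in for training windows
sample = np.random.randn(1, 128, 6).astype("float32")       # one window to explain
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(sample)                  # per-class attribution arrays

# --- Grad-CAM: class-discriminative saliency over the last conv layer ---
def grad_cam_1d(model, x, class_idx, conv_layer="last_conv"):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x)
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)              # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=1)                   # average gradients over time
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, :], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()        # normalised saliency per step

pred_class = int(np.argmax(model.predict(sample), axis=-1)[0])
cam = grad_cam_1d(model, sample, pred_class)
```

In this setting, SHAP yields fine-grained attributions for every time step and sensor channel, whereas Grad-CAM highlights which temporal regions of the convolutional feature maps drive the predicted class; viewed together, the two outputs illustrate the kind of complementary explanation discussed above.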