Explainable AI (XAI) techniques have become essential for interpreting machine learning models in high-stakes domains such as healthcare.
This work presents a comparative analysis of Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (GradCAM) for human activity recognition (HAR).
The study evaluates these methods on skeleton-based data from real-world datasets and provides insights into their strengths, limitations, and differences.
SHAP provides detailed per-feature attributions, while GradCAM delivers faster, spatially localized explanations, making the two methods complementary across applications.
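As a rough illustration of this contrast, the sketch below applies SHAP's model-agnostic KernelExplainer and a hand-rolled GradCAM pass to toy PyTorch models. The models, data shapes, joint counts, and layer choices are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch contrasting SHAP and GradCAM on toy HAR models.
# All shapes and architectures below are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
import shap

N_FEATURES, N_CLASSES = 25 * 3, 5  # assumed: 25 joints with (x, y, z) coords

# --- SHAP: per-feature attribution (model-agnostic, perturbation-based) ---
mlp = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))
mlp.eval()

def predict(x: np.ndarray) -> np.ndarray:
    """NumPy wrapper so SHAP's model-agnostic explainer can query the net."""
    with torch.no_grad():
        return mlp(torch.from_numpy(x).float()).numpy()

background = np.random.randn(20, N_FEATURES).astype(np.float32)  # reference set
sample = np.random.randn(1, N_FEATURES).astype(np.float32)
explainer = shap.KernelExplainer(predict, background)
shap_values = explainer.shap_values(sample)  # per-class attribution per feature

# --- GradCAM: spatial saliency from one forward + one backward pass ---
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, N_CLASSES),
)
cnn.eval()

activations, gradients = {}, {}
target_layer = cnn[0]  # explain the conv layer's feature maps
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 1, 30, 25, requires_grad=True)  # assumed: frames x joints grid
logits = cnn(x)
logits[0, logits.argmax()].backward()  # backprop the top predicted class score

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
cam = torch.relu((weights * activations["a"]).sum(dim=1)).squeeze()  # weighted maps
cam = cam / (cam.max() + 1e-8)  # normalized [0, 1] saliency heatmap
```

The cost difference is visible even in this sketch: KernelExplainer queries the model on many perturbed copies of each sample, whereas GradCAM needs only a single forward and backward pass, which is why the latter is typically much faster but limited to spatial, layer-level explanations.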