Integrating AI into healthcare can greatly improve patient care and system efficiency, but the lack of explainability in AI systems hinders their clinical adoption.
Existing explainability methods for AI models in healthcare are limited to unimodal settings and fail to capture cross-modal interactions.
This paper introduces InterSHAP, a cross-modal interaction score that quantifies the contributions of individual modalities and their interactions without approximations.
On multimodal medical datasets, InterSHAP accurately measures cross-modal interactions, handles multiple modalities, and provides detailed explanations for individual samples.
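For intuition only, the sketch below illustrates the general idea of treating modalities as players and exhaustively evaluating modality coalitions against a masked baseline, which is what makes an exact, approximation-free score feasible when the number of modalities is small. It is not the paper's definition of InterSHAP: the model interface, the baseline masking, and the simplified interaction share (the portion of a prediction not explained by modalities acting alone) are all illustrative assumptions.

```python
from typing import Callable, Dict

import numpy as np


def masked_output(model: Callable, sample: Dict[str, np.ndarray],
                  baseline: Dict[str, np.ndarray], present: set) -> float:
    """Evaluate the model with only the modalities in `present`;
    all other modalities are replaced by their baseline (masked out)."""
    inputs = {m: (sample[m] if m in present else baseline[m]) for m in sample}
    return float(model(inputs))


def cross_modal_interaction_share(model: Callable,
                                  sample: Dict[str, np.ndarray],
                                  baseline: Dict[str, np.ndarray]) -> float:
    """Illustrative per-sample interaction score: the fraction of the
    prediction (relative to the fully masked input) that is NOT explained
    by any single modality acting on its own."""
    modalities = set(sample)
    empty = masked_output(model, sample, baseline, set())                # v(empty set)
    total = masked_output(model, sample, baseline, modalities) - empty   # v(N) - v(empty set)
    main_effects = sum(
        masked_output(model, sample, baseline, {m}) - empty              # v({m}) - v(empty set)
        for m in modalities
    )
    return 1.0 - main_effects / total if total else 0.0
```

Under these assumptions, a value near 0 means the prediction is essentially additive in the individual modalities, while a value near 1 means it is driven by cross-modal interaction.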