Graph Neural Networks (GNNs) have shown remarkable success in molecular tasks, yet their interpretability remains challenging.
To address limitations of traditional model-level explanation methods, MAGE (Motif-based GNN Explainer) uses motifs as the fundamental units of its explanations, ensuring that critical substructures are incorporated into them.
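The motif-as-unit idea can be illustrated with a minimal sketch. This is not MAGE's actual algorithm: the toy scoring function, the node weights, and the perturbation-based importance measure below are all illustrative assumptions, standing in for a trained GNN and for chemically derived motifs.

```python
# Toy stand-in for a trained GNN: scores a molecular graph from fixed
# per-element weights. This is NOT MAGE's model, only an illustration.
NODE_WEIGHTS = {"C": 0.1, "N": 0.5, "O": 0.8}

def model_score(nodes, edges):
    """Score a graph: node-label weights plus a small bonus per edge."""
    return sum(NODE_WEIGHTS[lbl] for _, lbl in nodes) + 0.05 * len(edges)

def motif_importance(nodes, edges, motif):
    """Importance of a motif (a set of node ids): the drop in the model's
    score when the motif's nodes and their incident edges are removed."""
    kept_nodes = [(i, lbl) for i, lbl in nodes if i not in motif]
    kept_edges = [(u, v) for u, v in edges if u not in motif and v not in motif]
    return model_score(nodes, edges) - model_score(kept_nodes, kept_edges)

# A toy molecular graph: node ids with element labels, undirected edges.
nodes = [(0, "C"), (1, "C"), (2, "N"), (3, "O")]
edges = [(0, 1), (1, 2), (2, 3)]

# Candidate motifs: here, just every bonded pair. A real system would use
# chemically meaningful motifs such as rings or functional groups.
motifs = [frozenset(e) for e in edges]

# Rank motifs by importance; the top-ranked motif is the explanation.
ranked = sorted(motifs, key=lambda m: motif_importance(nodes, edges, m),
                reverse=True)
print(ranked[0])  # the N-O pair, which carries the largest weights
```

The key design point this sketch shares with motif-based explanation is that importance is attributed to whole substructures rather than to individual nodes or edges, so the explanation stays interpretable as a chemical fragment.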
Quantitative and qualitative assessments on real-world molecular datasets demonstrate MAGE's effectiveness.