Recent legislation has emphasized the need for accountable and transparent artificial intelligence systems.
However, the lack of standardized criteria for validating explanation methods hinders the development of trustworthy systems.
This study focuses on the robustness of explanations in Explainable Artificial Intelligence (XAI) as a prerequisite for trust in both the system and the explanations it provides.
The authors propose a novel approach for analyzing the robustness of neural network explanations and present an ensemble method for aggregating multiple explanations.
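To illustrate what aggregating explanations can look like in practice, the sketch below averages normalized attribution maps produced by different explanation methods for the same input. The function names and the normalize-then-average scheme are illustrative assumptions, not the aggregation rule proposed in this work.

```python
import numpy as np


def normalize_attribution(attr: np.ndarray) -> np.ndarray:
    """Scale an attribution map to unit L1 norm so methods with
    different output scales contribute comparably (illustrative choice)."""
    total = np.abs(attr).sum()
    return attr / total if total > 0 else attr


def aggregate_explanations(attributions: list[np.ndarray]) -> np.ndarray:
    """Ensemble aggregation sketch: average normalized attribution maps
    from several explanation methods applied to the same input."""
    normalized = [normalize_attribution(a) for a in attributions]
    return np.mean(normalized, axis=0)


# Usage with hypothetical attribution maps (e.g., from saliency,
# Integrated Gradients, and LRP) for a single 28x28 input:
rng = np.random.default_rng(0)
maps = [rng.normal(size=(28, 28)) for _ in range(3)]
ensemble_map = aggregate_explanations(maps)
print(ensemble_map.shape)  # (28, 28)
```

Averaging is only one possible combination rule; the same structure accommodates, for example, a median or a rank-based aggregation if individual methods produce outliers.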