techminis

A naukri.com initiative

Image Credit: Arxiv

When Can You Trust Your Explanations? A Robustness Analysis on Feature Importances

  • Recent legislative regulations have emphasized the need for accountable and transparent artificial intelligence systems.
  • However, the lack of standardized criteria to validate explanation methodologies is hindering the development of trustworthy systems.
  • This study focuses on the robustness of explanations in Explainable Artificial Intelligence (XAI) to ensure trust in both the system and the provided explanation.
  • The authors propose a novel approach to analyze the robustness of neural network explanations and present an ensemble method to aggregate various explanations.
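The summary mentions an ensemble method that aggregates several explanations. The paper's exact aggregation procedure is not given here, so the following is a minimal sketch under an assumption: each explainer (e.g. saliency, Integrated Gradients, SHAP) yields a feature-importance vector over the same features, and the ensemble normalizes each vector to unit L1 mass and averages them. The function name `aggregate_importances` and the sample numbers are hypothetical.

```python
import numpy as np

def aggregate_importances(attributions):
    """Combine feature-importance vectors from several explanation
    methods into one consensus vector.

    `attributions`: list of 1-D arrays, one per explainer, all over
    the same set of features. This is an illustrative aggregation
    (L1-normalize, then average), not the paper's exact method.
    """
    stacked = np.stack([np.abs(a) / np.abs(a).sum() for a in attributions])
    return stacked.mean(axis=0)

# Hypothetical attributions for 4 features from 3 explanation methods.
methods = [
    np.array([0.6, 0.2, 0.1, 0.1]),
    np.array([0.5, 0.3, 0.1, 0.1]),
    np.array([0.7, 0.1, 0.1, 0.1]),
]
consensus = aggregate_importances(methods)
print(consensus)            # averaged, L1-normalized importances
print(consensus.argmax())   # feature 0 ranks highest in every method
```

Averaging normalized magnitudes is one common way to damp the instability of any single explainer: a feature ranked highly by only one method is pulled down, which is in the spirit of the robustness goal described above.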

