Source: Arxiv

ExplainBench: A Benchmark Framework for Local Model Explanations in Fairness-Critical Applications

  • ExplainBench is an open-source benchmarking suite designed for evaluating local model explanations in critical domains like criminal justice, finance, and healthcare.
  • It aims to standardize and facilitate the comparative assessment of explanation techniques like SHAP, LIME, and counterfactual methods, especially in fairness-sensitive contexts.
  • ExplainBench offers unified wrappers for explanation algorithms, integrates pipelines for model training and explanation generation, and supports evaluation using metrics like fidelity, sparsity, and robustness.
  • The framework includes a graphical interface for interactive exploration and is packaged as a Python module. It is demonstrated on the COMPAS, UCI Adult Income, and LendingClub datasets to showcase its utility in promoting interpretable machine learning and accountability in AI systems.
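The summary does not show ExplainBench's actual API, but the idea of a unified explanation wrapper plus a fidelity metric can be sketched in plain Python. The snippet below is a hypothetical illustration, not ExplainBench code: it fits a LIME-style local linear surrogate to a black-box model's probabilities on perturbations around one input, then scores fidelity as the surrogate's R² on that neighbourhood (all function names here are invented for illustration).

```python
# Hypothetical sketch -- ExplainBench's real API is not shown in the summary.
# Illustrates a LIME-style local surrogate and a fidelity metric on
# synthetic tabular data.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# A simple "black-box" model trained on synthetic data: the label
# depends mainly on features 0 and 1.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain_local(model, x, n_samples=200, scale=0.3):
    """LIME-style local explanation: fit a linear surrogate to the
    model's predicted probabilities on perturbations around x."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    p = model.predict_proba(Z)[:, 1]
    surrogate = LinearRegression().fit(Z, p)
    return surrogate, Z, p

def fidelity(surrogate, Z, p):
    """Fidelity: R^2 of the surrogate's fit to the black-box outputs
    on the perturbation neighbourhood (higher is more faithful)."""
    return surrogate.score(Z, p)

x0 = X[0]
surrogate, Z, p = explain_local(model, x0)
print("feature weights:", surrogate.coef_.round(2))
print("fidelity:", round(fidelity(surrogate, Z, p), 3))
```

A benchmarking suite like the one described would wrap SHAP, LIME, and counterfactual methods behind one such interface so the same fidelity, sparsity, and robustness metrics can be computed for each.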
