techminis

A naukri.com initiative


Source: Arxiv · 1d

Image Credit: Arxiv

Whence Is A Model Fair? Fixing Fairness Bugs via Propensity Score Matching

  • Fairness-aware learning aims to mitigate discrimination against specific protected social groups.
  • Training and test data sampling can affect the reliability of reported fairness metrics.
  • FairMatch, a post-processing method, utilizes propensity score matching to evaluate and mitigate bias.
  • Experimental results show that FairMatch improves fairness evaluation and mitigation without sacrificing predictive performance.
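The core idea behind propensity score matching is to pair individuals from the protected and unprotected groups who have similar propensity scores, then compare model outcomes only on those matched pairs. The sketch below is a generic illustration of that idea, not the FairMatch implementation; the greedy nearest-neighbour matcher, the `caliper` threshold, and the toy data are all assumptions for illustration.

```python
# Hedged sketch of propensity score matching (PSM) for fairness auditing.
# This is NOT the FairMatch code from the paper, only the generic PSM idea:
# match individuals across groups with similar propensity scores, then
# compare model decisions on the matched pairs.

def match_pairs(group_a, group_b, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    group_a, group_b: lists of (propensity_score, model_decision) tuples.
    Returns matched (decision_a, decision_b) pairs within the caliper.
    """
    available = sorted(group_b)  # candidates, ordered by propensity score
    pairs = []
    for score_a, decision_a in sorted(group_a):
        # nearest still-unmatched candidate in group_b
        best = min(available, key=lambda sb: abs(sb[0] - score_a), default=None)
        if best is not None and abs(best[0] - score_a) <= caliper:
            pairs.append((decision_a, best[1]))
            available.remove(best)  # each candidate is matched at most once
    return pairs

def matched_outcome_gap(pairs):
    """Average decision difference on matched pairs (a bias estimate)."""
    if not pairs:
        return 0.0
    return sum(a - b for a, b in pairs) / len(pairs)

# Toy data (hypothetical): (propensity_score, model_decision) per individual.
protected = [(0.2, 1), (0.5, 0), (0.8, 1)]
unprotected = [(0.25, 1), (0.55, 1), (0.9, 1)]

pairs = match_pairs(protected, unprotected)
gap = matched_outcome_gap(pairs)
# A nonzero gap on matched pairs suggests the model treats comparable
# individuals differently depending on group membership.
```

Comparing outcomes only on matched pairs, rather than on the raw groups, is what lets the method separate genuine model bias from differences introduced by how the training and test data were sampled.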


