techminis

A naukri.com initiative

Image Credit: Arxiv

Explaining Concept Shift with Interpretable Feature Attribution

  • Machine learning models can lose performance when deployed on data that differs from their training set; one such failure mode is concept shift.
  • Concept shift occurs when the distribution of labels conditioned on features, P(y | x), changes between training and deployment, so even a well-tuned ML model's learned representation becomes incorrect on the new data.
  • The paper proposes SGShift, a method for detecting concept shift in tabular data and attributing the drop in model performance to specific shifted features, using a Generalized Additive Model (GAM) with feature selection.
  • In experiments, SGShift identifies shifted features with high accuracy, outperforming baseline methods with AUC > 0.9 and recall > 90%.
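The attribution idea in the summary can be illustrated with a toy example. The sketch below is not SGShift's actual implementation; it is a minimal numpy illustration under an assumed linear additive model: fit the model on source data, then regress the target residuals on the features, so that a large correction coefficient flags the feature whose relationship to the label has shifted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5

# Source data: linear additive model y = X @ w + noise.
w = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
X_src = rng.normal(size=(n, d))
y_src = X_src @ w + 0.1 * rng.normal(size=n)

# Target data: concept shift -- the coefficient of feature 3 changes,
# i.e. P(y | x) differs while P(x) stays the same.
w_shift = w.copy()
w_shift[3] += 2.0
X_tgt = rng.normal(size=(n, d))
y_tgt = X_tgt @ w_shift + 0.1 * rng.normal(size=n)

# Step 1: fit the additive model on the source data.
w_hat, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Step 2: regress the target residuals on the features; a large
# correction coefficient attributes the performance drop to that feature.
resid = y_tgt - X_tgt @ w_hat
delta, *_ = np.linalg.lstsq(X_tgt, resid, rcond=None)

shifted = int(np.argmax(np.abs(delta)))
print("estimated shifted feature:", shifted)  # feature 3
```

SGShift itself works with GAMs and sparse feature selection rather than plain least squares, but the core logic is the same: only features whose conditional relationship to the label changed should need a correction term on the target data.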
