techminis (A naukri.com initiative)

Image Credit: Arxiv

Enabling Group Fairness in Graph Unlearning via Bi-level Debiasing

  • Graph unlearning protects user privacy by removing the influence of deleted user data from trained graph models.
  • Recent graph unlearning methods focus on preserving model performance while deleting user information, but the prediction distribution across sensitive groups has been observed to shift after unlearning.
  • The study shows that graph unlearning introduces bias, and proposes FGU, a fair graph unlearning method, to address this issue.
  • FGU preserves privacy by training shard models on partitioned subgraphs and enforces fairness through a bi-level debiasing process, achieving superior fairness without compromising privacy or accuracy.
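The shard-based unlearning idea above can be sketched in a few lines: partition the graph's nodes into shards, train one model per shard, and on a deletion request retrain only the affected shard, so the deleted node's influence is removed without full retraining. The toy "model" and the debiasing shrinkage below are illustrative assumptions, not the paper's actual FGU implementation; `lam`, `train_shard`, and the two-group setup are hypothetical names for this sketch.

```python
NUM_SHARDS = 4

def partition(nodes, num_shards=NUM_SHARDS):
    """Assign each node id to a shard (balanced round-robin partition)."""
    return {n: n % num_shards for n in nodes}

def train_shard(data):
    """Toy shard 'model': mean label per sensitive group, shrunk toward
    the shard's overall mean to reduce the group gap (a crude stand-in
    for the paper's shard-level debiasing)."""
    overall = sum(y for _, y, _ in data) / len(data)
    by_group = {}
    for _, y, g in data:
        by_group.setdefault(g, []).append(y)
    lam = 0.5  # debiasing strength (assumed hyperparameter)
    return {g: (1 - lam) * (sum(ys) / len(ys)) + lam * overall
            for g, ys in by_group.items()}

def predict(models, group):
    """Aggregate shard predictions (stand-in for the global alignment step)."""
    preds = [m.get(group, 0.5) for m in models]
    return sum(preds) / len(preds)

def unlearn(data, shard_of, models, node):
    """Delete one node's data and retrain only the shard that held it."""
    shard = shard_of[node]
    remaining = [d for d in data if d[0] != node]
    shard_data = [d for d in remaining if shard_of[d[0]] == shard]
    models[shard] = train_shard(shard_data)
    return remaining

# Demo: labels are correlated with the sensitive group, so an
# undebiased model would show a prediction gap of 1.0 between groups.
nodes = list(range(20))
data = [(i, 1 if i < 10 else 0, "A" if i < 10 else "B") for i in nodes]
shard_of = partition(nodes)
models = [train_shard([d for d in data if shard_of[d[0]] == s])
          for s in range(NUM_SHARDS)]
gap_before = predict(models, "A") - predict(models, "B")
data = unlearn(data, shard_of, models, node=0)  # only shard 0 retrains
gap_after = predict(models, "A") - predict(models, "B")
```

The deletion cost here is one shard retrain rather than a full retrain, which is the usual motivation for sharded (SISA-style) unlearning; the shrinkage term keeps the cross-group prediction gap below what the raw group means would produce.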
