Graph unlearning is essential for protecting user privacy, as it removes the influence of user data from trained graph models.
Recent graph unlearning methods focus on preserving model performance while deleting user information, yet they have been observed to shift the prediction distribution across sensitive groups.
This work shows that graph unlearning introduces bias and proposes FGU, a fair graph unlearning method, to address this issue.
FGU ensures privacy by training shard models on partitioned subgraphs and achieves fairness through a bi-level debiasing process, attaining superior fairness without compromising privacy or accuracy.
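To make the shard-and-debias workflow concrete, the following is a minimal sketch, not the authors' implementation: it assumes a simple linear classifier per shard, a random split standing in for graph partitioning, and a demographic-parity penalty applied at the shard level and on the aggregated predictions as a stand-in for the bi-level debiasing process. All names (`dp_gap`, `train_shard`) and hyperparameters are illustrative.

```python
# Hypothetical sketch of shard-based unlearning with fairness penalties.
import torch
import torch.nn.functional as F

def dp_gap(probs, s):
    """Demographic-parity gap: difference in mean positive rate between sensitive groups."""
    return (probs[s == 1].mean() - probs[s == 0].mean()).abs()

def train_shard(X, y, s, lam=1.0, epochs=200):
    """Train one shard model with a utility loss plus a shard-level debiasing term."""
    model = torch.nn.Linear(X.size(1), 2)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(X)
        loss = F.cross_entropy(logits, y)                          # utility term
        loss = loss + lam * dp_gap(logits.softmax(-1)[:, 1], s)    # shard-level debiasing
        loss.backward()
        opt.step()
    return model

# Toy data: 300 nodes, 8 features, binary labels, binary sensitive attribute.
torch.manual_seed(0)
X = torch.randn(300, 8)
y = torch.randint(0, 2, (300,))
s = torch.randint(0, 2, (300,))

# "Partition" the nodes into shards (random split in place of real graph partitioning).
shards = torch.chunk(torch.randperm(300), 3)
models = [train_shard(X[idx], y[idx], s[idx]) for idx in shards]

# Unlearning a node only requires retraining the shard that contains it,
# which is what keeps deletion cheap and the remaining data private.
node_to_forget = int(shards[0][0])
kept = shards[0][shards[0] != node_to_forget]
models[0] = train_shard(X[kept], y[kept], s[kept])

# Global level: aggregate shard predictions; a second debiasing step (e.g. reweighting
# shard outputs) would act on this ensemble to further reduce the disparity.
with torch.no_grad():
    agg = torch.stack([m(X).softmax(-1) for m in models]).mean(0)
print("ensemble DP gap:", float(dp_gap(agg[:, 1], s)))
```

The two penalty sites (per-shard loss and aggregated output) are meant only to illustrate why the debiasing is described as bi-level; the paper's actual objective and partitioning strategy may differ.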