Right-to-be-forgotten regulations have driven advances in certified unlearning strategies for graph data, which remove the influence of requested data from trained machine learning models.
A training-free unlearning procedure is developed for pre-trained graph ML models that offers certifiable bias mitigation through a single-step Newton update on the model weights.
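For concreteness, a single-step Newton update of this kind can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact procedure: the gradient and Hessian callables (grad_fn, hess_fn) and the damping term are placeholders standing in for whatever fairness-regularized objective the pre-trained model is adjusted toward.

```python
import numpy as np

def newton_unlearning_step(w, grad_fn, hess_fn, damping=1e-3):
    """Apply one damped Newton update to the model weights.

    Assumptions (not from the source):
      grad_fn(w) -> gradient of the fairness-regularized loss at w
      hess_fn(w) -> Hessian of that loss at w
      damping    -> small ridge term keeping the Hessian invertible
    """
    g = grad_fn(w)                                  # first-order information
    H = hess_fn(w)                                  # second-order information
    H_damped = H + damping * np.eye(H.shape[0])     # guard against singularity
    # Single closed-form correction: w_new = w - H^{-1} g
    return w - np.linalg.solve(H_damped, g)
```

Because the correction is applied once to the already-trained weights, the cost is dominated by a single Hessian solve rather than by any retraining epochs, which is what makes the procedure training-free.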
The approach provides a computationally lightweight alternative to existing fairness enhancement methods, with quantifiable performance guarantees.
Experimental results demonstrate that the developed unlearning strategies mitigate bias with minimal loss in node classification accuracy, achieving favorable utility-complexity trade-offs compared to retraining models from scratch.