Applying differential privacy to relational learning is important for protecting individual entities in sensitive domains.
Differential privacy (DP) provides a rigorous framework for quantifying privacy risk, and DP-SGD (differentially private stochastic gradient descent) is widely used for private model training.
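To make the DP-SGD baseline concrete, below is a minimal sketch of one update step in NumPy. The function name `dp_sgd_step`, the linear squared-loss model, and all parameter defaults are illustrative assumptions, not part of the original text: each per-example gradient is clipped to an L2 norm of `clip_norm`, and Gaussian noise calibrated to that bound is added before the averaged update.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD update for squared loss on a linear model (sketch)."""
    rng = rng or np.random.default_rng(0)
    # Per-example gradients of 0.5*(w.x_i - y_i)^2: grad_i = (w.x_i - y_i) * x_i
    residuals = X @ w - y                       # shape (n,)
    grads = residuals[:, None] * X              # shape (n, d)
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise scaled to the clipping bound, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)
```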
Applying DP-SGD to relational learning faces two main challenges: gradient sensitivity is high because a single entity can participate in many relations, and the coupled sampling procedures used to form minibatches complicate the privacy analysis.
This work introduces a framework for relational learning with formal entity-level DP guarantees, combining a sensitivity analysis, adaptive gradient clipping, and privacy-amplification results for coupled sampling procedures.
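The sketch below illustrates the entity-level clipping idea under simplifying assumptions, and is not the paper's exact method: each training example is attributed to a single entity, gradients are grouped and summed per entity, and each entity's total contribution (rather than each example's) is clipped, which is what bounds entity-level sensitivity when an entity appears in many relations. The helper name `clip_per_entity` is hypothetical, the threshold is fixed rather than adaptive for brevity, and relational examples that touch multiple entities would require the more careful sensitivity analysis the framework provides.

```python
import numpy as np

def clip_per_entity(grads, entity_ids, clip_norm=1.0):
    """Sum per-example gradients by entity, then clip each entity's sum
    to L2 norm <= clip_norm (simplified entity-level clipping sketch)."""
    clipped = np.zeros(grads.shape[1])
    for e in np.unique(entity_ids):
        g = grads[entity_ids == e].sum(axis=0)   # entity's total gradient
        scale = min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        clipped += scale * g                     # per-entity norm <= clip_norm
    return clipped
```

With this bound in place, Gaussian noise calibrated to `clip_norm` yields noise whose scale is independent of how many relations any one entity participates in.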