The paper proposes a framework called PCDP-SGD to improve the convergence of Differentially Private Stochastic Gradient Descent (DP-SGD).
Before gradient clipping, PCDP-SGD applies a projection operation that compresses the norm contributed by redundant gradient dimensions while preserving the crucial top gradient components, so that less useful signal is destroyed by clipping.
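As a rough illustration, the NumPy sketch below shows one projected-then-clipped DP-SGD step under the assumption that the top-k subspace is supplied as an orthonormal basis (e.g., estimated from auxiliary or historical gradients); the function name `dp_sgd_step_with_projection` and its parameters are illustrative, not the paper's actual implementation.

```python
import numpy as np

def dp_sgd_step_with_projection(per_sample_grads, basis, clip_norm,
                                noise_multiplier, rng):
    """One projected-then-clipped DP-SGD step (illustrative sketch).

    per_sample_grads: (batch, dim) array of per-sample gradients.
    basis: (dim, k) matrix with orthonormal columns spanning an
           assumed top-k gradient subspace.
    """
    # Project each gradient onto the top-k subspace: this removes the
    # norm carried by redundant dimensions while keeping the dominant
    # components, so less signal is lost to the subsequent clipping.
    projected = per_sample_grads @ basis                     # (batch, k)

    # Standard per-sample clipping, applied in the projected space.
    norms = np.linalg.norm(projected, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = projected * scale

    # Sum and add Gaussian noise calibrated to the clipping norm.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped.shape[1])

    # Average and lift the update back to the full parameter space.
    return basis @ (noisy_sum / per_sample_grads.shape[0])   # (dim,)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 100))                  # toy per-sample gradients
basis, _ = np.linalg.qr(rng.normal(size=(100, 8)))  # placeholder top-k basis
update = dp_sgd_step_with_projection(grads, basis, clip_norm=1.0,
                                     noise_multiplier=1.1, rng=rng)
```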
The framework is further extended to differentially private federated learning (DPFL) to mitigate data heterogeneity and reduce communication cost.
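One plausible way the projection yields communication savings, sketched below under the same assumptions, is for each client to project, clip, and noise its gradients locally and upload only a k-dimensional vector (k much smaller than the model dimension); this is a hypothetical illustration, not the paper's actual protocol.

```python
import numpy as np

def dpfl_round(client_grads, basis, clip_norm, noise_multiplier, rng):
    """One hypothetical DPFL round: each client clips and noises its
    projected gradients locally, then uploads only a k-dim vector."""
    k = basis.shape[1]
    uploads = []
    for grads in client_grads:                    # grads: (batch, dim)
        proj = grads @ basis                      # (batch, k)
        norms = np.linalg.norm(proj, axis=1, keepdims=True)
        clipped = proj * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        noisy = clipped.sum(axis=0) + rng.normal(
            0.0, noise_multiplier * clip_norm, size=k)
        uploads.append(noisy / grads.shape[0])    # each upload is k floats

    # Server averages the low-dimensional updates and lifts them back.
    return basis @ np.mean(uploads, axis=0)       # (dim,) global update

rng = np.random.default_rng(1)
basis, _ = np.linalg.qr(rng.normal(size=(100, 8)))   # placeholder basis
clients = [rng.normal(size=(16, 100)) for _ in range(5)]
global_update = dpfl_round(clients, basis, clip_norm=1.0,
                           noise_multiplier=1.1, rng=rng)
```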
Experimental results show that PCDP-SGD achieves higher accuracy than other DP-SGD variants and outperforms existing federated learning frameworks on computer vision tasks.