Differentially Private Stochastic Gradient Descent (DP-SGD) is widely used for privacy-preserving deep learning.
Selecting an appropriate clipping threshold C in DP-SGD is challenging: tuning it typically requires additional hyperparameter search, which incurs extra privacy budget consumption and computational overhead.
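To make the role of C concrete, the sketch below shows the standard per-sample clipping and Gaussian noising step of DP-SGD. The function name dp_sgd_step and the flattened per-sample gradient layout are assumptions made for this illustration, not a specific implementation from the paper.

```python
import torch

def dp_sgd_step(per_sample_grads, C, noise_multiplier):
    """Illustrative DP-SGD ingredient: clip each per-sample gradient
    to L2 norm at most C, then add Gaussian noise scaled by C.
    `per_sample_grads` has shape (batch_size, num_params)."""
    # Per-sample L2 norms and clipping factors min(1, C / ||g_i||)
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    clip_factors = (C / (norms + 1e-6)).clamp(max=1.0)
    clipped = per_sample_grads * clip_factors
    # Sum clipped gradients, add noise with std = noise_multiplier * C,
    # then average over the batch
    noisy_sum = clipped.sum(dim=0) + torch.normal(
        mean=0.0, std=noise_multiplier * C, size=clipped.shape[1:]
    )
    return noisy_sum / per_sample_grads.shape[0]
```

A larger C clips less but injects more noise, while a smaller C distorts gradients more; this trade-off is why the choice of C matters.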
A new framework called Dynamic Clipping DP-SGD (DC-SGD) is proposed, leveraging differentially private histograms to estimate gradient norm distributions and adjust the clipping threshold C dynamically.
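A minimal sketch of the histogram-based idea follows, assuming per-sample gradient norms are available and that C is chosen as a target percentile of a Laplace-noised histogram. The bin layout, noise mechanism, and percentile rule here are illustrative assumptions and do not reproduce DC-SGD's exact estimator or privacy accounting.

```python
import numpy as np

def dp_histogram_threshold(grad_norms, num_bins, max_norm, epsilon, percentile=0.5):
    """Illustrative dynamic-C estimation: build a differentially private
    histogram of per-sample gradient norms and return the bin edge that
    covers the target percentile as the new clipping threshold C."""
    edges = np.linspace(0.0, max_norm, num_bins + 1)
    counts, _ = np.histogram(grad_norms, bins=edges)
    # Each sample falls into exactly one bin, so the count sensitivity is 1;
    # add Laplace(1/epsilon) noise to each bin count (an assumed mechanism).
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=num_bins)
    noisy = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    if total <= 0:
        return max_norm  # fall back if noise wiped out all counts
    cumulative = np.cumsum(noisy) / total
    idx = int(np.searchsorted(cumulative, percentile))
    return float(edges[min(idx + 1, num_bins)])
```

Re-estimating C this way during training lets the threshold track the shrinking gradient norms, rather than fixing a single value found by costly tuning.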
Experimental results show that DC-SGD achieves up to a 9x speedup in hyperparameter tuning compared to DP-SGD, while improving accuracy without weakening the privacy guarantee.