Differential privacy (DP) is crucial for safeguarding sensitive patient data in medical deep learning.
As clinical models grow more data-dependent, balancing privacy, utility, and fairness has become a central challenge.
A scoping review synthesized recent developments in using DP for medical deep learning, focusing on DP-SGD and alternative mechanisms in centralized and federated settings.
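DP-SGD, the mechanism most prominent in this literature, enforces a privacy guarantee by clipping each per-example gradient to a fixed L2 norm and adding calibrated Gaussian noise before the parameter update. The following minimal NumPy sketch illustrates one such step on a toy linear model; the function name, hyperparameter values, and the linear-regression setting are illustrative assumptions, not details taken from the review.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD step for least-squares linear regression.

    Each per-example gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added before averaging and updating.
    (Hyperparameter defaults are placeholders, not recommendations.)
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    clipped_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        # Per-example gradient of the squared error (w . xi - yi)^2.
        g = 2.0 * (xi @ w - yi) * xi
        # Clip: rescale only if the gradient norm exceeds clip_norm.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        clipped_sum += g
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (clipped_sum + noise) / n
```

The clipping bound caps any single patient's influence on the update (the sensitivity), which is what lets the added Gaussian noise translate into a formal (epsilon, delta) guarantee via composition accounting; that accounting step is omitted here.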
While DP can preserve performance on structured imaging tasks, tight privacy budgets often cause substantial utility loss, especially in underrepresented or complex data modalities.