Distributed learning (DL) parallelizes training across multiple nodes, accelerating the optimization of large-scale models.
Stochastic Gradient Descent (SGD) is the core optimization algorithm in DL, but the communication needed to exchange gradients between nodes creates bottlenecks that limit scalability and efficiency.
Compressed SGD techniques reduce this communication overhead, but they raise trustworthiness concerns, including vulnerability to attacks such as gradient inversion and membership inference.
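To make the idea of gradient compression concrete, the sketch below implements top-k sparsification, one common compressed-SGD scheme in which each worker transmits only the largest-magnitude gradient entries. The function names and the 1% sparsity ratio are illustrative assumptions, not details taken from the studies discussed here.

```python
import torch

def topk_compress(grad: torch.Tensor, k_ratio: float = 0.01):
    """Keep only the largest-magnitude entries of the gradient; these values
    and their indices are what a worker transmits instead of the dense tensor."""
    flat = grad.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx

def topk_decompress(values: torch.Tensor, idx: torch.Tensor, like: torch.Tensor):
    """Rebuild a dense (mostly zero) gradient with the same shape as `like`."""
    flat = torch.zeros(like.numel(), dtype=like.dtype)
    flat[idx] = values
    return flat.reshape(like.shape)

# Example: a worker compresses its gradient before communication.
g = torch.randn(1024, 256)
values, idx = topk_compress(g, k_ratio=0.01)
g_hat = topk_decompress(values, idx, g)
print(f"transmitted {values.numel()} of {g.numel()} entries")
```

Because only the selected values and indices cross the network, the per-step communication cost drops roughly in proportion to the sparsity ratio, which is precisely what exposes the transmitted gradients to the inversion and inference attacks studied here.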
Empirical studies show that compressed SGD is more resistant to privacy leakage than uncompressed SGD. These results also call into question the reliability of membership inference attacks as a metric for assessing privacy risks in distributed learning.
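For context on how membership inference is typically instantiated as a privacy metric, the sketch below shows a minimal loss-threshold attack: examples whose loss falls below a calibrated threshold are guessed to be training-set members. The model, data, and threshold here are placeholder assumptions for illustration, not the setup evaluated in the studies above.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, samples, labels, threshold):
    """Minimal loss-threshold membership inference: predict 'member' when the
    per-example loss is below a threshold, on the assumption that training
    examples tend to be fit more tightly than unseen ones."""
    logits = model(samples)
    losses = F.cross_entropy(logits, labels, reduction="none")
    return losses < threshold  # True -> predicted training-set member

# Illustrative usage with a toy model; the threshold would normally be
# calibrated on held-out data.
model = torch.nn.Sequential(torch.nn.Linear(32, 10))
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
membership_guess = loss_threshold_mia(model, x, y, threshold=2.0)
print(membership_guess)
```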