Trustworthiness of Stochastic Gradient Descent in Distributed Learning

  • Distributed learning (DL) uses multiple nodes to accelerate training, enabling efficient optimization of large-scale models.
  • Stochastic Gradient Descent (SGD) is a key optimization algorithm in DL, but communication bottlenecks limit its scalability and efficiency.
  • Compressed SGD techniques reduce this communication overhead, but they introduce trustworthiness concerns, including attacks such as gradient inversion and membership inference (see the sketch after this list).
  • Empirical studies show that compressed SGD is more resistant to privacy leakage than uncompressed SGD; they also call into question the reliability of membership inference attacks as a metric for assessing privacy risk in distributed learning.
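As a rough illustration of the gradient-compression idea mentioned above, the sketch below shows a generic top-k sparsifier applied to a worker's local gradient before it is communicated, with the dropped mass kept as an error-feedback residual. The compressor, variable names, and parameters (k, learning rate, number of workers) are assumptions for illustration only, not the specific scheme studied in the paper.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient.

    Returns the sparse gradient that would actually be transmitted and
    the residual a worker would typically carry into the next step
    (error feedback). This is a generic top-k compressor, shown only
    to illustrate compressed SGD in general.
    """
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest |g_i|
    compressed = np.zeros_like(flat)
    compressed[idx] = flat[idx]
    residual = flat - compressed                  # the dropped coordinates
    return compressed.reshape(grad.shape), residual.reshape(grad.shape)

# Toy distributed step: each worker compresses its local gradient,
# and the server averages only the sparse messages it receives.
rng = np.random.default_rng(0)
local_grads = [rng.normal(size=1000) for _ in range(4)]  # hypothetical worker gradients
messages = [topk_compress(g, k=50)[0] for g in local_grads]
avg_update = np.mean(messages, axis=0)
lr = 0.1                                          # w <- w - lr * avg_update
```

Because only k values (plus their indices) are sent instead of the full gradient, communication cost drops sharply; the paper's empirical finding is that this lossy view of the gradient also appears to leak less private information than the uncompressed one.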

