techminis

A naukri.com initiative

Arxiv

4d

read

290

img
dot

On Model Protection in Federated Learning against Eavesdropping Attacks

  • This study investigates the protection that federated learning algorithms offer against eavesdropping adversaries.
  • The research focuses on safeguarding the client model itself.
  • The study examines factors that affect the level of protection, including client selection, local objective functions, global aggregation, and the eavesdropper's capabilities.
  • The results highlight the limitations of differential-privacy-based methods in this context.
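The moving parts listed above (client selection, local objectives, global aggregation, and the exchanged updates an eavesdropper could observe) can be illustrated with a minimal federated-averaging sketch. This is not the paper's algorithm; the scalar linear model, the function names, and all parameters are illustrative assumptions.

```python
import random

def local_update(w, data, lr=0.1):
    # Hypothetical local objective: one gradient step on the mean
    # squared error of a scalar linear model y ~ w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(w, clients, rounds=50, frac=0.5, rng=random):
    # Each round: sample a fraction of clients (client selection),
    # run local updates on their data, then average the returned
    # models (global aggregation). The model weights exchanged in
    # this loop are what an eavesdropper on the channel would see.
    for _ in range(rounds):
        k = max(1, int(frac * len(clients)))
        selected = rng.sample(clients, k)
        local_models = [local_update(w, d) for d in selected]
        w = sum(local_models) / len(local_models)
    return w
```

Because the client models themselves are transmitted each round, protecting them (rather than only the underlying data) is the concern the study analyzes.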
