Source: Arxiv
AugMixCloak: A Defense against Membership Inference Attacks via Image Transformation

  • Traditional machine learning (ML) raises privacy concerns because training data is centralized; federated learning (FL) keeps data local to mitigate this, but its training process can still leak sensitive information and expose participants to privacy risks.
  • A two-stage defense called AugMixCloak is introduced to counter membership inference attacks (MIA) in FL: query images are detected via perceptual hashing and then protected by applying data augmentation and PCA-based information fusion (a rough sketch of this pipeline follows the list).
  • Experimental results indicate that AugMixCloak effectively defends against both binary classifier-based MIA and metric-based MIA across multiple datasets and decentralized FL topologies.
  • AugMixCloak provides stronger protection than regularization-based defenses such as other existing mechanisms, and generalizes better than confidence score masking.

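To make the two-stage idea in the second bullet concrete, here is a minimal sketch. It is an illustration only, assuming a simple average hash as a stand-in for the paper's perceptual hash and an SVD-based mix of singular values as a stand-in for its PCA-based information fusion; all function names, thresholds, and augmentation choices below are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a two-stage "detect, then transform" defense.
# The hash, fusion recipe, and thresholds are assumptions for illustration.
import numpy as np
from PIL import Image, ImageOps


def average_hash(img: Image.Image, hash_size: int = 8) -> np.ndarray:
    """Tiny perceptual (average) hash: grayscale, downscale, threshold at the mean."""
    small = img.convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(small, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()


def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing hash bits."""
    return int(np.count_nonzero(h1 != h2))


def pca_fuse(query: np.ndarray, reference: np.ndarray, keep: int = 16) -> np.ndarray:
    """Mix the leading SVD components of the query with those of a reference image
    (a simple stand-in for PCA-based information fusion)."""
    uq, sq, vq = np.linalg.svd(query.astype(np.float32), full_matrices=False)
    _, sr, _ = np.linalg.svd(reference.astype(np.float32), full_matrices=False)
    s_mix = sq.copy()
    s_mix[:keep] = 0.5 * sq[:keep] + 0.5 * sr[:keep]
    fused = uq @ np.diag(s_mix) @ vq
    return np.clip(fused, 0, 255).astype(np.uint8)


def augment(img: Image.Image) -> Image.Image:
    """Simple augmentation: random horizontal flip plus a small random rotation."""
    if np.random.rand() < 0.5:
        img = ImageOps.mirror(img)
    return img.rotate(np.random.uniform(-10, 10))


def cloak_query(query: Image.Image,
                known_hashes: list[np.ndarray],
                reference: Image.Image,
                threshold: int = 10) -> Image.Image:
    """Stage 1: flag the query by comparing its perceptual hash to known hashes.
    Stage 2: if flagged, transform it via augmentation + PCA-style fusion."""
    qh = average_hash(query)
    flagged = any(hamming(qh, h) <= threshold for h in known_hashes)
    if not flagged:
        return query  # pass unflagged queries through unchanged
    gray_q = np.asarray(query.convert("L").resize((64, 64)))
    gray_r = np.asarray(reference.convert("L").resize((64, 64)))
    fused = Image.fromarray(pca_fuse(gray_q, gray_r))
    return augment(fused)
```

In this sketch the transformation is applied only to flagged queries, which is one plausible reading of the two-stage design; the paper's exact detection and fusion procedure may differ.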