Federated learning (FL) was proposed to mitigate the privacy concerns of traditional machine learning (ML) by keeping data local. However, the FL training process can still leak sensitive information about participants' data, exposing them to privacy risks.
AugMixCloak, a two-stage defense against membership inference attacks (MIAs) in FL, combines data augmentation, PCA-based information fusion, and perceptual hashing to protect query images.
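To make the ingredients concrete, the following is a minimal sketch of how such a two-stage pipeline could be assembled: a perceptual hash (average hash) flags query images that closely resemble known training images, and flagged queries are replaced by an augmented version fused with reference images via top-k PCA reconstruction. All function names, the flip augmentation, and the thresholds are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perceptual_hash(img, size=8):
    """Average hash: block-mean downsample to size x size, threshold at the mean.
    Assumes img is a 2D array whose dimensions are divisible by `size`."""
    h, w = img.shape
    small = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two boolean hash vectors."""
    return int(np.sum(h1 != h2))

def pca_fuse(query, references, k=2):
    """Project the query onto the top-k principal components of a small
    reference set and reconstruct, blending reference structure into it."""
    X = np.stack([r.flatten() for r in references])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    proj = (query.flatten() - mean) @ Vt[:k].T
    return (proj @ Vt[:k] + mean).reshape(query.shape)

def augmix_cloak(query, training_hashes, references, threshold=10):
    """Stage 1: hash-match the query against known training-image hashes.
    Stage 2: if it matches, return an augmented (here: flipped) and
    PCA-fused version instead of the original query."""
    qh = perceptual_hash(query)
    if any(hamming(qh, th) <= threshold for th in training_hashes):
        flipped = query[:, ::-1]  # simple augmentation: horizontal flip
        return pca_fuse(flipped, references)
    return query
```

The intuition is that only queries resembling training members are transformed, so benign queries keep their original predictions while member-like queries no longer produce the confident, training-specific outputs that MIAs exploit.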
Experimental results indicate that AugMixCloak effectively defends against both binary classifier-based MIA and metric-based MIA across multiple datasets and decentralized FL topologies.
Compared with regularization-based defenses, AugMixCloak provides stronger protection; compared with confidence score masking, it generalizes better.