This study investigates the protection offered by federated learning algorithms against eavesdropping adversaries.
The research focuses on safeguarding the client model itself.
The study examines the factors that affect the level of protection, including client selection, local objective functions, global aggregation, and the eavesdropper's capabilities, as illustrated in the sketch below.
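To make these factors concrete, the following is a minimal sketch of one federated averaging round in which client selection, a local objective, and global aggregation all appear. It assumes a FedAvg-style setup with a synthetic quadratic local objective; the parameter names and values are illustrative, not taken from the study.

```python
# Minimal FedAvg-style round illustrating the factors above: client selection,
# a local objective, and global aggregation. This is an illustrative sketch,
# not the study's actual protocol; the quadratic objective and parameter
# values are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, num_selected, local_steps, lr = 10, 5, 3, 5, 0.1

# Synthetic per-client data defining a least-squares local objective
# f_i(w) = ||A_i w - b_i||^2 / n_i.
A = [rng.normal(size=(20, dim)) for _ in range(num_clients)]
b = [rng.normal(size=20) for _ in range(num_clients)]

w_global = np.zeros(dim)

# --- Client selection: a random subset of clients participates this round.
selected = rng.choice(num_clients, size=num_selected, replace=False)

updates = []
for i in selected:
    # --- Local objective: each client runs gradient steps on its own loss.
    w = w_global.copy()
    for _ in range(local_steps):
        grad = 2 * A[i].T @ (A[i] @ w - b[i]) / len(b[i])
        w -= lr * grad
    updates.append(w)

# --- Global aggregation: the server averages the returned client models.
w_global = np.mean(updates, axis=0)
print("updated global model:", w_global)
```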
The results highlight the limitations of methods based on differential privacy in this context.
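For context, differential-privacy-based defenses in federated learning typically clip each client's update and add noise before it is shared. The sketch below shows that common mechanism in a minimal form; the clipping norm and noise multiplier are arbitrary example values, not parameters reported by the study.

```python
# Illustrative sketch of a typical differential-privacy mechanism in federated
# learning: clip a client's model update and add Gaussian noise before it
# leaves the client. Values below are example assumptions, not the study's.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng()
    # Clip the update to bound each client's contribution (sensitivity).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Add Gaussian noise scaled to the clipping norm.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

update = np.array([0.8, -2.4, 1.1])
print(privatize_update(update, rng=np.random.default_rng(0)))
```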