Federated learning (FL) enables collaborative model training across decentralized client datasets while preserving data privacy.
However, challenges such as noisy labels, missing classes, and imbalanced class distributions degrade the effectiveness of FL.
A new methodology is proposed to address these data quality issues in FL by enhancing data integrity through noise cleaning, synthetic data generation, and robust model training.
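The interplay of the components above can be illustrated, at toy scale, with a federated averaging loop in which each client discards its highest-loss samples before computing an update. This is a hedged sketch, not the paper's actual algorithm: the "small-loss" filtering rule, the logistic model, and all parameter values (`keep`, `lr`, client noise rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client(n, flip=0.0):
    """Synthetic binary data; a `flip` fraction of labels is corrupted."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    noisy = rng.random(n) < flip
    y[noisy] = 1 - y[noisy]
    return X, y

def local_update(w, X, y, lr=0.5, steps=20, keep=0.8):
    """Local logistic-regression steps with a simple small-loss noise filter."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        # Noise cleaning (illustrative): keep only the lowest-loss samples.
        idx = np.argsort(loss)[: int(keep * len(y))]
        grad = X[idx].T @ (p[idx] - y[idx]) / len(idx)
        w = w - lr * grad
    return w

# Three clients with increasing label-noise rates.
clients = [make_client(200, flip=f) for f in (0.0, 0.2, 0.4)]
w = np.zeros(2)
for _ in range(10):  # federated rounds
    updates = [local_update(w.copy(), X, y) for X, y in clients]
    w = np.mean(updates, axis=0)  # FedAvg-style aggregation

Xt, yt = make_client(500)  # clean held-out data
acc = np.mean(((Xt @ w) > 0) == yt)
```

Raw data never leaves a client; only the locally updated weight vectors are averaged, which is what preserves privacy in this setup.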
Experimental evaluations on the MNIST and Fashion-MNIST datasets show improved model performance, especially under label noise and class imbalance, while preserving data privacy and remaining practical for edge devices.