This paper introduces an enhanced asynchronous AdaBoost framework for federated learning (FL) with applications in computer vision, blockchain, mobile personalization, IoT anomaly detection, and healthcare diagnostics.
The algorithm incorporates adaptive communication scheduling and delayed weight compensation to reduce synchronization frequency and communication overhead while maintaining or enhancing model accuracy.
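Since the abstract does not specify the exact scheduling or compensation rules, the following is a minimal Python sketch of one plausible instantiation: the synchronization interval doubles when the local weighted error has plateaued, and late client updates are blended into the global ensemble weight with an exponential staleness discount. The function names (`adaptive_sync_interval`, `compensate_stale_update`) and parameters (`threshold`, `decay`) are hypothetical illustrations, not the paper's definitive method.

```python
import math

def adaptive_sync_interval(base_interval, error_delta, threshold, max_interval):
    """Adaptive communication scheduling (hypothetical rule): widen the
    synchronization interval when the local weighted error is stable."""
    if abs(error_delta) < threshold:
        # Error has plateaued: sync less often to cut communication.
        return min(base_interval * 2, max_interval)
    # Error still changing meaningfully: keep the current cadence.
    return base_interval

def compensate_stale_update(global_weight, client_weight, staleness, decay=0.5):
    """Delayed weight compensation (hypothetical rule): blend a late client
    contribution into the global ensemble weight, discounted by staleness."""
    alpha = math.exp(-decay * staleness)  # older updates count for less
    return (1 - alpha) * global_weight + alpha * client_weight

if __name__ == "__main__":
    # Error barely moved (0.002 < 0.01), so the sync interval doubles to 10.
    print(adaptive_sync_interval(base_interval=5, error_delta=0.002,
                                 threshold=0.01, max_interval=40))
    # An update arriving 3 rounds late is pulled toward the global weight.
    print(compensate_stale_update(global_weight=0.8, client_weight=0.5,
                                  staleness=3))
```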
In each domain, the study evaluates improvements in communication efficiency, scalability, convergence speed, and robustness using comparative metrics: training time, communication overhead, convergence iterations, and classification accuracy.
Empirical results demonstrate reductions of 20-35% in training time and 30-40% in communication overhead relative to baseline AdaBoost, along with convergence in fewer boosting rounds.
The research provides mathematical formulations for adaptive scheduling and error-driven synchronization thresholds, demonstrating improved efficiency and robustness across diverse FL scenarios.
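The paper's exact formulations are not reproduced in this abstract; the following is an illustrative sketch of an error-driven synchronization rule consistent with the description above, where $\epsilon_k^{(t)}$ denotes client $k$'s weighted error at round $t$, $\tau_k$ its staleness, and $\delta$, $\lambda$ are assumed tuning parameters:
\[
\text{sync}_k(t) = \mathbb{1}\!\left[\,\bigl|\epsilon_k^{(t)} - \epsilon_k^{(t-\tau_k)}\bigr| > \delta\,\right],
\qquad
\alpha_k^{(t)} = \frac{1}{2}\ln\!\frac{1-\epsilon_k^{(t)}}{\epsilon_k^{(t)}}\, e^{-\lambda \tau_k}.
\]
Here the first rule triggers synchronization only when a client's weighted error has shifted by more than $\delta$, and the second discounts the standard AdaBoost learner weight by an exponential staleness factor; both are plausible instantiations under these assumptions rather than the paper's own equations.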