This study focuses on the energy implications of federated learning (FL) within Artificial Intelligence of Things (AIoT) scenarios.
The research examines the energy consumed at each stage of the FL process, identifying pre-processing, communication, and local learning as the main energy-intensive steps.
The study proposes two clustering-informed methods for device/client selection in distributed AIoT settings, aiming to accelerate the convergence of model training.
Through extensive numerical experimentation, the clustering strategies achieve faster convergence and lower energy consumption than other recent approaches in the literature.
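The abstract does not detail the selection procedure, but a clustering-informed client-selection step could be sketched as follows. This is a minimal illustration under assumed specifics (k-means over hypothetical device profiles of battery level, link quality, and local data size, selecting the device nearest each centroid), not the paper's actual algorithm.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns per-device cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every device profile to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return labels, centroids

def select_clients(X, k, seed=0):
    """Pick one representative device per cluster: the one
    closest to its cluster centroid."""
    labels, centroids = kmeans(X, k, seed=seed)
    selected = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue  # empty cluster: skip
        d = np.linalg.norm(X[idx] - centroids[j], axis=1)
        selected.append(int(idx[d.argmin()]))
    return selected

# Hypothetical device profiles: [battery level, link quality, local data size]
rng = np.random.default_rng(42)
profiles = rng.random((30, 3))
chosen = select_clients(profiles, k=5)
print(chosen)  # indices of the representative devices, one per non-empty cluster
```

Selecting a representative per cluster keeps each FL round's participant set small and diverse, which is one plausible way clustering could cut both communication energy and the number of rounds to convergence.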