Kolmogorov-Arnold Networks (KANs), inspired by the Kolmogorov-Arnold representation theorem, have shown promising capabilities in modeling complex nonlinear relationships.
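For reference, the representation theorem states that any continuous multivariate function on a bounded domain decomposes into sums and compositions of continuous univariate functions:

$$ f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right) $$

KANs replace the fixed activations of an MLP with learnable univariate functions in the spirit of the $\phi_{q,p}$.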
Experiments comparing KANs to traditional Multilayer Perceptrons (MLPs) within federated learning (FL) frameworks across diverse datasets demonstrate that KANs outperform MLPs in accuracy, stability, and convergence efficiency.
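As a point of reference for these comparisons, the sketch below shows the server-side weighted averaging (FedAvg) step common to such FL experiments; the helper name `fedavg` and the dict-of-arrays parameter format are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of one FedAvg aggregation step, assuming each client
# model (KAN or MLP) exposes its parameters as a dict of NumPy arrays.
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter dicts (weights = dataset sizes)."""
    total = sum(client_sizes)
    return {
        k: sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
        for k in client_params[0]
    }

# Usage: average two clients' parameters for a single shared layer.
clients = [{"w": np.ones((2, 2))}, {"w": 3 * np.ones((2, 2))}]
global_params = fedavg(clients, client_sizes=[100, 300])  # -> w filled with 2.5
```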
KANs exhibit robustness under varying numbers of clients and non-IID data distributions, maintaining superior performance even as client heterogeneity increases.
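One common way to simulate such non-IID distributions is a Dirichlet label partition; the sketch below is an assumption about the setup, not necessarily the paper's exact protocol. Smaller `alpha` yields more heterogeneous clients.

```python
# Partition sample indices across clients with a per-class Dirichlet prior.
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class c assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = np.random.default_rng(1).integers(0, 10, size=1000)
parts = dirichlet_partition(labels, n_clients=8, alpha=0.1)  # highly non-IID
```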
KANs require fewer communication rounds to converge than MLPs in federated learning scenarios, indicating greater communication efficiency. Trimmed-mean aggregation and FedProx are effective strategies for optimizing KAN performance in FL tasks.
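A minimal sketch of both strategies follows, assuming parameters stored as dicts of NumPy arrays (`trim_k` and `mu` are illustrative hyperparameter names, not values from the paper): trimmed mean drops extreme client values coordinate-wise before averaging, while FedProx adds a proximal term to each client's local loss.

```python
import numpy as np

def trimmed_mean(client_params, trim_k=1):
    """Coordinate-wise mean after dropping the trim_k largest and smallest
    client values, which damps outlier (or adversarial) updates."""
    agg = {}
    for key in client_params[0]:
        stacked = np.sort(np.stack([p[key] for p in client_params]), axis=0)
        agg[key] = stacked[trim_k : len(client_params) - trim_k].mean(axis=0)
    return agg

def fedprox_loss(local_loss, local_params, global_params, mu=0.01):
    """FedProx: local objective plus (mu/2) * ||w - w_global||^2, which keeps
    client updates close to the global model under heterogeneous data."""
    prox = sum(np.sum((local_params[k] - global_params[k]) ** 2)
               for k in local_params)
    return local_loss + 0.5 * mu * prox
```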