Kolmogorov-Arnold Networks (KANs) offer a novel framework for function approximation by replacing the fixed weights of traditional neural networks with learnable univariate functions.
To reduce the memory footprint and training cost of KANs, a smaller meta-learner, MetaKANs, is proposed to generate the weights of KANs.
By training KANs and MetaKANs jointly in an end-to-end differentiable manner, MetaKANs achieve comparable or superior performance with fewer trainable parameters.
Experiments on various tasks show that MetaKANs can enhance parameter efficiency and reduce memory usage, providing a more scalable and cost-effective training method for KANs.
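The core idea can be illustrated with a minimal sketch: a small meta-learner maps a learnable prompt vector to the coefficient tensor of a KAN-style layer, and only the meta-learner's parameters are optimized, end to end, through the generated weights. The code below is an illustrative assumption, not the authors' implementation: the polynomial-plus-SiLU basis, the class names (`GeneratedKANLayer`, `MetaLearner`), and the toy regression task are all hypothetical stand-ins for the actual MetaKAN design.

```python
# Minimal sketch (assumed, simplified): a meta-learner generates the per-edge
# coefficients of a KAN-style layer; both are trained end-to-end by backprop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneratedKANLayer(nn.Module):
    """KAN-style layer whose edge functions are weighted sums of K fixed basis
    functions; the (out_dim, in_dim, K) coefficient tensor is supplied at
    forward time by the meta-learner instead of being stored as parameters."""
    def __init__(self, in_dim, out_dim, num_basis=4):
        super().__init__()
        self.in_dim, self.out_dim, self.num_basis = in_dim, out_dim, num_basis

    def basis(self, x):                      # x: (batch, in_dim)
        # Simple fixed basis [x, x^2, x^3, silu(x)] -- an assumption, not the
        # B-spline basis used in the original KAN formulation.
        return torch.stack([x, x**2, x**3, F.silu(x)], dim=-1)

    def forward(self, x, coeffs):            # coeffs: (out_dim, in_dim, K)
        phi = self.basis(x)                  # (batch, in_dim, K)
        # Each output sums learnable univariate functions of each input.
        return torch.einsum('bik,oik->bo', phi, coeffs)

class MetaLearner(nn.Module):
    """Small MLP mapping a learnable prompt vector to a layer's coefficient
    tensor, so the KAN's 'weights' are generated rather than stored."""
    def __init__(self, prompt_dim, out_dim, in_dim, num_basis=4, hidden=32):
        super().__init__()
        self.shape = (out_dim, in_dim, num_basis)
        self.prompt = nn.Parameter(torch.randn(prompt_dim) * 0.1)
        self.net = nn.Sequential(
            nn.Linear(prompt_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim * in_dim * num_basis),
        )

    def forward(self):
        return self.net(self.prompt).view(self.shape)

# Joint end-to-end training on a toy regression task: only the meta-learner
# holds trainable parameters; gradients flow through the generated weights.
layer = GeneratedKANLayer(in_dim=2, out_dim=1)
meta = MetaLearner(prompt_dim=8, out_dim=1, in_dim=2)
opt = torch.optim.Adam(meta.parameters(), lr=1e-2)

x = torch.rand(256, 2)
y = torch.sin(3.14 * x[:, :1]) + x[:, 1:] ** 2
for step in range(500):
    pred = layer(x, meta())                  # coefficients generated on the fly
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because a single compact meta-learner can generate coefficients for many edges (or layers), the number of trainable parameters scales with the meta-learner rather than with the KAN itself, which is the source of the parameter and memory savings described above.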