Sparse shrunk additive models and sparse random feature models have been developed separately as methods for learning low-order functions, i.e., functions that depend on only a few interactions between variables.
Inspired by the success of iterative magnitude pruning (IMP) in finding lottery tickets in neural networks, a new method called Sparser Random Feature Models via IMP (ShRIMP) is proposed to efficiently fit high-dimensional data with sparse variable dependencies.
ShRIMP combines two processes: constructing random feature models and finding sparse lottery tickets in two-layer dense networks.
Experimental results show that ShRIMP attains test accuracy better than or comparable to other sparse-feature and additive methods, while performing feature selection at low computational cost.
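To make the pruning idea concrete, the following is a minimal sketch of iterative magnitude pruning applied to a random feature regression model. The cosine feature map, ridge solver, and 50% pruning schedule are illustrative assumptions for exposition, not the exact ShRIMP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends on only 2 of 10 input variables,
# i.e., it has sparse variable dependencies.
n, d = 200, 10
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# Dense random feature layer: phi(x) = cos(Wx + b) with random W, b.
m = 500
W = rng.standard_normal((m, d))
b = rng.uniform(0, 2 * np.pi, m)
active = np.ones(m, dtype=bool)  # mask over features (the "ticket")

def fit(mask):
    """Ridge-regress the output weights over the unpruned features."""
    Phi = np.cos(X @ W[mask].T + b[mask])
    A = Phi.T @ Phi + 1e-3 * np.eye(mask.sum())
    return np.linalg.solve(A, Phi.T @ y)

# IMP loop: fit, prune the smallest-magnitude output weights, refit.
for _ in range(5):
    c = fit(active)
    keep = np.abs(c) >= np.quantile(np.abs(c), 0.5)  # drop bottom half
    idx = np.flatnonzero(active)
    active[idx[~keep]] = False

c = fit(active)
print(active.sum())  # number of surviving random features
```

Because only the output-layer weights are refit at each round, each pruning step costs a single small ridge solve, which is where the low computational complexity of this style of method comes from.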